Sample records for probability weighting function

  1. Psychophysics of the probability weighting function

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2011-03-01

    A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics have widely utilized probability weighting functions, the psychophysical foundations of these functions have remained unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (with 0 < α < 1, w(0) = 0, w(1/e) = 1/e, and w(1) = 1), which has been studied extensively in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. The relations between the parameters in the probability weighting function and the probability discounting function in behavioral psychology are also derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
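    As a worked illustration of the formula above, the minimal Python sketch below evaluates Prelec's one-parameter weighting function w(p) = exp(-(-ln p)^α); the value α = 0.65 is an illustrative choice, not a parameter from the paper.

    ```python
    import math

    def prelec_w(p: float, alpha: float = 0.65) -> float:
        """Prelec's one-parameter probability weighting function."""
        if p <= 0.0:
            return 0.0
        if p >= 1.0:
            return 1.0
        return math.exp(-((-math.log(p)) ** alpha))

    # w(1/e) = 1/e for every alpha; small p are overweighted, large p underweighted.
    for p in (0.01, 0.1, 1 / math.e, 0.5, 0.9, 0.99):
        print(f"w({p:.3f}) = {prelec_w(p):.3f}")
    ```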

  2. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

    A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to show probability weighting functions from the point of view of waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, the probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to measure the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of the individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
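    A quick numerical sketch of the expected-value weighting function stated above, w(p) = (1 - k log p)^(-1); the value k = 1.0 is purely illustrative, not an estimate from the study.

    ```python
    import math

    def hyperbolic_w(p: float, k: float = 1.0) -> float:
        """Probability weighting function derived from hyperbolic time discounting."""
        return 1.0 / (1.0 - k * math.log(p))

    # w(1) = 1, while small probabilities are overweighted (w(0.05) ~ 0.25 for k = 1).
    for p in (0.05, 0.25, 0.5, 0.75, 1.0):
        print(f"w({p:.2f}) = {hyperbolic_w(p):.3f}")
    ```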

  3. Uncertainty plus Prior Equals Rational Bias: An Intuitive Bayesian Probability Weighting Function

    ERIC Educational Resources Information Center

    Fennell, John; Baddeley, Roland

    2012-01-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several…

  4. Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization

    PubMed Central

    Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.

    2014-01-01

    Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
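    For reference, the two forms named above as most common best-fitting, Prelec-2 and Linear in Log Odds, are sketched below using their standard parameterizations; the parameter values are illustrative and are not the paper's estimates.

    ```python
    import math

    def prelec2(p: float, alpha: float = 0.6, beta: float = 1.0) -> float:
        """Two-parameter Prelec form: w(p) = exp(-beta * (-ln p)^alpha)."""
        if p <= 0.0:
            return 0.0
        if p >= 1.0:
            return 1.0
        return math.exp(-beta * (-math.log(p)) ** alpha)

    def lin_log_odds(p: float, gamma: float = 0.6, delta: float = 0.8) -> float:
        """Linear-in-log-odds form: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)."""
        num = delta * p ** gamma
        return num / (num + (1.0 - p) ** gamma)

    for p in (0.05, 0.5, 0.95):
        print(p, round(prelec2(p), 3), round(lin_log_odds(p), 3))
    ```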

  5. Uncertainty plus prior equals rational bias: an intuitive Bayesian probability weighting function.

    PubMed

    Fennell, John; Baddeley, Roland

    2012-10-01

    Empirical research has shown that when making choices based on probabilistic options, people behave as if they overestimate small probabilities, underestimate large probabilities, and treat positive and negative outcomes differently. These distortions have been modeled using a nonlinear probability weighting function, which is found in several nonexpected utility theories, including rank-dependent models and prospect theory; here, we propose a Bayesian approach to the probability weighting function and, with it, a psychological rationale. In the real world, uncertainty is ubiquitous and, accordingly, the optimal strategy is to combine probability statements with prior information using Bayes' rule. First, we show that any reasonable prior on probabilities leads to 2 of the observed effects: overweighting of low probabilities and underweighting of high probabilities. We then investigate 2 plausible kinds of priors: informative priors based on previous experience and uninformative priors of ignorance. Individually, these priors potentially lead to large problems of bias and inefficiency, respectively; however, when combined using Bayesian model comparison methods, both forms of prior can be applied adaptively, gaining the efficiency of empirical priors and the robustness of ignorance priors. We illustrate this for the simple case of generic good and bad options, using Internet blogs to estimate the relevant priors of inference. Given this combined ignorant/informative prior, the Bayesian probability weighting function is not only robust and efficient but also matches all of the major characteristics of the distortions found in empirical research. PsycINFO Database Record (c) 2012 APA, all rights reserved.
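    A toy illustration of the core claim above (not the authors' model): shrinking a stated probability toward a prior mean, in Bayes-rule fashion, produces overweighting of low and underweighting of high probabilities. The uniform Beta(1, 1) prior and the pseudo-sample size below are illustrative assumptions.

    ```python
    def shrunk_weight(p: float, n_equiv: float = 10.0, a: float = 1.0, b: float = 1.0) -> float:
        """Posterior-mean style shrinkage of a stated probability toward a Beta(a, b) prior."""
        return (a + n_equiv * p) / (a + b + n_equiv)

    # Low stated probabilities are pulled up, high ones pulled down.
    for p in (0.01, 0.1, 0.5, 0.9, 0.99):
        print(f"stated p = {p:.2f} -> effective weight = {shrunk_weight(p):.3f}")
    ```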

  6. Decision making generalized by a cumulative probability weighting function

    NASA Astrophysics Data System (ADS)

    dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto

    2018-01-01

    Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller but more immediate reward and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments have confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk seeking, and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). In this article we obtain a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already well established in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Beyond a mere generalization, our function allows probabilistic decision making theories to be interpreted under the assumption that individuals behave similarly in the face of probabilities and delays, and it is supported by phenomenological models.

  7. Computation of the Complex Probability Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trainer, Amelia Jo; Ledwith, Patrick John

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements on the Gauss-Hermite quadrature for the complex probability function.
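    For orientation, the sketch below applies an n-point Gauss-Hermite rule to the integral representation w(z) = (i/π) ∫ exp(-t²)/(z - t) dt, valid for Im z > 0, and compares it with SciPy's Faddeeva routine; the node count n = 20 is an illustrative choice, not a recommendation from the report.

    ```python
    import numpy as np
    from scipy.special import wofz  # reference Faddeeva implementation for comparison

    def faddeeva_gauss_hermite(z: complex, n: int = 20) -> complex:
        """Approximate the complex probability function w(z), Im(z) > 0, by Gauss-Hermite quadrature."""
        nodes, weights = np.polynomial.hermite.hermgauss(n)
        return 1j / np.pi * np.sum(weights / (z - nodes))

    z = 1.0 + 0.5j
    print(faddeeva_gauss_hermite(z), wofz(z))
    ```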

  8. Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards

    PubMed Central

    Ackermann, John F.; Landy, Michael S.

    2014-01-01

    Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wcopt). Subjects’ criteria were not close to optimal relative to wcopt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
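    A simplified sketch of the comparison described above (not the authors' exact model): expected gain EG(c) and a prospect-theory subjective utility SU(c) are computed as functions of the decision criterion c for a two-location detection task. The sensitivity d', the location prior, the rewards, and the weighting and value parameters are illustrative assumptions.

    ```python
    import math

    def norm_cdf(x: float) -> float:
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def correct_rates(c: float, d_prime: float = 1.5):
        # Decision variable ~ N(+d'/2, 1) when the target is at location 1 and
        # ~ N(-d'/2, 1) when at location 2; respond "location 1" when it exceeds c.
        return 1.0 - norm_cdf(c - d_prime / 2.0), norm_cdf(c + d_prime / 2.0)

    def EG(c: float, p1: float = 0.75, r1: float = 1.0, r2: float = 2.0) -> float:
        h1, h2 = correct_rates(c)
        return p1 * r1 * h1 + (1.0 - p1) * r2 * h2

    def SU(c: float, p1: float = 0.75, r1: float = 1.0, r2: float = 2.0,
           gamma: float = 0.6, a: float = 0.88) -> float:
        w = lambda p: p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)
        h1, h2 = correct_rates(c)
        return w(p1) * r1**a * h1 + w(1.0 - p1) * r2**a * h2

    grid = [i / 100.0 - 2.0 for i in range(401)]
    print("criterion maximizing EG:", max(grid, key=EG))
    print("criterion maximizing SU:", max(grid, key=SU))
    ```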

  9. Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards.

    PubMed

    Ackermann, John F; Landy, Michael S

    2015-02-01

    Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wcopt). Subjects' criteria were not close to optimal relative to wcopt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.

  10. Sensitivity of feedforward neural networks to weight errors

    NASA Technical Reports Server (NTRS)

    Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney

    1990-01-01

    An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).

  11. Economic Choices Reveal Probability Distortion in Macaque Monkeys

    PubMed Central

    Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-01-01

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. PMID:25698750
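    In the spirit of the parametric choice modeling described above (not the authors' exact specification), the sketch below fits a one-parameter Prelec weighting function to choices between a gamble and a safe reward via a softmax choice rule; the data set, the power value function, and all parameter values are illustrative.

    ```python
    import math

    def subjective_value(m: float, p: float, alpha: float, rho: float) -> float:
        w = math.exp(-((-math.log(p)) ** alpha)) if 0.0 < p < 1.0 else float(p >= 1.0)
        return w * m ** rho

    def p_choose_gamble(m: float, p: float, s: float, alpha: float,
                        rho: float = 1.0, temp: float = 5.0) -> float:
        dv = subjective_value(m, p, alpha, rho) - s ** rho
        return 1.0 / (1.0 + math.exp(-temp * dv))

    # Illustrative trials: (gamble magnitude, win probability, safe reward, chose gamble?)
    data = [(0.5, 0.25, 0.2, 1), (0.5, 0.25, 0.2, 0), (0.5, 0.9, 0.4, 1), (0.5, 0.5, 0.3, 1)]

    def nll(alpha: float) -> float:
        total = 0.0
        for m, p, s, y in data:
            q = p_choose_gamble(m, p, s, alpha)
            total -= math.log(q if y else 1.0 - q)
        return total

    # Grid-search maximum likelihood estimate of the distortion parameter.
    alpha_hat = min((a / 100.0 for a in range(20, 150)), key=nll)
    print(f"alpha_hat = {alpha_hat:.2f}")
    ```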

  12. Economic choices reveal probability distortion in macaque monkeys.

    PubMed

    Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-02-18

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.

  13. Money, kisses, and electric shocks: on the affective psychology of risk.

    PubMed

    Rottenstreich, Y; Hsee, C K

    2001-05-01

    Prospect theory's S-shaped weighting function is often said to reflect the psychophysics of chance. We propose an affective rather than psychophysical deconstruction of the weighting function resting on two assumptions. First, preferences depend on the affective reactions associated with potential outcomes of a risky choice. Second, even with monetary values controlled, some outcomes are relatively affect-rich and others relatively affect-poor. Although the psychophysical and affective approaches are complementary, the affective approach has one novel implication: Weighting functions will be more S-shaped for lotteries involving affect-rich than affect-poor outcomes. That is, people will be more sensitive to departures from impossibility and certainty but less sensitive to intermediate probability variations for affect-rich outcomes. We corroborated this prediction by observing probability-outcome interactions: An affect-poor prize was preferred over an affect-rich prize under certainty, but the direction of preference reversed under low probability. We suggest that the assumption of probability-outcome independence, adopted by both expected-utility and prospect theory, may hold across outcomes of different monetary values, but not different affective values.

  14. Numeracy moderates the influence of task-irrelevant affect on probability weighting.

    PubMed

    Traczyk, Jakub; Fulawka, Kamil

    2016-06-01

    Statistical numeracy, defined as the ability to understand and process statistical and probability information, plays a significant role in superior decision making. However, recent research has demonstrated that statistical numeracy goes beyond simple comprehension of numbers and mathematical operations. In contrast to previous studies that focused on emotions integral to risky prospects, we hypothesized that highly numerate individuals would exhibit more linear probability weighting because they would be less biased by incidental and decision-irrelevant affect. Participants were instructed to make a series of insurance decisions preceded by negative (i.e., fear-inducing) or neutral stimuli. We found that incidental negative affect increased the curvature of the probability weighting function (PWF). Interestingly, this effect was significant only for less numerate individuals, while probability weighting in more numerate people was not altered by decision-irrelevant affect. We propose two candidate mechanisms for the observed effect. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Music-evoked incidental happiness modulates probability weighting during risky lottery choices

    PubMed Central

    Schulreich, Stefan; Heussen, Yana G.; Gerhardt, Holger; Mohr, Peter N. C.; Binkofski, Ferdinand C.; Koelsch, Stefan; Heekeren, Hauke R.

    2014-01-01

    We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that also the weighting of known probabilities in decision making under risk might be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music—happy, sad, or no music, or sequences of random tones—and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the “happy” than in the “sad” and “random tones” conditions. Via structural regressions based on CPT, we found that the observed changes in participants' choices can be attributed to changes in the elevation parameter of the probability weighting function: in the “happy” condition, participants showed significantly higher decision weights associated with the larger payoffs than in the “sad” and “random tones” conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting. PMID:24432007

  16. Music-evoked incidental happiness modulates probability weighting during risky lottery choices.

    PubMed

    Schulreich, Stefan; Heussen, Yana G; Gerhardt, Holger; Mohr, Peter N C; Binkofski, Ferdinand C; Koelsch, Stefan; Heekeren, Hauke R

    2014-01-07

    We often make decisions with uncertain consequences. The outcomes of the choices we make are usually not perfectly predictable but probabilistic, and the probabilities can be known or unknown. Probability judgments, i.e., the assessment of unknown probabilities, can be influenced by evoked emotional states. This suggests that also the weighting of known probabilities in decision making under risk might be influenced by incidental emotions, i.e., emotions unrelated to the judgments and decisions at issue. Probability weighting describes the transformation of probabilities into subjective decision weights for outcomes and is one of the central components of cumulative prospect theory (CPT) that determine risk attitudes. We hypothesized that music-evoked emotions would modulate risk attitudes in the gain domain and in particular probability weighting. Our experiment featured a within-subject design consisting of four conditions in separate sessions. In each condition, the 41 participants listened to a different kind of music-happy, sad, or no music, or sequences of random tones-and performed a repeated pairwise lottery choice task. We found that participants chose the riskier lotteries significantly more often in the "happy" than in the "sad" and "random tones" conditions. Via structural regressions based on CPT, we found that the observed changes in participants' choices can be attributed to changes in the elevation parameter of the probability weighting function: in the "happy" condition, participants showed significantly higher decision weights associated with the larger payoffs than in the "sad" and "random tones" conditions. Moreover, elevation correlated positively with self-reported music-evoked happiness. Thus, our experimental results provide evidence in favor of a causal effect of incidental happiness on risk attitudes that can be explained by changes in probability weighting.

  17. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling

    PubMed Central

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323

  18. Modeling the effect of reward amount on probability discounting.

    PubMed

    Myerson, Joel; Green, Leonard; Morris, Joshua

    2011-03-01

    The present study with college students examined the effect of amount on the discounting of probabilistic monetary rewards. A hyperboloid function accurately described the discounting of hypothetical rewards ranging in amount from $20 to $10,000,000. The degree of discounting increased continuously with amount of probabilistic reward. This effect of amount was not due to changes in the rate parameter of the discounting function, but rather was due to increases in the exponent. These results stand in contrast to those observed with the discounting of delayed monetary rewards, in which the degree of discounting decreases with reward amount due to amount-dependent decreases in the rate parameter. Taken together, this pattern of results suggests that delay and probability discounting reflect different underlying mechanisms. That is, the fact that the exponent in the delay discounting function is independent of amount is consistent with a psychophysical scaling interpretation, whereas the finding that the exponent of the probability-discounting function is amount-dependent is inconsistent with such an interpretation. Instead, the present results are consistent with the idea that the probability-discounting function is itself the product of a value function and a weighting function. This idea was first suggested by Kahneman and Tversky (1979), although their prospect theory does not predict amount effects like those observed. The effect of amount on probability discounting was parsimoniously incorporated into our hyperboloid discounting function by assuming that the exponent was proportional to the amount raised to a power. The amount-dependent exponent of the probability-discounting function may be viewed as reflecting the effect of amount on the weighting of the probability with which the reward will be received.
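    As a sketch of the mechanism described above, the standard hyperboloid probability-discounting form V = A/(1 + h·Θ)^s with odds-against Θ = (1 - p)/p can be given an amount-dependent exponent s = c·A^b, so larger amounts are discounted more steeply; the functional form and parameter values below are illustrative assumptions, not the paper's fits.

    ```python
    def discounted_value(A: float, p: float, h: float = 1.0, c: float = 1.0, b: float = 0.1) -> float:
        theta = (1.0 - p) / p      # odds against receiving the reward
        s = c * A ** b             # amount-dependent exponent
        return A / (1.0 + h * theta) ** s

    # Larger probabilistic rewards are discounted more steeply (smaller fraction retained).
    for A in (20.0, 10_000.0):
        v = discounted_value(A, p=0.5)
        print(f"amount {A:>8.0f}: discounted fraction = {v / A:.3f}")
    ```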

  19. Neural response to reward anticipation under risk is nonlinear in probabilities.

    PubMed

    Hsu, Ming; Krajbich, Ian; Zhao, Chen; Camerer, Colin F

    2009-02-18

    A widely observed phenomenon in decision making under risk is the apparent overweighting of unlikely events and the underweighting of nearly certain events. This violates standard assumptions in expected utility theory, which requires that expected utility be linear (objective) in probabilities. Models such as prospect theory have relaxed this assumption and introduced the notion of a "probability weighting function," which captures the key properties found in experimental data. This study reports functional magnetic resonance imaging (fMRI) data showing that the neural response to expected reward is nonlinear in probabilities. Specifically, we found that activity in the striatum during valuation of monetary gambles is nonlinear in probabilities in the pattern predicted by prospect theory, suggesting that probability distortion is reflected at the level of the reward encoding process. The degree of nonlinearity reflected in individual subjects' decisions is also correlated with striatal activity across subjects. Our results shed light on the neural mechanisms of reward processing, and have implications for future neuroscientific studies of decision making involving extreme tails of the distribution, where probability weighting provides an explanation for commonly observed behavioral anomalies.

  20. Exaggerated risk: prospect theory and probability weighting in risky choice.

    PubMed

    Kusev, Petko; van Schaik, Paul; Ayton, Peter; Dent, John; Chater, Nick

    2009-11-01

    In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and Kahneman's (1992) prospect theory, we found that the weighting function required to model precautionary decisions differed from that required for monetary gambles. This result indicates a failure of the descriptive invariance axiom of expected utility theory. For precautionary decisions, people overweighted small, medium-sized, and moderately large probabilities-they exaggerated risks. This effect is not anticipated by prospect theory or experience-based decision research (Hertwig, Barron, Weber, & Erev, 2004). We found evidence that exaggerated risk is caused by the accessibility of events in memory: The weighting function varies as a function of the accessibility of events. This suggests that people's experiences of events leak into decisions even when risk information is explicitly provided. Our findings highlight a need to investigate how variation in decision content produces variation in preferences for risk.

  1. Convergence analyses on on-line weight noise injection-based training algorithms for MLPs.

    PubMed

    Sum, John; Leung, Chi-Sing; Ho, Kevin

    2012-11-01

    Injecting weight noise during training is a simple technique that has been proposed for almost two decades. However, little is known about its convergence behavior. This paper studies the convergence of two weight noise injection-based training algorithms, multiplicative weight noise injection with weight decay and additive weight noise injection with weight decay. We consider that they are applied to multilayer perceptrons with either linear or sigmoid output nodes. Let w(t) be the weight vector, let V(w) be the corresponding objective function of the training algorithm, let α > 0 be the weight decay constant, and let μ(t) be the step size. We show that if μ(t) → 0, then with probability one E[||w(t)||_2^2] is bounded and lim_{t→∞} ||w(t)||_2 exists. Based on these two properties, we show that if μ(t) → 0, Σ_t μ(t) = ∞, and Σ_t μ(t)^2 < ∞, then with probability one these algorithms converge. Moreover, w(t) converges with probability one to a point where ∇_w V(w) = 0.
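    A minimal sketch (illustrative, not the paper's exact algorithm) of one on-line update with multiplicative weight noise injection plus weight decay, using a step size μ(t) = μ0/(t + 1) that satisfies the conditions quoted above (μ(t) → 0, Σ μ(t) = ∞, Σ μ(t)² < ∞).

    ```python
    import random

    def noisy_update(w, grad_fn, t, mu0=0.1, alpha=1e-3, sigma=0.05):
        """One step of multiplicative weight-noise-injection training with weight decay."""
        mu_t = mu0 / (t + 1)                                            # decaying step size
        w_noisy = [wi * (1.0 + random.gauss(0.0, sigma)) for wi in w]   # multiplicative noise
        g = grad_fn(w_noisy)                                            # gradient at perturbed weights
        return [wi - mu_t * (gi + alpha * wi) for wi, gi in zip(w, g)]  # descent plus weight decay

    # Example: descend the simple quadratic objective V(w) = ||w||^2 / 2, whose gradient is w itself.
    w = [1.0, -2.0]
    for t in range(1000):
        w = noisy_update(w, lambda wv: wv, t)
    print(w)
    ```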

  2. [Weight loss in overweight or obese patients and family functioning].

    PubMed

    Jaramillo-Sánchez, Rosalba; Espinosa-de Santillana, Irene; Espíndola-Jaramillo, Ilia Angélica

    2012-01-01

    To determine the association between weight loss and family functioning, a cohort of 168 persons with overweight or obesity, aged 20-49 years, of either sex and with no comorbidity, was studied at the nutrition department. Sociodemographic data were obtained and the FACES III instrument was applied to measure family functioning. At the third month the body mass index was reassessed. Descriptive statistical analysis and relative risk estimation were performed. Obesity was present in 50.6 %, of whom 59.53 % did not lose weight. Family dysfunction was present in 56.6 %, of whom 50 % did not lose weight. Of the 43.4 % from functional families, 9.52 % did not lose weight (p = 0.001). The relative risk of not losing weight for those belonging to a dysfunctional family was 4.03 (CI = 2.60-6.25). A significant association was found between weight loss and family functioning. Belonging to a dysfunctional family may be a risk factor for not losing weight.

  3. Risk-taking in disorders of natural and drug rewards: neural correlates and effects of probability, valence, and magnitude.

    PubMed

    Voon, Valerie; Morris, Laurel S; Irvine, Michael A; Ruck, Christian; Worbe, Yulia; Derbyshire, Katherine; Rankov, Vladan; Schreiber, Liana Rn; Odlaug, Brian L; Harrison, Neil A; Wood, Jonathan; Robbins, Trevor W; Bullmore, Edward T; Grant, Jon E

    2015-03-01

    Pathological behaviors toward drugs and food rewards have underlying commonalities. Risk-taking has a fourfold pattern varying as a function of probability and valence leading to the nonlinearity of probability weighting with overweighting of small probabilities and underweighting of large probabilities. Here we assess these influences on risk-taking in patients with pathological behaviors toward drug and food rewards and examine structural neural correlates of nonlinearity of probability weighting in healthy volunteers. In the anticipation of rewards, subjects with binge eating disorder show greater risk-taking, similar to substance-use disorders. Methamphetamine-dependent subjects had greater nonlinearity of probability weighting along with impaired subjective discrimination of probability and reward magnitude. Ex-smokers also had lower risk-taking to rewards compared with non-smokers. In the anticipation of losses, obesity without binge eating had a similar pattern to other substance-use disorders. Obese subjects with binge eating also have impaired discrimination of subjective value similar to that of the methamphetamine-dependent subjects. Nonlinearity of probability weighting was associated with lower gray matter volume in dorsolateral and ventromedial prefrontal cortex and orbitofrontal cortex in healthy volunteers. Our findings support a distinct subtype of binge eating disorder in obesity with similarities in risk-taking in the reward domain to substance use disorders. The results dovetail with the current approach of defining mechanistically based dimensional approaches rather than categorical approaches to psychiatric disorders. The relationship to risk probability and valence may underlie the propensity toward pathological behaviors toward different types of rewards.

  4. Risk-Taking in Disorders of Natural and Drug Rewards: Neural Correlates and Effects of Probability, Valence, and Magnitude

    PubMed Central

    Voon, Valerie; Morris, Laurel S; Irvine, Michael A; Ruck, Christian; Worbe, Yulia; Derbyshire, Katherine; Rankov, Vladan; Schreiber, Liana RN; Odlaug, Brian L; Harrison, Neil A; Wood, Jonathan; Robbins, Trevor W; Bullmore, Edward T; Grant, Jon E

    2015-01-01

    Pathological behaviors toward drugs and food rewards have underlying commonalities. Risk-taking has a fourfold pattern varying as a function of probability and valence leading to the nonlinearity of probability weighting with overweighting of small probabilities and underweighting of large probabilities. Here we assess these influences on risk-taking in patients with pathological behaviors toward drug and food rewards and examine structural neural correlates of nonlinearity of probability weighting in healthy volunteers. In the anticipation of rewards, subjects with binge eating disorder show greater risk-taking, similar to substance-use disorders. Methamphetamine-dependent subjects had greater nonlinearity of probability weighting along with impaired subjective discrimination of probability and reward magnitude. Ex-smokers also had lower risk-taking to rewards compared with non-smokers. In the anticipation of losses, obesity without binge eating had a similar pattern to other substance-use disorders. Obese subjects with binge eating also have impaired discrimination of subjective value similar to that of the methamphetamine-dependent subjects. Nonlinearity of probability weighting was associated with lower gray matter volume in dorsolateral and ventromedial prefrontal cortex and orbitofrontal cortex in healthy volunteers. Our findings support a distinct subtype of binge eating disorder in obesity with similarities in risk-taking in the reward domain to substance use disorders. The results dovetail with the current approach of defining mechanistically based dimensional approaches rather than categorical approaches to psychiatric disorders. The relationship to risk probability and valence may underlie the propensity toward pathological behaviors toward different types of rewards. PMID:25270821

  5. Minimum Expected Risk Estimation for Near-neighbor Classification

    DTIC Science & Technology

    2006-04-01

    We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) ... estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant ... the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights

  6. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, which are then used to weight the subregions' matching scores and generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  7. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on scale invariant feature transformation (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete some redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently obtain the different subregions' weights, which are then used to weight the subregions' matching scores and generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  8. Fracture mechanics analysis of cracked structures using weight function and neural network method

    NASA Astrophysics Data System (ADS)

    Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.

    2018-06-01

    Stress intensity factors (SIFs) due to thermal-mechanical loads have been established using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated using data from the literature and show good agreement. Thus, the SIFs can be determined quickly with the obtained weight function when cracks are subjected to arbitrary loads, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characterization of K_I can be obtained using the developed method. The probability of failure increases with increasing load, and the relationship between them is nonlinear.

  9. Computer Vision Tracking Using Particle Filters for 3D Position Estimation

    DTIC Science & Technology

    2014-03-27

    Notation from the report: π denotes the proposal distribution, ω the importance weights, and δ the Dirac delta function [2, p. 178]. The target distribution p(x) is approximated by a weighted sum of Dirac delta functions at the particle locations, p(x) = Σ_{n=1}^{N} ω_n δ(x − x_n) (2.14), with importance weights ω_n ∝ p(x)/π(x) (2.15).
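    A minimal importance-sampling sketch of the particle approximation recovered above; the target and proposal densities are arbitrary illustrative choices.

    ```python
    import math, random

    def target_pdf(x: float) -> float:      # p(x): standard normal (illustrative)
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def proposal_pdf(x: float) -> float:    # pi(x): wider normal, sd = 2 (illustrative)
        return math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2.0 * math.pi))

    particles = [random.gauss(0.0, 2.0) for _ in range(10_000)]
    raw = [target_pdf(x) / proposal_pdf(x) for x in particles]   # omega_n proportional to p(x_n)/pi(x_n)
    total = sum(raw)
    weights = [w / total for w in raw]                           # normalized importance weights
    # Weighted estimate of E[x^2] under p; should be close to 1 for a standard normal target.
    print(sum(w * x * x for w, x in zip(weights, particles)))
    ```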

  10. Economic decision-making compared with an equivalent motor task.

    PubMed

    Wu, Shih-Wei; Delgado, Mauricio R; Maloney, Laurence T

    2009-04-14

    There is considerable evidence that human economic decision-making deviates from the predictions of expected utility theory (EUT) and that human performance conforms to EUT in many perceptual and motor decision tasks. It is possible that these results reflect a real difference in decision-making in the 2 domains but it is also possible that the observed discrepancy simply reflects typical differences in experimental design. We developed a motor task that is mathematically equivalent to choosing between lotteries and used it to compare how the same subject chose between classical economic lotteries and the same lotteries presented in equivalent motor form. In experiment 1, we found that subjects are more risk seeking in deciding between motor lotteries. In experiment 2, we used cumulative prospect theory to model choice and separately estimated the probability weighting functions and the value functions for each subject carrying out each task. We found no patterned differences in how subjects represented outcome value in the motor and the classical tasks. However, the probability weighting functions for motor and classical tasks were markedly and significantly different. Those for the classical task showed a typical tendency to overweight small probabilities and underweight large probabilities, and those for the motor task showed the opposite pattern of probability distortion. This outcome also accounts for the increased risk-seeking observed in the motor tasks of experiment 1. We conclude that the same subject distorts probability, but not value, differently in making identical decisions in motor and classical form.

  11. Signal-processing analysis of the MC2823 radar fuze: an addendum concerning clutter effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jelinek, D.A.

    1978-07-01

    A detailed analysis of the signal processing of the MC2823 radar fuze was published by Thompson in 1976 which enabled the computation of dud probability versus signal-to-noise ratio where the noise was receiver noise. An addendum to Thompson's work was published by Williams in 1978 that modified the weighting function used by Thompson. The analysis presented herein extends the work of Thompson to include the effects of clutter (the non-signal portion of the echo from a terrain) using the new weighting function. This extension enables computation of dud probability versus signal-to-total-noise ratio where total noise is the sum of the receiver-noise power and the clutter power.

  12. A Weighted Configuration Model and Inhomogeneous Epidemics

    NASA Astrophysics Data System (ADS)

    Britton, Tom; Deijfen, Maria; Liljeros, Fredrik

    2011-12-01

    A random graph model with prescribed degree distribution and degree dependent edge weights is introduced. Each vertex is independently equipped with a random number of half-edges and each half-edge is assigned an integer valued weight according to a distribution that is allowed to depend on the degree of its vertex. Half-edges with the same weight are then paired randomly to create edges. An expression for the threshold for the appearance of a giant component in the resulting graph is derived using results on multi-type branching processes. The same technique also gives an expression for the basic reproduction number for an epidemic on the graph where the probability that a certain edge is used for transmission is a function of the edge weight (reflecting how closely 'connected' the corresponding vertices are). It is demonstrated that, if vertices with large degree tend to have large (small) weights on their edges and if the transmission probability increases with the edge weight, then it is easier (harder) for the epidemic to take off compared to a randomized epidemic with the same degree and weight distribution. A recipe for calculating the probability of a large outbreak in the epidemic and the size of such an outbreak is also given. Finally, the model is fitted to three empirical weighted networks of importance for the spread of contagious diseases and it is shown that R0 can be substantially over- or underestimated if the correlation between degree and weight is not taken into account.
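    An illustrative sketch of the construction described above: each vertex receives a random number of half-edges, each half-edge an integer weight drawn from a degree-dependent distribution, and half-edges of equal weight are paired uniformly at random (self-loops and multi-edges are possible, as in any configuration model). The degree and weight distributions below are arbitrary choices for illustration.

    ```python
    import random
    from collections import defaultdict

    def weighted_configuration_model(n: int = 1000):
        half_edges = defaultdict(list)                 # weight -> list of incident vertices
        for v in range(n):
            degree = random.choice([1, 2, 3, 4])
            for _ in range(degree):
                # Toy degree-dependent weight: high-degree vertices favor weight 2.
                weight = 2 if random.random() < degree / 5.0 else 1
                half_edges[weight].append(v)
        edges = []
        for weight, stubs in half_edges.items():
            random.shuffle(stubs)
            stubs = stubs[: len(stubs) // 2 * 2]       # drop one leftover stub if the count is odd
            edges += [(a, b, weight) for a, b in zip(stubs[0::2], stubs[1::2])]
        return edges

    print(len(weighted_configuration_model()))
    ```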

  13. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.

  14. A computer tool for a minimax criterion in binary response and heteroscedastic simple linear regression models.

    PubMed

    Casero-Alonso, V; López-Fidalgo, J; Torsney, B

    2017-01-01

    Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Prospect Theory and Coercive Bargaining

    ERIC Educational Resources Information Center

    Butler, Christopher K.

    2007-01-01

    Despite many applications of prospect theory's concepts to explain political and strategic phenomena, formal analyses of strategic problems using prospect theory are rare. Using Fearon's model of bargaining, Tversky and Kahneman's value function, and an existing probability weighting function, I construct a model that demonstrates the differences…

  16. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to consider heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  17. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    PubMed

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.

  18. The internal consistency of the standard gamble: tests after adjusting for prospect theory.

    PubMed

    Oliver, Adam

    2003-07-01

    This article reports a study that tests whether the internal consistency of the standard gamble can be improved upon by incorporating loss weighting and probability transformation parameters in the standard gamble valuation procedure. Five alternatives to the standard EU formulation are considered: (1) probability transformation within an EU framework; and, within a prospect theory framework, (2) loss weighting and full probability transformation, (3) no loss weighting and full probability transformation, (4) loss weighting and no probability transformation, and (5) loss weighting and partial probability transformation. Of the five alternatives, only the prospect theory formulation with loss weighting and no probability transformation offers an improvement in internal consistency over the standard EU valuation procedure.

  19. Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data

    NASA Astrophysics Data System (ADS)

    Glüsenkamp, Thorsten

    2018-06-01

    Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. Typical applications are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function F_D, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average R_n with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.

  20. New paradoxes of risky decision making.

    PubMed

    Birnbaum, Michael H

    2008-04-01

    During the last 25 years, prospect theory and its successor, cumulative prospect theory, replaced expected utility as the dominant descriptive theories of risky decision making. Although these models account for the original Allais paradoxes, 11 new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions. The new findings are consistent with and, in several cases, were predicted in advance by simple "configural weight" models in which probability-consequence branches are weighted by a function that depends on branch probability and ranks of consequences on discrete branches. Although they have some similarities to later models called "rank-dependent utility," configural weight models do not satisfy coalescing, the assumption that branches leading to the same consequence can be combined by adding their probabilities. Nor do they satisfy cancellation, the "independence" assumption that branches common to both alternatives can be removed. The transfer of attention exchange model, with parameters estimated from previous data, correctly predicts results with all 11 new paradoxes. Apparently, people do not frame choices as prospects but, instead, as trees with branches.

  1. Tensor distribution function

    NASA Astrophysics Data System (ADS)

    Leow, Alex D.; Zhu, Siwei

    2008-03-01

    Diffusion weighted MR imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitizing gradients along a minimum of 6 directions, second-order tensors (represented by 3-by-3 positive definite matrices) can be computed to model dominant diffusion processes. However, it has been shown that conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g. crossing fiber tracts. More recently, High Angular Resolution Diffusion Imaging (HARDI) seeks to address this issue by employing more than 6 gradient directions. To account for fiber crossing when analyzing HARDI data, several methodologies have been introduced. For example, q-ball imaging was proposed to approximate the orientation distribution function (ODF). Similarly, the PAS method seeks to resolve the angular structure of displacement probability functions using the maximum entropy principle. Alternatively, deconvolution methods extract multiple fiber tracts by computing fiber orientations using a pre-specified single fiber response function. In this study, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric and positive definite matrices. Using calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the ODF can easily be computed by analytical integration of the resulting displacement probability function. Moreover, principal fiber directions can also be directly derived from the TDF.

  2. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performances of the weighted Kaplan-Meier estimate on finite samples exceed those of the usual Kaplan-Meier estimate. A case study is also presented.
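
    For orientation, the following is a minimal sketch of the ordinary (unweighted) Kaplan-Meier estimate that the adaptively weighted version builds on; the adaptive weights of the paper are not reproduced, and the toy data are illustrative.

```python
# Plain Kaplan-Meier survival estimate from right-censored data (illustrative).
import numpy as np

def kaplan_meier(time, event):
    """Return distinct event times and the KM survival estimate S(t)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, s = [], 1.0
    uniq = np.unique(time[event == 1])
    for t in uniq:
        at_risk = np.sum(time >= t)                   # subjects still under observation
        deaths = np.sum((time == t) & (event == 1))   # events at this time
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

t = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])
e = np.array([1, 1, 0, 1, 0, 1])                      # 1 = observed event, 0 = censored
print(kaplan_meier(t, e))
```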

  3. The tensor distribution function.

    PubMed

    Leow, A D; Zhu, S; Zhan, L; McMahon, K; de Zubicaray, G I; Meredith, M; Wright, M J; Toga, A W; Thompson, P M

    2009-01-01

    Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.

  4. Exploring activity-driven network with biased walks

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Wu, Ding Juan; Lv, Fang; Su, Meng Long

    We investigate the concurrent dynamics of biased random walks and the activity-driven network, where the preferential transition probability is expressed in terms of an edge-weighting parameter. We obtain analytical expressions for the stationary distribution and the coverage function in both directed and undirected networks, all of which depend on this weight parameter. By adjusting the parameter appropriately, a more effective search strategy can be obtained than with the unbiased random walk, whether in directed or undirected networks, since network weights play a significant role in the diffusion process.
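
    The idea of biasing transitions by an edge-weighting parameter can be sketched on a generic weighted graph as below; the activity-driven model and the analytical expressions of the paper are not reproduced, and the graph, weights and coverage measure are illustrative assumptions.

```python
# Biased random walk: transition probability to a neighbour is proportional to
# (edge weight)**alpha; alpha = 0 recovers the unbiased walk.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
g = nx.erdos_renyi_graph(200, 0.05, seed=1)
for u, v in g.edges:
    g[u][v]["weight"] = rng.uniform(0.1, 1.0)         # illustrative edge weights

def biased_walk_coverage(g, alpha, steps=1000, start=0):
    """Number of distinct nodes visited by a length-`steps` biased walk."""
    visited, node = {start}, start
    for _ in range(steps):
        nbrs = list(g.neighbors(node))
        if not nbrs:
            break
        w = np.array([g[node][v]["weight"] ** alpha for v in nbrs])
        node = rng.choice(nbrs, p=w / w.sum())
        visited.add(int(node))
    return len(visited)

for alpha in (0.0, 1.0, -1.0):
    print(alpha, biased_walk_coverage(g, alpha))
```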

  5. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity, and it is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies adopt the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are a kind of important joint component in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
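
    A generic, hedged sketch of the mixture-proposal idea is given below for a one-dimensional state-space model: the importance density mixes the state-transition density with a measurement-centred density, and the importance weights are corrected accordingly. The crack-growth model, the guided-wave measurements and the exact proposal of the paper are not reproduced; every parameter below is an illustrative assumption.

```python
# Particle filter with a mixture importance density (transition vs measurement-centred).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
Q, R, drift, beta = 0.05, 0.5, 0.1, 0.5          # process/measurement variance, drift, mixture weight
T, N = 50, 500                                   # time steps, particles

x_true = np.cumsum(drift + rng.normal(0, np.sqrt(Q), T))   # toy "damage growth" truth
y = x_true + rng.normal(0, np.sqrt(R), T)                  # noisy measurements

particles, weights, est = np.zeros(N), np.full(N, 1.0 / N), []
for k in range(T):
    prior_mean = particles + drift
    # Sample the proposal: with prob. (1-beta) from a measurement-centred density,
    # otherwise from the transition density.
    from_meas = rng.random(N) < (1 - beta)
    prop = np.where(from_meas,
                    rng.normal(y[k], np.sqrt(R), N),
                    rng.normal(prior_mean, np.sqrt(Q), N))
    q = beta * norm.pdf(prop, prior_mean, np.sqrt(Q)) + \
        (1 - beta) * norm.pdf(prop, y[k], np.sqrt(R))
    # Importance weight: likelihood * transition / proposal.
    weights *= norm.pdf(y[k], prop, np.sqrt(R)) * \
               norm.pdf(prop, prior_mean, np.sqrt(Q)) / q
    weights /= weights.sum()
    est.append(np.sum(weights * prop))
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, N, p=weights)
        prop, weights = prop[idx], np.full(N, 1.0 / N)
    particles = prop

print("final estimate %.3f, truth %.3f" % (est[-1], x_true[-1]))
```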

  6. Prediction of radiation-induced normal tissue complications in radiotherapy using functional image data

    NASA Astrophysics Data System (ADS)

    Nioutsikou, Elena; Partridge, Mike; Bedford, James L.; Webb, Steve

    2005-03-01

    The aim of this study has been to explicitly include the functional heterogeneity of an organ as a factor that contributes to the probability of complication of normal tissues following radiotherapy. Situations for which the inclusion of this information can be advantageous to the design of treatment plans are then investigated. A Java program has been implemented for this purpose. This makes use of a voxelated model of a patient, which is based on registered anatomical and functional data in order to enable functional voxel weighting. Using this model, the functional dose-volume histogram (fDVH) and the functional normal tissue complication probability (fNTCP) are then introduced as extensions to the conventional dose-volume histogram (DVH) and normal tissue complication probability (NTCP). In the presence of functional heterogeneity, these tools are physically more meaningful for plan evaluation than the traditional indices, as they incorporate additional information and are anticipated to show a better correlation with outcome. New parameters mf, nf and TD50f are required to replace the m, n and TD50 parameters. A range of plausible values was investigated, awaiting fitting of these new parameters to patient outcomes where functional data have been measured. As an example, the model is applied to two lung datasets utilizing accurately registered computed tomography (CT) and single photon emission computed tomography (SPECT) perfusion scans. Assuming a linear perfusion-function relationship, the biological index mean perfusion weighted lung dose (MPWLD) has been extracted from integration over outlined regions of interest. In agreement with the MPWLD ranking, the fNTCP predictions reveal that incorporation of functional imaging in radiotherapy treatment planning is most beneficial for organs with a large volume effect and large focal areas of dysfunction. There is, however, no additional advantage in cases presenting with homogeneous function. Although presented for lung radiotherapy, this model is general. It can also be applied to positron emission tomography (PET)-CT or functional magnetic resonance imaging (fMRI)-CT registered data and extended to the functional description of tumour control probability.
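
    One plausible reading of a functionally weighted complication model, sketched below under the standard Lyman-Kutcher-Burman probit form: voxel doses are combined into a generalized equivalent uniform dose whose voxel weights come from a functional (e.g. perfusion) image. The parameter values and the exact fNTCP definition (mf, nf, TD50f) used in the paper may differ; everything here is illustrative.

```python
# Functionally weighted NTCP under the Lyman-Kutcher-Burman form (illustrative).
import numpy as np
from scipy.stats import norm

def functional_ntcp(dose, func, n=1.0, m=0.3, td50=30.0):
    """dose: voxel doses (Gy); func: non-negative functional weights per voxel."""
    w = func / func.sum()                          # normalised functional weights
    geud = np.sum(w * dose ** (1.0 / n)) ** n      # functionally weighted gEUD
    return norm.cdf((geud - td50) / (m * td50))

rng = np.random.default_rng(3)
dose = rng.uniform(0, 60, 10000)                   # illustrative voxel dose distribution
perf = rng.gamma(2.0, 1.0, 10000)                  # illustrative perfusion map
print("uniform weighting  :", functional_ntcp(dose, np.ones_like(dose)))
print("perfusion weighting:", functional_ntcp(dose, perf))
```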

  7. Combined loading criterial influence on structural performance

    NASA Technical Reports Server (NTRS)

    Kuchta, B. J.; Sealey, D. M.; Howell, L. J.

    1972-01-01

    An investigation was conducted to determine the influence of combined loading criteria on the space shuttle structural performance. The study consisted of four primary phases: Phase (1) The determination of the sensitivity of structural weight to various loading parameters associated with the space shuttle. Phase (2) The determination of the sensitivity of structural weight to various levels of loading parameter variability and probability. Phase (3) The determination of shuttle mission loading parameters variability and probability as a function of design evolution and the identification of those loading parameters where inadequate data exists. Phase (4) The determination of rational methods of combining both deterministic time varying and probabilistic loading parameters to provide realistic design criteria. The study results are presented.

  8. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude probable negative values of the optical variable. The Pomraning-Eddington approximation is used first to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both the Gaussian and modified Gaussian probability density functions at different degrees of polarization.

  9. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner made of composite and metallic materials. Design is formulated for an accepted level of risk or reliability. The design variables, weight and the constraints became functions of reliability. Uncertainties in the load, strength and the material properties, as well as the design variables, were modeled as random parameters with specified distributions, like normal, Weibull or Gumbel functions. The objective function and constraint, or a failure mode, became derived functions of the risk-level. Solution to the problem produced the optimum design with weight, variables and constraints as a function of the risk-level. Optimum weight versus reliability traced out an inverted-S shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design could be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.

  10. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and current site-selection methods have several defects. First, information loss arises from two sources: the implicit assumption that the probability distribution on the interval number is uniform, and ignoring the value of decision makers' (DMs') common opinion in evaluating the criteria information. Secondly, differences in the DMs' utility functions have received little attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  11. Explanation of asymmetric dynamics of human water consumption in arid regions: prospect theory versus expected utility theory

    NASA Astrophysics Data System (ADS)

    Tian, F.; Lu, Y.

    2017-12-01

    Based on socioeconomic and hydrological data from three arid inland basins and an error analysis, the dynamics of human water consumption (HWC) are found to be asymmetric: HWC increases rapidly in wet periods but remains steady or decreases only slightly in dry periods. Beyond the qualitative explanation that abundant water availability in wet periods spurs HWC to grow fast, while the now-expanded economy is sustained by over-exploitation in dry periods, two quantitative models are established and tested, based on expected utility theory (EUT) and prospect theory (PT), respectively. EUT states that humans make decisions based on the total expected utility, namely the sum of the utility of each outcome multiplied by its probability, whereas PT states that the utility function is defined over gains and losses separately and that objective probability is replaced by a probability weighting function.
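
    The contrast between the two models can be made concrete with a small sketch using common textbook forms: expected utility sums utility times objective probability, while prospect theory applies a value function over gains and losses and a probability weighting function such as Prelec's w(p) = exp(-(-ln p)^α). The parameter values and the simple two-outcome gamble are illustrative, not the calibration used in the study.

```python
# Expected utility vs a simple prospect-theory valuation (illustrative parameters).
import numpy as np

def expected_utility(outcomes, probs, utility=np.sqrt):
    return np.sum(probs * utility(outcomes))

def prelec_weight(p, alpha=0.65):
    """Prelec probability weighting function w(p) = exp(-(-ln p)**alpha)."""
    return np.exp(-(-np.log(p)) ** alpha)

def prospect_value(outcomes, probs, alpha_v=0.88, lam=2.25, alpha_w=0.65):
    mag = np.abs(outcomes) ** alpha_v
    v = np.where(outcomes >= 0, mag, -lam * mag)   # loss aversion for negative outcomes
    return np.sum(prelec_weight(probs, alpha_w) * v)

outcomes = np.array([100.0, 0.0])                  # a simple two-outcome gamble
probs = np.array([0.05, 0.95])
print("EUT:", expected_utility(outcomes, probs))
print("PT :", prospect_value(outcomes, probs))
```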

  12. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  13. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  14. The Difference Calculus and the Negative Binomial Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Kimiko o; Shenton, LR

    2007-01-01

    In a previous paper we state the dominant term in the third central moment of the maximum likelihood estimator k̂ of the parameter k in the negative binomial probability function, whose probability generating function is (p + 1 - pt)^{-k}. A partial sum of the series Σ 1/(k + x)^3 is involved, where x is a negative binomial random variate. In expectation this sum can only be found numerically using the computer. Here we give a simple definite integral on (0,1) for the generalized case. This means that we now have a valid expression for √β₁₁(k) and √β₁₁(p). In addition we use the finite difference operator Δ, and E = 1 + Δ, to set up formulas for low-order moments. Other examples of the operators are quoted relating to the orthogonal set of polynomials associated with the negative binomial probability function used as a weight function.

  15. Singular solution of the Feller diffusion equation via a spectral decomposition.

    PubMed

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  16. Singular solution of the Feller diffusion equation via a spectral decomposition

    NASA Astrophysics Data System (ADS)

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  17. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  18. Statistical methods for incomplete data: Some results on model misspecification.

    PubMed

    McIsaac, Michael; Cook, R J

    2017-02-01

    Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve, complete case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
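
    The flavour of such comparisons can be reproduced with a minimal simulation, sketched below: an outcome is missing at random given a covariate, and the complete-case mean is compared with an inverse-probability-weighted mean whose missingness model is fit by logistic regression. The data-generating choices are illustrative and much simpler than the breast cancer trial designs examined in the paper.

```python
# Complete-case vs inverse-probability-weighted estimation under MAR missingness.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)
y = 1.0 + x + rng.normal(size=n)                   # true mean of y is 1.0
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))     # observation probability depends on x
r = rng.binomial(1, p_obs)                         # r = 1 if y observed

cc_mean = y[r == 1].mean()                         # complete-case estimate (biased here)

pi = LogisticRegression().fit(x.reshape(-1, 1), r).predict_proba(x.reshape(-1, 1))[:, 1]
ipw_mean = np.sum((r / pi) * y) / np.sum(r / pi)   # Hajek-type IPW estimate

print(f"complete case: {cc_mean:.3f}   IPW: {ipw_mean:.3f}   truth: 1.000")
```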

  19. A modified weighted function method for parameter estimation of Pearson type three distribution

    NASA Astrophysics Data System (ADS)

    Liang, Zhongmin; Hu, Yiming; Li, Binquan; Yu, Zhongbo

    2014-04-01

    In this paper, an unconventional method called Modified Weighted Function (MWF) is presented for the conventional moment estimation of a probability distribution function. The aim of MWF is to estimate the coefficient of variation (CV) and coefficient of skewness (CS) from the original higher moment computations to the first-order moment calculations. The estimators for CV and CS of Pearson type three distribution function (PE3) were derived by weighting the moments of the distribution with two weight functions, which were constructed by combining two negative exponential-type functions. The selection of these weight functions was based on two considerations: (1) to relate weight functions to sample size in order to reflect the relationship between the quantity of sample information and the role of weight function and (2) to allocate more weights to data close to medium-tail positions in a sample series ranked in an ascending order. A Monte-Carlo experiment was conducted to simulate a large number of samples upon which statistical properties of MWF were investigated. For the PE3 parent distribution, results of MWF were compared to those of the original Weighted Function (WF) and Linear Moments (L-M). The results indicate that MWF was superior to WF and slightly better than L-M, in terms of statistical unbiasness and effectiveness. In addition, the robustness of MWF, WF, and L-M were compared by designing the Monte-Carlo experiment that samples are obtained from Log-Pearson type three distribution (LPE3), three parameter Log-Normal distribution (LN3), and Generalized Extreme Value distribution (GEV), respectively, but all used as samples from the PE3 distribution. The results show that in terms of statistical unbiasness, no one method possesses the absolutely overwhelming advantage among MWF, WF, and L-M, while in terms of statistical effectiveness, the MWF is superior to WF and L-M.

  20. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    NASA Astrophysics Data System (ADS)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, therefore, a non-stationary regression model is preferred instead of a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign weights to local independent variables of the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events.
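
    A hedged sketch of the locally weighted logistic regression idea described above: at a prediction location, an ordinary logistic regression is fit in which each observed site is down-weighted by a kernel (here tricube) of its distance to that location, so the coefficients can vary over space. The toy data, bandwidth and kernel choice are illustrative, not the Koiliaris analysis.

```python
# Locally weighted logistic regression via distance-based sample weights (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(40, 2))          # site coordinates
X = rng.normal(size=(40, 2))                       # e.g. bank slope, cross-section width
true_p = 1 / (1 + np.exp(-(X[:, 1] - 0.5 * X[:, 0])))
y = rng.binomial(1, true_p)                        # erosion present / absent

def tricube(d, bandwidth):
    u = np.clip(d / bandwidth, 0, 1)
    return (1 - u ** 3) ** 3

def lwlr_predict(x0, coord0, bandwidth=5.0):
    """Erosion probability at covariates x0 for a location coord0."""
    d = np.linalg.norm(coords - coord0, axis=1)
    w = tricube(d, bandwidth)                      # spatial weights for the local fit
    model = LogisticRegression().fit(X, y, sample_weight=w)
    return model.predict_proba(np.asarray(x0).reshape(1, -1))[0, 1]

print("erosion probability:", lwlr_predict([0.2, 1.0], coord0=np.array([5.0, 5.0])))
```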

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antolin, J.; Instituto Carlos I de Fisica Teorica y Computacional, Universidad de Granada, ES-18071 Granada; Bouvrie, P. A.

    An alternative one-parameter measure of divergence is proposed, quantifying the discrepancy among general probability densities. Its main mathematical properties include (i) comparison among an arbitrary number of functions, (ii) the possibility of assigning different weights to each function according to its relevance in the comparative procedure, and (iii) the ability to modify the relative contribution of different regions within the domain. Applications to the study of atomic density functions, in both conjugated spaces, show the versatility and universality of this divergence.

  2. Dopaminergic Drug Effects on Probability Weighting during Risky Decision Making.

    PubMed

    Ojala, Karita E; Janssen, Lieneke K; Hashemi, Mahur M; Timmer, Monique H M; Geurts, Dirk E M; Ter Huurne, Niels P; Cools, Roshan; Sescousse, Guillaume

    2018-01-01

    Dopamine has been associated with risky decision-making, as well as with pathological gambling, a behavioral addiction characterized by excessive risk-taking behavior. However, the specific mechanisms through which dopamine might act to foster risk-taking and pathological gambling remain elusive. Here we test the hypothesis that this might be achieved, in part, via modulation of subjective probability weighting during decision making. Human healthy controls (n = 21) and pathological gamblers (n = 16) played a decision-making task involving choices between sure monetary options and risky gambles both in the gain and loss domains. Each participant played the task twice, either under placebo or the dopamine D2/D3 receptor antagonist sulpiride, in a double-blind counterbalanced design. A prospect theory modelling approach was used to estimate subjective probability weighting and sensitivity to monetary outcomes. Consistent with prospect theory, we found that participants presented a distortion in the subjective weighting of probabilities, i.e., they overweighted low probabilities and underweighted moderate to high probabilities, both in the gain and loss domains. Compared with placebo, sulpiride attenuated this distortion in the gain domain. Across drugs, the groups did not differ in their probability weighting, although gamblers consistently underweighted losing probabilities in the placebo condition. Overall, our results reveal that dopamine D2/D3 receptor antagonism modulates the subjective weighting of probabilities in the gain domain, in the direction of more objective, economically rational decision making.

  3. Spreadsheet-Based Program for Simulating Atomic Emission Spectra

    ERIC Educational Resources Information Center

    Flannigan, David J.

    2014-01-01

    A simple Excel spreadsheet-based program for simulating atomic emission spectra from the properties of neutral atoms (e.g., energies and statistical weights of the electronic states, electronic partition functions, transition probabilities, etc.) is described. The contents of the spreadsheet (i.e., input parameters, formulas for calculating…
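
    The kind of calculation such a spreadsheet performs can be sketched in a few lines: relative emission intensities from upper-level energies, statistical weights and transition probabilities, assuming a Boltzmann population and dropping constant factors. The level data below are invented for illustration.

```python
# Relative emission line intensities from a Boltzmann-weighted level population.
import numpy as np

k_B = 8.617333262e-5                               # Boltzmann constant, eV/K
T = 5000.0                                         # excitation temperature, K

E_u = np.array([2.10, 3.05, 3.60])                 # upper-level energies (eV), illustrative
g_u = np.array([3, 5, 7])                          # statistical weights, illustrative
A_ul = np.array([6.0e7, 2.0e7, 9.0e6])             # transition probabilities (s^-1), illustrative

Z = np.sum(g_u * np.exp(-E_u / (k_B * T)))         # (partial) electronic partition function
intensity = g_u * A_ul * np.exp(-E_u / (k_B * T)) / Z
print(intensity / intensity.max())                 # relative line intensities
```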

  4. Quantitative quasiperiodicity

    NASA Astrophysics Data System (ADS)

    Das, Suddhasattwa; Saiki, Yoshitaka; Sander, Evelyn; Yorke, James A.

    2017-11-01

    The Birkhoff ergodic theorem concludes that time averages, i.e. Birkhoff averages, B_N(f) := (1/N) Σ_{n=0}^{N-1} f(x_n), of a function f along a length-N ergodic trajectory (x_n) of a map T converge to the space average ∫ f dμ, where μ is the unique invariant probability measure. Convergence of the time average to the space average is slow. We use a modified average of f(x_n), giving very small weights to the 'end' terms when n is near 0 or N-1. When (x_n) is a trajectory on a quasiperiodic torus and f and T are C^∞, our weighted Birkhoff average (denoted …
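
    A minimal numerical sketch of the weighted-average idea: terms near the ends of the trajectory receive exponentially small weights through a smooth bump function, which accelerates convergence to the space average for smooth observables on a quasiperiodic orbit. The particular bump function and the circle-rotation test map below are illustrative choices, not necessarily those of the paper.

```python
# Weighted Birkhoff average with C-infinity bump weights on an irrational rotation.
import numpy as np

def bump(t):
    """Smooth weight, zero (with all derivatives) at t = 0 and t = 1."""
    w = np.zeros_like(t)
    inside = (t > 0) & (t < 1)
    w[inside] = np.exp(-1.0 / (t[inside] * (1.0 - t[inside])))
    return w

N = 10_000
rho = (np.sqrt(5) - 1) / 2                          # irrational rotation number
x = (np.arange(N) * rho) % 1.0                      # trajectory of the circle rotation
f = np.cos(2 * np.pi * x)                           # space average over the circle is 0

plain = f.mean()                                    # ordinary Birkhoff average
w = bump((np.arange(N) + 0.5) / N)
weighted = np.sum(w * f) / np.sum(w)                # end-weighted Birkhoff average
print(f"plain Birkhoff: {plain:.2e}   weighted: {weighted:.2e}")
```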

  5. Model Considerations for Memory-based Automatic Music Transcription

    NASA Astrophysics Data System (ADS)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library, weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning, and many methods for estimating the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.

  6. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although they serve the same purpose, individual satellite products give different values because of their inescapable uncertainties. Satellite products have also been generated over a long period, and the kinds of products are various and enormous, so efforts to reduce the uncertainty and to handle these large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We use Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using the posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained as a weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA for ensembles of satellite data. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
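
    A hedged sketch of the BMA combination step: each satellite member contributes a normal predictive density centred on its retrieval, and the member weights (posterior probabilities) and a common variance are estimated by EM against matchup observations. The synthetic data and the normal-kernel, common-variance choices are illustrative assumptions.

```python
# EM estimation of BMA weights and spread for combining ensemble members.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n, K = 300, 3
truth = 20 + rng.normal(0, 1.5, n)                   # "true" SST at matchup points
bias = np.array([0.3, -0.2, 0.8])
members = truth[:, None] + bias + rng.normal(0, [0.4, 0.6, 1.2], (n, K))
obs = truth + rng.normal(0, 0.1, n)                   # in-situ observations

w, sigma = np.full(K, 1.0 / K), 1.0
for _ in range(100):                                  # EM iterations
    dens = w * norm.pdf(obs[:, None], members, sigma)
    z = dens / dens.sum(axis=1, keepdims=True)        # membership responsibilities
    w = z.mean(axis=0)                                # updated BMA weights
    sigma = np.sqrt(np.sum(z * (obs[:, None] - members) ** 2) / n)

bma_mean = members @ w                                # BMA point forecast
print("weights:", np.round(w, 3), " sigma:", round(sigma, 3))
```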

  7. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.

  8. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
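
    A sketch of the plain normal-density approach mentioned above (without truncation, t kernels or quantile binning): the denominator density comes from a linear model of the exposure given covariates, the numerator from the marginal exposure distribution, and their ratio gives stabilized weights. The homoscedastic-normal assumption and variable names are illustrative.

```python
# Stabilized inverse probability weights for a continuous exposure (normal densities).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 3000
x = rng.normal(size=(n, 2))                                     # confounders
a = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(0, 1.0, n)     # continuous exposure

# Denominator: f(A | X) from a linear regression of the exposure on the confounders.
fit = LinearRegression().fit(x, a)
resid = a - fit.predict(x)
dens_cond = norm.pdf(a, fit.predict(x), resid.std(ddof=x.shape[1] + 1))

# Numerator: marginal exposure density f(A).
dens_marg = norm.pdf(a, a.mean(), a.std(ddof=1))

sw = dens_marg / dens_cond                                      # stabilized weights
print("mean weight %.3f (should be close to 1), max %.2f" % (sw.mean(), sw.max()))
```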

  9. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and extracted features from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, prostate delineation using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, estimation and normalization of diffusion parameters, which are the apparent diffusion coefficients of the delineated prostate volumes at different b values followed by refinement of those apparent diffusion coefficients using a generalized Gaussian Markov random field model. Then, construction of the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicate the promising results of the presented computer-aided diagnostic system. PMID:29804518

  10. Elapsed decision time affects the weighting of prior probability in a perceptual decision task

    PubMed Central

    Hanks, Timothy D.; Mazurek, Mark E.; Kiani, Roozbeh; Hopp, Elizabeth; Shadlen, Michael N.

    2012-01-01

    Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (i) decisions that linger tend to arise from less reliable evidence, and (ii) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal cortex (LIP) of rhesus monkeys performing this task. PMID:21525274

  11. Elapsed decision time affects the weighting of prior probability in a perceptual decision task.

    PubMed

    Hanks, Timothy D; Mazurek, Mark E; Kiani, Roozbeh; Hopp, Elisabeth; Shadlen, Michael N

    2011-04-27

    Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (1) decisions that linger tend to arise from less reliable evidence, and (2) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal area (LIP) of rhesus monkeys performing this task.

  12. Dopaminergic Drug Effects on Probability Weighting during Risky Decision Making

    PubMed Central

    Timmer, Monique H. M.; ter Huurne, Niels P.

    2018-01-01

    Dopamine has been associated with risky decision-making, as well as with pathological gambling, a behavioral addiction characterized by excessive risk-taking behavior. However, the specific mechanisms through which dopamine might act to foster risk-taking and pathological gambling remain elusive. Here we test the hypothesis that this might be achieved, in part, via modulation of subjective probability weighting during decision making. Human healthy controls (n = 21) and pathological gamblers (n = 16) played a decision-making task involving choices between sure monetary options and risky gambles both in the gain and loss domains. Each participant played the task twice, either under placebo or the dopamine D2/D3 receptor antagonist sulpiride, in a double-blind counterbalanced design. A prospect theory modelling approach was used to estimate subjective probability weighting and sensitivity to monetary outcomes. Consistent with prospect theory, we found that participants presented a distortion in the subjective weighting of probabilities, i.e., they overweighted low probabilities and underweighted moderate to high probabilities, both in the gain and loss domains. Compared with placebo, sulpiride attenuated this distortion in the gain domain. Across drugs, the groups did not differ in their probability weighting, although gamblers consistently underweighted losing probabilities in the placebo condition. Overall, our results reveal that dopamine D2/D3 receptor antagonism modulates the subjective weighting of probabilities in the gain domain, in the direction of more objective, economically rational decision making. PMID:29632870

  13. Reduced brachial flow-mediated vasodilation in young adult ex extremely low birth weight preterm: a condition predictive of increased cardiovascular risk?

    PubMed

    Bassareo, P P; Fanos, V; Puddu, M; Demuru, P; Cadeddu, F; Balzarini, M; Mercuro, G

    2010-10-01

    Sporadic data in the literature report that preterm birth and low birth weight constitute risk factors for the development of cardiovascular disease in later life. The aim was to assess the presence of potential alterations in endothelial function in young adults born preterm at extremely low birth weight (<1000 g; ex-ELBW). Thirty-two ex-ELBW subjects (10 males [M] and 22 females [F], aged 17-28 years, mean [± SD] 20.1 ± 2.5 years) were compared with 32 healthy, age-matched subjects born at term (C, 9 M and 23 F). Exclusion criteria were: 1) pathological conditions known to affect endothelial function; 2) administration of drugs known to affect endothelial function. Endothelial function was assessed by non-invasive finger plethysmography, previously validated by the US Food and Drug Administration (Endopath; Itamar Medical Ltd., Cesarea, Israel). Endothelial function was significantly reduced in ex-ELBW subjects compared to C (1.94 ± 0.37 vs. 2.68 ± 0.41, p < 0.0001). Moreover, this function correlated significantly with gestational age (r = 0.56, p < 0.0009) and birth weight (r = 0.63, p < 0.0001). The results reveal a significant decrease in the endothelial function of ex-ELBW subjects compared to controls, underlining a probable correlation with preterm birth and low birth weight. Taken together, these results suggest that an ELBW may underlie the onset of early circulatory dysfunction predictive of increased cardiovascular risk.

  14. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory, and it has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and a sparse representation over orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map and geodesic have closed forms, and the weighted Frechet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Renyi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on the ODF field is proposed based on the weighted Frechet mean. We validate our methods on synthetic and real data experiments. Compared with existing Riemannian frameworks on ODFs, our framework is model-free. The estimation of the parameters, i.e. the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.

  15. Resampling probability values for weighted kappa with multiple raters.

    PubMed

    Mielke, Paul W; Berry, Kenneth J; Johnston, Janis E

    2008-04-01

    A new procedure to compute weighted kappa with multiple raters is described. A resampling procedure to compute approximate probability values for weighted kappa with multiple raters is presented. Applications of weighted kappa are illustrated with an example analysis of classifications by three independent raters.
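
    The resampling idea can be illustrated for the simpler two-rater case, sketched below: a linearly weighted kappa is computed and an approximate p-value is obtained by permuting one rater's labels, which breaks the association while preserving the marginals. The multi-rater statistic and the exact resampling scheme of the paper are not reproduced; the ratings are simulated for illustration.

```python
# Weighted kappa for two raters with a permutation-based approximate p-value.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(8)
rater1 = rng.integers(1, 5, 100)                              # ordinal categories 1-4
rater2 = np.clip(rater1 + rng.integers(-1, 2, 100), 1, 4)     # mostly agrees with rater 1

obs = cohen_kappa_score(rater1, rater2, weights="linear")
perm = np.array([cohen_kappa_score(rater1, rng.permutation(rater2), weights="linear")
                 for _ in range(2000)])
p_value = (np.sum(perm >= obs) + 1) / (len(perm) + 1)         # resampling p-value
print(f"weighted kappa {obs:.3f}, resampling p-value {p_value:.4f}")
```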

  16. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492

  17. Acute weight gain, gender, and therapeutic response to antipsychotics in the treatment of patients with schizophrenia

    PubMed Central

    Ascher-Svanum, Haya; Stensland, Michael; Zhao, Zhongyun; Kinon, Bruce J

    2005-01-01

    Background Previous research indicated that women are more vulnerable than men to adverse psychological consequences of weight gain. Other research has suggested that weight gain experienced during antipsychotic therapy may also psychologically impact women more negatively. This study assessed the impact of acute treatment-emergent weight gain on clinical and functional outcomes of patients with schizophrenia by patient gender and antipsychotic treatment (olanzapine or haloperidol). Methods Data were drawn from the acute phase (first 6-weeks) of a double-blind randomized clinical trial of olanzapine versus haloperidol in the treatment of 1296 men and 700 women with schizophrenia-spectrum disorders. The associations between weight change and change in core schizophrenia symptoms, depressive symptoms, and functional status were examined post-hoc for men and women and for each medication group. Core schizophrenia symptoms (positive and negative) were measured with the Brief Psychiatric Rating Scale (BPRS), depressive symptoms with the BPRS Anxiety/Depression Scale and the Montgomery-Asberg Depression Rating Scale, and functional status with the mental and physical component scores on the Medical Outcome Survey-Short Form 36. Statistical analysis included methods that controlled for treatment duration. Results Weight gain during 6-week treatment with olanzapine and haloperidol was significantly associated with improvements in core schizophrenia symptoms, depressive symptoms, mental functioning, and physical functioning for men and women alike. The conditional probability of clinical response (20% reduction in core schizophrenia symptom), given a clinically significant weight gain (at least 7% of baseline weight), showed that about half of the patients who lost weight responded to treatment, whereas three-quarters of the patients who had a clinically significant weight gain responded to treatment. The positive associations between therapeutic response and weight gain were similar for the olanzapine and haloperidol treatment groups. Improved outcomes were, however, more pronounced for the olanzapine-treated patients, and more olanzapine-treated patients gained weight. Conclusions The findings of significant relationships between treatment-emergent weight gain and improvements in clinical and functional status at 6-weeks suggest that patients who have greater treatment-emergent weight gain are more likely to benefit from treatment with olanzapine or haloperidol regardless of gender. PMID:15649317

  18. Performance of correlation receivers in the presence of impulse noise.

    NASA Technical Reports Server (NTRS)

    Moore, J. D.; Houts, R. C.

    1972-01-01

    An impulse noise model, which assumes that each noise burst contains a randomly weighted version of a basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. Unlike the performance results for additive white Gaussian noise, it is shown that the error performance for impulse noise is affected by the choice of signal-set basis function, and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy. Furthermore, it is demonstrated that the correlation-receiver error performance can be improved by inserting a properly specified nonlinear device prior to the receiver input.

  19. Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian

    2015-04-01

    The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in downstream applications such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic. Employing kriging with external drift to incorporate secondary information is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach of spatial random fields is applied. Within the mixing process, hourly quantile values are considered as equality constraints and correlations with elevation values are included as relationship constraints. To profit from the dependence of daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel smoothed distribution functions is compared with the interpolation of fitted parametric distributions.

  20. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
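
    To make the weighting scheme concrete, here is a rough numerical sketch (the variable names, the Gaussian kernel, and the Nelson-Aalen-style accumulation are illustrative assumptions, not the estimator's exact form): the probability that the censoring indicator is non-missing is estimated by kernel smoothing, and each observed indicator then contributes to the cumulative hazard with an inverse-probability-of-non-missingness weight.

    ```python
    import numpy as np

    def ipw_cumulative_hazard(times, delta, observed, bandwidth=0.5):
        """Sketch of an inverse-probability-of-non-missingness weighted cumulative
        hazard estimate. `delta` is the censoring indicator (1 = true failure) and
        `observed` flags whether delta is actually seen; both names are illustrative."""
        times = np.asarray(times, float)
        delta = np.asarray(delta, float)
        observed = np.asarray(observed, bool)

        def pi_hat(t):
            # kernel (Nadaraya-Watson) estimate of P(indicator observed | T = t)
            k = np.exp(-0.5 * ((times - t) / bandwidth) ** 2)
            return np.sum(k * observed) / np.sum(k)

        hazard, H = [], 0.0
        for i in np.argsort(times):
            at_risk = np.sum(times >= times[i])
            if observed[i]:
                H += delta[i] / (pi_hat(times[i]) * at_risk)   # weighted increment
            hazard.append((times[i], H))
        return hazard
    ```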

  1. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  2. Psychiatric Disorders and General Functioning in Low Birth Weight Adults: A Longitudinal Study.

    PubMed

    Lærum, Astrid M W; Reitan, Solveig Klæbo; Evensen, Kari Anne I; Lydersen, Stian; Brubakk, Ann-Mari; Skranes, Jon; Indredavik, Marit S

    2017-02-01

    To examine psychiatric morbidity and overall functioning in adults born with low birth weight compared with normal birth weight controls at age 26 years and to study longitudinal trajectories of psychiatric morbidity from early adolescence to adulthood. Prospective cohort study wherein 44 preterm very low birth weight (≤1500 g), 64 term small for gestational age (SGA; <10th percentile), and 81 control adults were examined using the MINI-International Neuropsychiatric Interview: M.I.N.I. Plus, Norwegian version, the Global Assessment of Functioning, and questions on daily occupation and level of education. Prevalence of psychiatric disorders from previous follow-ups at age 14 and 19 years were included for longitudinal analysis. From adolescence to adulthood, the term SGA group had a marked increase in the estimated probability of psychiatric disorders from 9% (95% confidence interval, 4-19) to 39% (95% confidence interval, 28-51). At 26 years, psychiatric diagnoses were significantly more prevalent in the preterm very low birth weight group (n = 16, 36%; P = .003) and the term SGA group (n = 24, 38%; P = .019) compared with the control group (n = 11, 14%). Both low birth weight groups had lower educational level and functioning scores than controls and a higher frequency of unemployment and disability benefit. Low birth weight was a substantial risk factor for adult psychiatric morbidity and lowered overall functioning. The results underscore the need for long-term follow-up of low birth weight survivors through adolescence and adulthood, focusing on mental health. The longitudinal increase in psychiatric morbidity in the term SGA group calls for additional investigation. Copyright © 2017 by the American Academy of Pediatrics.

  3. Lost in search: (Mal-)adaptation to probabilistic decision environments in children and adults.

    PubMed

    Betsch, Tilmann; Lehmann, Anne; Lindow, Stefanie; Lang, Anna; Schoemann, Martin

    2016-02-01

    Adaptive decision making in probabilistic environments requires individuals to use probabilities as weights in predecisional information searches and/or when making subsequent choices. Within a child-friendly computerized environment (Mousekids), we tracked 205 children's (105 children 5-6 years of age and 100 children 9-10 years of age) and 103 adults' (age range: 21-22 years) search behaviors and decisions under different probability dispersions (.17, .33, .83 vs. .50, .67, .83) and constraint conditions (instructions to limit search: yes vs. no). All age groups limited their depth of search when instructed to do so and when probability dispersion was high (range: .17-.83). Unlike adults, children failed to use probabilities as weights for their searches, which were largely not systematic. When examining choices, however, elementary school children (unlike preschoolers) systematically used probabilities as weights in their decisions. This suggests that an intuitive understanding of probabilities and the capacity to use them as weights during integration is not a sufficient condition for applying simple selective search strategies that place one's focus on weight distributions. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  4. Using Discrete Loss Functions and Weighted Kappa for Classification: An Illustration Based on Bayesian Network Analysis

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Lenaburg, Lubella

    2009-01-01

    In certain data analyses (e.g., multiple discriminant analysis and multinomial log-linear modeling), classification decisions are made based on the estimated posterior probabilities that individuals belong to each of several distinct categories. In the Bayesian network literature, this type of classification is often accomplished by assigning…

  5. DNS of moderate-temperature gaseous mixing layers laden with multicomponent-fuel drops

    NASA Technical Reports Server (NTRS)

    Clercq, P. C. Le; Bellan, J.

    2004-01-01

    A formulation representing multicomponent-fuel (MC-fuel) composition as a Probability Distribution Function (PDF) depending on the molar weight is used to construct a model of a large number of MC-fuel drops evaporating in a gas flow, so as to assess the extent of fuel specificity on the vapor composition.

  6. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.

  7. Impact of Competing Risk of Mortality on Association of Weight Loss with Risk of Central Body Fractures in Older Men: A Prospective Cohort Study

    PubMed Central

    Ensrud, Kristine E.; Harrison, Stephanie L.; Cauley, Jane A.; Langsetmo, Lisa; Schousboe, John T.; Kado, Deborah M.; Gourlay, Margaret L.; Lyons, Jennifer G.; Fredman, Lisa; Napoli, Nicolas; Crandall, Carolyn J.; Lewis, Cora E.; Orwoll, Eric S.; Stefanick, Marcia L.; Cawthon, Peggy M.

    2017-01-01

    To determine the association of weight loss with risk of clinical fractures at the hip, spine and pelvis (central body fractures [CBF]) in older men with and without accounting for the competing risk of mortality, we used data from 4,523 men (mean age 77.5 years). Weight change between baseline and follow-up (mean 4.5 years between examinations) was categorized as moderate loss (loss ≥10%), mild loss (loss 5% to <10%), stable (<5% change) or gain (gain ≥5%). Participants were contacted every 4 months after the follow-up examination to ascertain vital status (deaths verified by death certificates) and ask about fractures (confirmed by radiographic reports). Absolute probability of CBF by weight change category was estimated using traditional Kaplan-Meier method and cumulative incidence function accounting for competing mortality risk. Risk of CBF by weight change category was determined using conventional Cox proportional hazards regression and subdistribution hazards models with death as a competing risk. During an average of 8 years, 337 men (7.5%) experienced CBF and 1,569 (34.7%) died before experiencing this outcome. Among men with moderate weight loss, CBF probability was 6.8% at 5 years and 16.9% at 10 years using Kaplan-Meier vs. 5.7% at 5 years and 10.2% at 10 years using a competing risk approach. Men with moderate weight loss compared with those with stable weight had a 1.6-fold higher adjusted risk of CBF (HR 1.59, 95% CI 1.06–2.38) using Cox models that was substantially attenuated in models accounting for competing mortality risk and no longer significant (subdistribution HR 1.16, 95% CI 0.77–1.75). Results were similar in analyses substituting hip fracture for CBF. Older men with weight loss who survive are at increased risk of CBF, including hip fracture. However, ignoring the competing mortality risk among men with weight loss substantially overestimates their longterm fracture probability and relative fracture risk. PMID:27739103

  8. Impact of Competing Risk of Mortality on Association of Weight Loss With Risk of Central Body Fractures in Older Men: A Prospective Cohort Study.

    PubMed

    Ensrud, Kristine E; Harrison, Stephanie L; Cauley, Jane A; Langsetmo, Lisa; Schousboe, John T; Kado, Deborah M; Gourlay, Margaret L; Lyons, Jennifer G; Fredman, Lisa; Napoli, Nicolas; Crandall, Carolyn J; Lewis, Cora E; Orwoll, Eric S; Stefanick, Marcia L; Cawthon, Peggy M

    2017-03-01

    To determine the association of weight loss with risk of clinical fractures at the hip, spine, and pelvis (central body fractures [CBFs]) in older men with and without accounting for the competing risk of mortality, we used data from 4523 men (mean age 77.5 years). Weight change between baseline and follow-up (mean 4.5 years between examinations) was categorized as moderate loss (loss ≥10%), mild loss (loss 5% to <10%), stable (<5% change) or gain (gain ≥5%). Participants were contacted every 4 months after the follow-up examination to ascertain vital status (deaths verified by death certificates) and ask about fractures (confirmed by radiographic reports). Absolute probability of CBF by weight change category was estimated using traditional Kaplan-Meier method and cumulative incidence function accounting for competing mortality risk. Risk of CBF by weight change category was determined using conventional Cox proportional hazards regression and subdistribution hazards models with death as a competing risk. During an average of 8 years, 337 men (7.5%) experienced CBF and 1569 (34.7%) died before experiencing this outcome. Among men with moderate weight loss, CBF probability was 6.8% at 5 years and 16.9% at 10 years using Kaplan-Meier versus 5.7% at 5 years and 10.2% at 10 years using a competing risk approach. Men with moderate weight loss compared with those with stable weight had a 1.6-fold higher adjusted risk of CBF (HR 1.59; 95% CI, 1.06 to 2.38) using Cox models that was substantially attenuated in models accounting for competing mortality risk and no longer significant (subdistribution HR 1.16; 95% CI, 0.77 to 1.75). Results were similar in analyses substituting hip fracture for CBF. Older men with weight loss who survive are at increased risk of CBF, including hip fracture. However, ignoring the competing mortality risk among men with weight loss substantially overestimates their long-term fracture probability and relative fracture risk. © 2016 American Society for Bone and Mineral Research.

  9. [A graph cuts-based interactive method for segmentation of magnetic resonance images of meningioma].

    PubMed

    Li, Shuan-qiang; Feng, Qian-jin; Chen, Wu-fan; Lin, Ya-zhong

    2011-06-01

    For accurate segmentation of magnetic resonance (MR) images of meningioma, we propose a novel interactive segmentation method based on graph cuts. High-dimensional image features were extracted, and for each pixel the probabilities that it originated from the tumor or from the background region were estimated with a weighted K-nearest-neighbor classifier. Based on these probabilities, a new energy function was proposed. Finally, a graph-cut optimization framework was used to minimize the energy function. The proposed method was evaluated on the segmentation of MR images of meningioma, and the results showed that it significantly improved segmentation accuracy compared with the gray-level-based graph-cut method.
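
    As a rough illustration of the probability-estimation step, the sketch below uses a distance-weighted K-nearest-neighbor classifier to turn user-labeled seed pixels into per-pixel tumor probabilities and the corresponding unary (data) terms of a graph-cut energy; feature extraction, the pairwise smoothness term, and the min-cut solver itself are omitted, and all names are illustrative.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def pixel_probabilities(seed_features, seed_labels, all_features, k=15):
        """Estimate P(tumor) per pixel from labeled seed pixels with a
        distance-weighted KNN, then form the unary graph-cut costs."""
        knn = KNeighborsClassifier(n_neighbors=k, weights="distance")
        knn.fit(seed_features, seed_labels)        # labels: 1 = tumor, 0 = background
        p_tumor = knn.predict_proba(all_features)[:, 1]
        eps = 1e-9
        unary_tumor = -np.log(p_tumor + eps)       # data term for the "tumor" label
        unary_background = -np.log(1.0 - p_tumor + eps)
        return unary_tumor, unary_background
    ```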

  10. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous records from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan; the trained models discriminate anomalous data from normal data very clearly, assigning high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  11. Expert Financial Advice Neurobiologically “Offloads” Financial Decision-Making under Risk

    PubMed Central

    Engelmann, Jan B.; Capra, C. Monica; Noussair, Charles; Berns, Gregory S.

    2009-01-01

    Background Financial advice from experts is commonly sought during times of uncertainty. While the field of neuroeconomics has made considerable progress in understanding the neurobiological basis of risky decision-making, the neural mechanisms through which external information, such as advice, is integrated during decision-making are poorly understood. In the current experiment, we investigated the neurobiological basis of the influence of expert advice on financial decisions under risk. Methodology/Principal Findings While undergoing fMRI scanning, participants made a series of financial choices between a certain payment and a lottery. Choices were made in two conditions: 1) advice from a financial expert about which choice to make was displayed (MES condition); and 2) no advice was displayed (NOM condition). Behavioral results showed a significant effect of expert advice. Specifically, probability weighting functions changed in the direction of the expert's advice. This was paralleled by neural activation patterns. Brain activations showing significant correlations with valuation (parametric modulation by value of lottery/sure win) were obtained in the absence of the expert's advice (NOM) in intraparietal sulcus, posterior cingulate cortex, cuneus, precuneus, inferior frontal gyrus and middle temporal gyrus. Notably, no significant correlations with value were obtained in the presence of advice (MES). These findings were corroborated by region of interest analyses. Neural equivalents of probability weighting functions showed significant flattening in the MES compared to the NOM condition in regions associated with probability weighting, including anterior cingulate cortex, dorsolateral PFC, thalamus, medial occipital gyrus and anterior insula. Finally, during the MES condition, significant activations in temporoparietal junction and medial PFC were obtained. Conclusions/Significance These results support the hypothesis that one effect of expert advice is to “offload” the calculation of value of decision options from the individual's brain. PMID:19308261

  12. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  13. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
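
    As a concrete illustration of the weighting step, here is a minimal sketch of stabilized IPTW for a single (point) treatment in Python; the column names, the logistic propensity model, and the weighted outcome regression in the comments are assumptions for illustration, and the products of visit-specific weights used in a full time-varying MSM analysis are omitted.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stabilized_iptw(df, confounders):
        """Stabilized inverse-probability-of-treatment weights for a binary
        point treatment stored in df["treated"] (illustrative column name)."""
        ps_model = LogisticRegression().fit(df[confounders], df["treated"])
        ps = ps_model.predict_proba(df[confounders])[:, 1]   # P(A=1 | confounders)
        p_marginal = df["treated"].mean()                      # P(A=1)
        return np.where(df["treated"] == 1,
                        p_marginal / ps,
                        (1 - p_marginal) / (1 - ps))

    # Hypothetical usage: weight a marginal outcome model by the IPT weights.
    # w = stabilized_iptw(df, ["age", "severity", "symptoms"])
    # import statsmodels.api as sm
    # msm = sm.WLS(df["fev1_change"], sm.add_constant(df["treated"]), weights=w).fit()
    ```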

  14. Prospect theory in the health domain: a quantitative assessment.

    PubMed

    Attema, Arthur E; Brouwer, Werner B F; l'Haridon, Olivier

    2013-12-01

    It is well-known that expected utility (EU) has empirical deficiencies. Cumulative prospect theory (CPT) has developed as an alternative with more descriptive validity. However, CPT's full function had not yet been quantified in the health domain. This paper is therefore the first to simultaneously measure utility of life duration, probability weighting, and loss aversion in this domain. We observe loss aversion and risk aversion for gains and losses, which for gains can be explained by probabilistic pessimism. Utility for gains is almost linear. For losses, we find less weighting of probability 1/2 and concave utility. This contrasts with the common finding of convex utility for monetary losses. However, CPT was proposed to explain choices among lotteries involving monetary outcomes. Life years are arguably very different from monetary outcomes and need not generate convex utility for losses. Moreover, utility of life duration reflects discounting, causing concave utility. Copyright © 2013 Elsevier B.V. All rights reserved.
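
    For readers unfamiliar with the quantities being estimated, the sketch below shows generic cumulative prospect theory ingredients of the kind the study measures: a power value function with a loss-aversion coefficient and an inverse-S probability weighting function (the Tversky-Kahneman one-parameter form is used purely for illustration; the parameter values are arbitrary, not the paper's estimates).

    ```python
    import numpy as np

    def value(x, alpha=0.95, beta=0.95, lam=2.0):
        """Power value function: near-linear for gains, loss-averse for losses."""
        x = np.asarray(x, float)
        return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

    def weight(p, gamma=0.65):
        """Inverse-S weighting w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
        p = np.asarray(p, float)
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

    def cpt_value(x1, p1, x2):
        """CPT value of a two-outcome gain prospect with x1 > x2 >= 0."""
        return weight(p1) * value(x1) + (1 - weight(p1)) * value(x2)
    ```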

  15. Transition probability functions for applications of inelastic electron scattering

    PubMed Central

    Löffler, Stefan; Schattschneider, Peter

    2012-01-01

    In this work, the transition matrix elements for inelastic electron scattering, which are the central quantity for interpreting experiments, are investigated. The angular part is given by spherical harmonics. For the weighted radial wave-function overlap, analytic expressions are derived in the Slater-type and hydrogen-like orbital models. These expressions are shown to be composed of a finite sum of polynomials and elementary trigonometric functions. Hence, they are easy to use, require little computation time, and are significantly more accurate than commonly used approximations. PMID:22560709

  16. Use of an improved radiation amplification factor to estimate the effect of total ozone changes on action spectrum weighted irradiances and an instrument response function

    NASA Astrophysics Data System (ADS)

    Herman, Jay R.

    2010-12-01

    Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(-RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30-year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability for plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.
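
    The practical content of the power law is easy to state in code: under U(Ω/200)^(-RAF), the fractional change in action-spectrum-weighted irradiance between two total-ozone amounts depends only on their ratio and on the RAF. A minimal illustration follows (the RAF value in the comment is a typical erythemal figure, used here only as an example):

    ```python
    def irradiance_ratio(omega_new, omega_old, raf):
        """Ratio of action-spectrum-weighted irradiances implied by the power law
        U*(Omega/200)**(-RAF); the prefactor U cancels in the ratio."""
        return (omega_new / omega_old) ** (-raf)

    # Example: a 3% ozone decrease with an erythemal RAF of about 1.1 (illustrative)
    # gives roughly a 3.4% increase in weighted irradiance:
    # irradiance_ratio(0.97 * 300.0, 300.0, 1.1)  ->  ~1.034
    ```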

  17. Use of an Improved Radiation Amplification Factor to Estimate the Effect of Total Ozone Changes on Action Spectrum Weighted Irradiances and an Instrument Response Function

    NASA Technical Reports Server (NTRS)

    Herman, Jay R.

    2010-01-01

    Multiple scattering radiative transfer results are used to calculate action spectrum weighted irradiances and fractional irradiance changes in terms of a power law in ozone Ω, U(Ω/200)^(-RAF), where the new radiation amplification factor (RAF) is just a function of solar zenith angle. Including Rayleigh scattering caused small differences in the estimated 30-year changes in action spectrum-weighted irradiances compared to estimates that neglect multiple scattering. The radiative transfer results are applied to several action spectra and to an instrument response function corresponding to the Solar Light 501 meter. The effects of changing ozone on two plant damage action spectra are shown for plants with high sensitivity to UVB (280-315 nm) and those with lower sensitivity, showing that the probability for plant damage for the latter has increased since 1979, especially at middle to high latitudes in the Southern Hemisphere. Similarly, there has been an increase in rates of erythemal skin damage and pre-vitamin D3 production corresponding to measured ozone decreases. An example conversion function is derived to obtain erythemal irradiances and the UV index from measurements with the Solar Light 501 instrument response function. An analytic expression is given to convert changes in erythemal irradiances to changes in CIE vitamin-D action spectrum weighted irradiances.

  18. The neural correlates of subjective utility of monetary outcome and probability weight in economic and in motor decision under risk

    PubMed Central

    Wu, Shih-Wei; Delgado, Mauricio R.; Maloney, Laurence T.

    2011-01-01

    In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent motor lotteries. The probability of each outcome in a motor lottery is determined by the subject’s noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks where information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks where probability was implicit in the subjects’ own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex (PCC) quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk. PMID:21677166

  19. The neural correlates of subjective utility of monetary outcome and probability weight in economic and in motor decision under risk.

    PubMed

    Wu, Shih-Wei; Delgado, Mauricio R; Maloney, Laurence T

    2011-06-15

    In decision under risk, people choose between lotteries that contain a list of potential outcomes paired with their probabilities of occurrence. We previously developed a method for translating such lotteries to mathematically equivalent "motor lotteries." The probability of each outcome in a motor lottery is determined by the subject's noise in executing a movement. In this study, we used functional magnetic resonance imaging in humans to compare the neural correlates of monetary outcome and probability in classical lottery tasks in which information about probability was explicitly communicated to the subjects and in mathematically equivalent motor lottery tasks in which probability was implicit in the subjects' own motor noise. We found that activity in the medial prefrontal cortex (mPFC) and the posterior cingulate cortex quantitatively represent the subjective utility of monetary outcome in both tasks. For probability, we found that the mPFC significantly tracked the distortion of such information in both tasks. Specifically, activity in mPFC represents probability information but not the physical properties of the stimuli correlated with this information. Together, the results demonstrate that mPFC represents probability from two distinct forms of decision under risk.

  20. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy.

  1. Mean apparent propagator (MAP) MRI: a novel diffusion imaging method for mapping tissue microstructure.

    PubMed

    Özarslan, Evren; Koay, Cheng Guan; Shepherd, Timothy M; Komlosh, Michal E; İrfanoğlu, M Okan; Pierpaoli, Carlo; Basser, Peter J

    2013-09-01

    Diffusion-weighted magnetic resonance (MR) signals reflect information about underlying tissue microstructure and cytoarchitecture. We propose a quantitative, efficient, and robust mathematical and physical framework for representing diffusion-weighted MR imaging (MRI) data obtained in "q-space," and the corresponding "mean apparent propagator (MAP)" describing molecular displacements in "r-space." We also define and map novel quantitative descriptors of diffusion that can be computed robustly using this MAP-MRI framework. We describe efficient analytical representation of the three-dimensional q-space MR signal in a series expansion of basis functions that accurately describes diffusion in many complex geometries. The lowest order term in this expansion contains a diffusion tensor that characterizes the Gaussian displacement distribution, equivalent to diffusion tensor MRI (DTI). Inclusion of higher order terms enables the reconstruction of the true average propagator whose projection onto the unit "displacement" sphere provides an orientational distribution function (ODF) that contains only the orientational dependence of the diffusion process. The representation characterizes novel features of diffusion anisotropy and the non-Gaussian character of the three-dimensional diffusion process. Other important measures this representation provides include the return-to-the-origin probability (RTOP), and its variants for diffusion in one- and two-dimensions-the return-to-the-plane probability (RTPP), and the return-to-the-axis probability (RTAP), respectively. These zero net displacement probabilities measure the mean compartment (pore) volume and cross-sectional area in distributions of isolated pores irrespective of the pore shape. MAP-MRI represents a new comprehensive framework to model the three-dimensional q-space signal and transform it into diffusion propagators. Experiments on an excised marmoset brain specimen demonstrate that MAP-MRI provides several novel, quantifiable parameters that capture previously obscured intrinsic features of nervous tissue microstructure. This should prove helpful for investigating the functional organization of normal and pathologic nervous tissue. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.

    PubMed

    Schaubel, Douglas E; Wei, Guanghui

    2011-03-01

    In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty to verify the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.

  3. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences

    PubMed Central

    Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.

    2015-01-01

    The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
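
    A minimal simulation conveys the claimed correspondence (network size, learning rate, and the row-normalization rule are illustrative choices, not the paper's model): Hebbian potentiation of the synapse carrying each observed transition, combined with pre-synaptic competition implemented as normalization over each pre-synaptic neuron's outgoing weights, drives the weight matrix toward the forward transition probabilities of the input sequence.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_states = 4
    P = rng.dirichlet(np.ones(n_states), size=n_states)   # true forward transition matrix

    W = np.full((n_states, n_states), 1.0 / n_states)      # row i = weights from pre-neuron i
    lr, state = 0.005, 0
    for _ in range(200_000):
        nxt = rng.choice(n_states, p=P[state])             # sample the Markov sequence
        W[state, nxt] += lr                                # Hebbian coincidence update
        W[state] /= W[state].sum()                         # pre-synaptic competition
        state = nxt

    # W now tracks P up to stochastic fluctuations whose size scales with the learning rate
    print(np.abs(W - P).max())
    ```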

  4. Density Weighted FDF Equations for Simulations of Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2011-01-01

    In this report, we briefly revisit the formulation of the density-weighted filtered density function (DW-FDF) for large eddy simulation (LES) of turbulent reacting flows, which was proposed by Jaberi et al. (Jaberi, F.A., Colucci, P.J., James, S., Givi, P. and Pope, S.B., Filtered mass density function for Large-eddy simulation of turbulent reacting flows, J. Fluid Mech., vol. 401, pp. 85-121, 1999). We first follow the traditional derivation of the DW-FDF equations using the fine-grained probability density function (FG-PDF), and then explore another way of constructing the DW-FDF equations by starting directly from the compressible Navier-Stokes equations. We observe that the terms which are unclosed in the traditional DW-FDF equations are closed in the newly constructed DW-FDF equations. This significant difference and its practical impact on computational simulations may deserve further study.

  5. Probability of Vitamin D Deficiency by Body Weight and Race/Ethnicity.

    PubMed

    Weishaar, Tom; Rajan, Sonali; Keller, Bryan

    2016-01-01

    While most physicians recognize that vitamin D status varies by skin color because darker skin requires more light to synthesize vitamin D than lighter skin, the importance of body weight to vitamin D status is a newer, less recognized, finding. The purpose of this study was to use nationally representative US data to determine the probability of vitamin D deficiency by body weight and skin color. Using data for individuals age ≥6 years from the 2001 to 2010 cycles of the US National Health and Nutrition Examination Survey, we calculated the effect of skin color, body weight, and age on vitamin D status. We determined the probability of deficiency within the normal range of body weight for 3 race/ethnicity groups at 3 target levels of 25-hydroxyvitamin D. Darker skin colors and heavier body weights are independently and significantly associated with poorer vitamin D status. We report graphically the probability of vitamin D deficiency by body weight and skin color at vitamin D targets of 20 and 30 ng/mL. The effects of skin color and body weight on vitamin D status are large both statistically and clinically. Knowledge of these effects may facilitate diagnosis of vitamin D deficiency. © Copyright 2016 by the American Board of Family Medicine.

  6. Lung function decline over 25 years of follow-up among black and white adults in the ARIC study cohort.

    PubMed

    Mirabelli, Maria C; Preisser, John S; Loehr, Laura R; Agarwal, Sunil K; Barr, R Graham; Couper, David J; Hankinson, John L; Hyun, Noorie; Folsom, Aaron R; London, Stephanie J

    2016-04-01

    Interpretation of longitudinal information about lung function decline from middle to older age has been limited by loss to follow-up that may be correlated with baseline lung function or the rate of decline. We conducted these analyses to estimate age-related decline in lung function across groups of race, sex, and smoking status while accounting for dropout from the Atherosclerosis Risk in Communities Study. We analyzed data from 13,896 black and white participants, aged 45-64 years at the 1987-1989 baseline clinical examination. Using spirometry data collected at baseline and two follow-up visits, we estimated annual population-averaged mean changes in forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) by race, sex, and smoking status using inverse-probability-weighted independence estimating equations conditioning on being alive. Rates of FEV1 decline estimated with this approach were higher among white than black participants at age 45 years (e.g., male never smokers: black: -29.5 ml/year; white: -51.9 ml/year), but higher among black than white participants by age 75 (black: -51.2 ml/year; white: -26 ml/year). Observed differences by race were more pronounced among men than among women. By smoking status, FEV1 declines were larger among current than former or never smokers at age 45 across all categories of race and sex. By age 60, FEV1 decline was larger among former and never smokers than among current smokers. Estimated annual declines generated using unweighted generalized estimating equations were smaller for current smokers at younger ages in all four groups of race and sex compared with results from weighted analyses that accounted for attrition. Using methods accounting for dropout from an approximately 25-year health study, estimated rates of lung function decline varied by age, race, sex, and smoking status, with the largest declines observed among current smokers at younger ages. Published by Elsevier Ltd.

  7. Evidence that multiple genetic variants of MC4R play a functional role in the regulation of energy expenditure and appetite in Hispanic children

    PubMed Central

    Cole, Shelley A; Voruganti, V Saroja; Cai, Guowen; Haack, Karin; Kent, Jack W; Blangero, John; Comuzzie, Anthony G; McPherson, John D; Gibbs, Richard A

    2010-01-01

    Background: Melanocortin-4-receptor (MC4R) haploinsufficiency is the most common form of monogenic obesity; however, the frequency of MC4R variants and their functional effects in general populations remain uncertain. Objective: The aim was to identify and characterize the effects of MC4R variants in Hispanic children. Design: MC4R was resequenced in 376 parents, and the identified single nucleotide polymorphisms (SNPs) were genotyped in 613 parents and 1016 children from the Viva la Familia cohort. Measured genotype analysis (MGA) tested associations between SNPs and phenotypes. Bayesian quantitative trait nucleotide (BQTN) analysis was used to infer the most likely functional polymorphisms influencing obesity-related traits. Results: Seven rare SNPs in coding and 18 SNPs in flanking regions of MC4R were identified. MGA showed suggestive associations between MC4R variants and body size, adiposity, glucose, insulin, leptin, ghrelin, energy expenditure, physical activity, and food intake. BQTN analysis identified SNP 1704 in a predicted micro-RNA target sequence in the downstream flanking region of MC4R as a strong, probable functional variant influencing total, sedentary, and moderate activities with posterior probabilities of 1.0. SNP 2132 was identified as a variant with a high probability (1.0) of exerting a functional effect on total energy expenditure and sleeping metabolic rate. SNP rs34114122 was selected as having likely functional effects on the appetite hormone ghrelin, with a posterior probability of 0.81. Conclusion: This comprehensive investigation provides strong evidence that MC4R genetic variants are likely to play a functional role in the regulation of weight, not only through energy intake but through energy expenditure. PMID:19889825

  8. Extension of Kaplan-Meier methods in observational studies with time-varying treatment.

    PubMed

    Xu, Stanley; Shetterly, Susan; Powers, David; Raebel, Marsha A; Tsai, Thomas T; Ho, P Michael; Magid, David

    2012-01-01

    Inverse probability of treatment weighted Kaplan-Meier estimates have been developed to compare two treatments in the presence of confounders in observational studies. Recently, stabilized weights were developed to reduce the influence of extreme inverse probability of treatment-weighted weights in estimating treatment effects. The objective of this research was to use adjusted Kaplan-Meier estimates and modified log-rank and Wilcoxon tests to examine the effect of a treatment that varies over time in an observational study. We proposed stabilized weight adjusted Kaplan-Meier estimates and modified log-rank and Wilcoxon tests when the treatment was time-varying over the follow-up period. We applied these new methods in examining the effect of an anti-platelet agent, clopidogrel, on subsequent events, including bleeding, myocardial infarction, and death after a drug-eluting stent was implanted into a coronary artery. In this population, clopidogrel use may change over time based on a patient's behavior (e.g., nonadherence) and physicians' recommendations (e.g., end of duration of therapy). Consequently, clopidogrel use was treated as a time-varying variable. We demonstrate that 1) the sample sizes at three chosen time points are almost identical in the original and weighted datasets; and 2) the covariates between patients on and off clopidogrel were well balanced after stabilized weights were applied to the original samples. The stabilized weight-adjusted Kaplan-Meier estimates and modified log-rank and Wilcoxon tests are useful in presenting and comparing survival functions for time-varying treatments in observational studies while adjusting for known confounders. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
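
    The core of the adjusted estimator is simply the product-limit formula computed with weighted counts. A minimal sketch follows (names are illustrative; the time-varying aspect, which requires splitting each subject's follow-up at treatment switches and recomputing stabilized weights, is omitted):

    ```python
    import numpy as np

    def weighted_kaplan_meier(time, event, weights):
        """Weighted Kaplan-Meier estimate: each subject contributes its
        (stabilized IPT) weight to the event count and the risk set.
        Returns (event time, S(t)) pairs."""
        time = np.asarray(time, float)
        event = np.asarray(event, int)
        weights = np.asarray(weights, float)

        surv, s = [], 1.0
        for t in np.unique(time[event == 1]):
            at_risk = weights[time >= t].sum()                # weighted risk set at t
            d = weights[(time == t) & (event == 1)].sum()     # weighted events at t
            s *= 1.0 - d / at_risk
            surv.append((t, s))
        return surv
    ```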

  9. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods, and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvements for the D-optimal designs were more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.

  10. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-08-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with a Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
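
    The combination step described above can be sketched in a few lines of Python (dataset names, weights, and bandwidths are placeholders; QVAST's bandwidth-selection options and the non-homogeneous Poisson modeling are not reproduced here):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def susceptibility(grid_xy, datasets, weights, bandwidths):
        """grid_xy: (2, M) evaluation points; datasets: list of (2, N_k) arrays of
        vent coordinates; weights sum to 1; bandwidths are passed to gaussian_kde.
        Returns a weighted sum of kernel-density PDFs, normalized over the grid."""
        total = np.zeros(grid_xy.shape[1])
        for pts, w, bw in zip(datasets, weights, bandwidths):
            kde = gaussian_kde(pts, bw_method=bw)   # one Gaussian-kernel PDF per dataset
            total += w * kde(grid_xy)
        return total / total.sum()                  # spatial probability of vent opening
    ```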

  11. Scaling of Average Weighted Receiving Time on Double-Weighted Koch Networks

    NASA Astrophysics Data System (ADS)

    Dai, Meifeng; Ye, Dandan; Hou, Jie; Li, Xingyi

    2015-03-01

    In this paper, we introduce a model of double-weighted Koch networks based on actual road networks, depending on two weight factors w, r ∈ (0, 1]. The double weights represent the capacity-flowing weight and the cost-traveling weight, respectively. Denote by w^F_ij the capacity-flowing weight connecting nodes i and j, and by w^C_ij the cost-traveling weight connecting nodes i and j. Let w^F_ij be related to the weight factor w, and let w^C_ij be related to the weight factor r. This paper assumes that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time between two adjacent nodes is the cost-traveling weight connecting them. We define the average weighted receiving time (AWRT) on the double-weighted Koch networks. The result shows that, in large networks, the AWRT grows as a power-law function of the network order with exponent θ(w,r) = ½ log2(1 + 3wr). We show that the AWRT exhibits a sublinear or linear dependence on network order. Thus, the double-weighted Koch networks are more efficient than classic Koch networks in receiving information.
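
    The walk that underlies the AWRT can be sketched directly (the matrices, stopping rule, and function name are illustrative; a connected network with positive capacity-flowing weights is assumed):

    ```python
    import numpy as np

    def weighted_walk_time(WF, WC, start, target, rng):
        """One realization of the walk: move to a neighbor with probability
        proportional to the capacity-flowing weight (WF), accumulating the
        cost-traveling weight (WC) of each traversed edge."""
        node, total = start, 0.0
        while node != target:
            p = WF[node] / WF[node].sum()      # transition probabilities from current node
            nxt = rng.choice(len(p), p=p)
            total += WC[node, nxt]             # weighted time of this step
            node = nxt
        return total

    # Averaging weighted_walk_time over many runs and start nodes approximates the AWRT.
    ```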

  12. The Neural Basis of Risky Choice with Affective Outcomes

    PubMed Central

    Suter, Renata S.; Pachur, Thorsten; Hertwig, Ralph; Endestad, Tor; Biele, Guido

    2015-01-01

    Both normative and many descriptive theories of decision making under risk are based on the notion that outcomes are weighted by their probability, with subsequent maximization of the (subjective) expected outcome. Numerous investigations from psychology, economics, and neuroscience have produced evidence consistent with this notion. However, this research has typically investigated choices involving relatively affect-poor, monetary outcomes. We compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems. Computational modeling of behavioral data and model-based neuroimaging analyses provide converging evidence for substantial differences in the respective decision mechanisms. Relative to affect-poor choices, affect-rich choices yielded a more strongly curved probability weighting function of cumulative prospect theory, thus signaling that the psychological impact of probabilities is strongly diminished for affect-rich outcomes. Examining task-dependent brain activation, we identified a region-by-condition interaction indicating qualitative differences of activation between affect-rich and affect-poor choices. Moreover, brain activation in regions that were more active during affect-poor choices (e.g., the supramarginal gyrus) correlated with individual trial-by-trial decision weights, indicating that these regions reflect processing of probabilities. Formal reverse inference Neurosynth meta-analyses suggested that whereas affect-poor choices seem to be based on brain mechanisms for calculative processes, affect-rich choices are driven by the representation of outcomes’ emotional value and autobiographical memories associated with them. These results provide evidence that the traditional notion of expectation maximization may not apply in the context of outcomes laden with affective responses, and that understanding the brain mechanisms of decision making requires the domain of the decision to be taken into account. PMID:25830918

  13. The neural basis of risky choice with affective outcomes.

    PubMed

    Suter, Renata S; Pachur, Thorsten; Hertwig, Ralph; Endestad, Tor; Biele, Guido

    2015-01-01

    Both normative and many descriptive theories of decision making under risk are based on the notion that outcomes are weighted by their probability, with subsequent maximization of the (subjective) expected outcome. Numerous investigations from psychology, economics, and neuroscience have produced evidence consistent with this notion. However, this research has typically investigated choices involving relatively affect-poor, monetary outcomes. We compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems. Computational modeling of behavioral data and model-based neuroimaging analyses provide converging evidence for substantial differences in the respective decision mechanisms. Relative to affect-poor choices, affect-rich choices yielded a more strongly curved probability weighting function of cumulative prospect theory, thus signaling that the psychological impact of probabilities is strongly diminished for affect-rich outcomes. Examining task-dependent brain activation, we identified a region-by-condition interaction indicating qualitative differences of activation between affect-rich and affect-poor choices. Moreover, brain activation in regions that were more active during affect-poor choices (e.g., the supramarginal gyrus) correlated with individual trial-by-trial decision weights, indicating that these regions reflect processing of probabilities. Formal reverse inference Neurosynth meta-analyses suggested that whereas affect-poor choices seem to be based on brain mechanisms for calculative processes, affect-rich choices are driven by the representation of outcomes' emotional value and autobiographical memories associated with them. These results provide evidence that the traditional notion of expectation maximization may not apply in the context of outcomes laden with affective responses, and that understanding the brain mechanisms of decision making requires the domain of the decision to be taken into account.

  14. Multi-model ensemble hydrologic prediction using Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh

    2007-05-01

    A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
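
    A minimal sketch of how a BMA predictive density combines member forecasts may help: the consensus PDF is a weighted mixture of kernels centered on the individual predictions. The function name, the Gaussian kernels, and the example weights and spreads below are assumptions for illustration; in the study above the weights and variances are estimated on training data rather than given.

    ```python
    import numpy as np
    from scipy.stats import norm

    def bma_predictive_pdf(y, forecasts, weights, sigmas):
        """Evaluate the BMA mixture density at y:
        p(y) = sum_k w_k * Normal(y; f_k, sigma_k^2),
        i.e. a weighted average of Gaussian kernels centered on the individual
        model forecasts. In practice the weights and variances are fitted
        (e.g., by EM) on a calibration period; here they are simply given."""
        y = np.atleast_1d(y)
        dens = np.zeros_like(y, dtype=float)
        for f_k, w_k, s_k in zip(forecasts, weights, sigmas):
            dens += w_k * norm.pdf(y, loc=f_k, scale=s_k)
        return dens

    forecasts = [12.0, 15.5, 14.0]   # ensemble member predictions (hypothetical)
    weights   = [0.5, 0.2, 0.3]      # BMA weights, summing to one
    sigmas    = [2.0, 2.5, 1.8]      # member-specific spread
    print(bma_predictive_pdf([10.0, 14.0, 18.0], forecasts, weights, sigmas))
    ```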

  15. Nonuniform sampling by quantiles.

    PubMed

    Craft, D Levi; Sonstrom, Reilly E; Rovnyak, Virginia G; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, whereas higher-dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the General Public License. Copyright © 2018 Elsevier Inc. All rights reserved.
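
    The core idea of quantile-directed scheduling, dividing a weighting function into regions of equal probability and taking one sample per region, can be illustrated with a short one-dimensional sketch. The function name, the exponential weighting function, and the grid sizes below are assumptions; this is not the QSched implementation.

    ```python
    import numpy as np

    def quantile_schedule(weight, n_grid, n_samples):
        """Pick n_samples grid points from a Nyquist grid of size n_grid by
        dividing the normalized weighting function into regions of equal
        probability and taking the grid index at the midpoint quantile of each
        region. A minimal 1D illustration of quantile-directed scheduling;
        duplicates (if any) collapse, so aggressive subsampling of a flat
        weighting function may return fewer points."""
        w = np.array([weight(i) for i in range(n_grid)], dtype=float)
        cdf = np.cumsum(w) / w.sum()
        targets = (np.arange(n_samples) + 0.5) / n_samples  # midpoint quantiles
        idx = np.searchsorted(cdf, targets)
        return np.unique(idx)

    # Exponentially decaying weighting function, as often used for decaying signals.
    schedule = quantile_schedule(lambda i: np.exp(-i / 40.0), n_grid=128, n_samples=32)
    print(schedule)
    ```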

  16. Nonuniform sampling by quantiles

    NASA Astrophysics Data System (ADS)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, whereas higher-dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the General Public License.

  17. Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function

    PubMed Central

    Zheng, Wenjing; Petersen, Maya; van der Laan, Mark

    2016-01-01

    In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997), are well-established and widely used tools to account for time-varying confounding. In particular, an MSM can be used to specify the intervention-specific counterfactual hazard function, i.e. the hazard for the outcome of a subject in an ideal experiment where they were assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using inverse probability weighted estimation (IPTW; van der Laan and Petersen (2007), Robins et al. (2000b), Robins (1999), Robins et al. (2008)). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood estimator (TMLE, van der Laan and Rubin (2006)) for the MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, and hence offers bias reduction and efficiency gain over the incumbent IPTW estimator. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW and a non-targeted substitution estimator in a simulation study. PMID:27227723

  18. Nitrogen oxide emission calculation for post-Panamax container ships by using engine operation power probability as weighting factor: A slow-steaming case.

    PubMed

    Cheng, Chih-Wen; Hua, Jian; Hwang, Daw-Shang

    2018-06-01

    In this study, the nitrogen oxide (NOx) emission factors and total NOx emissions of two groups of post-Panamax container ships operating on a long-term slow-steaming basis along Euro-Asian routes were calculated using both the probability density function of engine power levels and the NOx emission function. The main engines of the five sister ships in Group I satisfied the Tier I emission limit stipulated in MARPOL (International Convention for the Prevention of Pollution from Ships) Annex VI, and those in Group II satisfied the Tier II limit. The calculated NOx emission factors of the Group I and Group II ships were 14.73 and 17.85 g/kWh, respectively. The total NOx emissions of the Group II ships were determined to be 4.4% greater than those of the Group I ships. When the Tier II certification value was used to calculate the average total NOx emissions of the Group II engines, the result was lower than the actual value by 21.9%. Although fuel consumption and carbon dioxide (CO2) emissions increased by 1.76% because of slow steaming, the NOx emissions were markedly reduced by 17.2%. The proposed method is more effective and accurate than the NOx Technical Code 2008, and it can be applied more appropriately to determine the NOx emissions of the international shipping inventory. Using the operating-power probability density function of the diesel engines as the weighting factor, together with the NOx emission function obtained from test-bed measurements, yields more accurate and practical NOx emission estimates. The proposed method is suitable for all types and purposes of diesel engines, irrespective of their operating power level. It can be used to effectively determine the NOx emissions of international shipping and inventory applications and should be considered in determining the carbon tax to be imposed in the future.

  19. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time-consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.

  20. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.

  1. Online probabilistic learning with an ensemble of forecasts

    NASA Astrophysics Data System (ADS)

    Thorey, Jean; Mallet, Vivien; Chaussin, Christophe

    2016-04-01

    Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (derived from the weighted ensemble) may introduce a bias, in which case minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated with the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
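
    For reference, the usual empirical form of the CRPS for a weighted ensemble is sketched below; as the abstract notes, this plain estimator can be biased for weighted empirical distributions, and the paper proposes an unbiased, cluster-based variant instead. The function name and example numbers are assumptions.

    ```python
    import numpy as np

    def crps_weighted_ensemble(members, weights, obs):
        """CRPS of a weighted empirical distribution, in the usual energy form:
        CRPS = sum_i w_i |x_i - y| - 0.5 * sum_{i,j} w_i w_j |x_i - x_j|.
        This is the plain empirical estimator, not the unbiased cluster-based
        version proposed in the paper above."""
        x = np.asarray(members, dtype=float)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        term1 = np.sum(w * np.abs(x - obs))
        term2 = 0.5 * np.sum(np.outer(w, w) * np.abs(x[:, None] - x[None, :]))
        return term1 - term2

    print(crps_weighted_ensemble([0.9, 1.1, 1.4], [0.2, 0.5, 0.3], obs=1.0))
    ```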

  2. Phylogeny of immunoglobulin structure and function. VII. Monomeric and tetrameric immunoglobulins of the margate, a marine teleost fish.

    PubMed Central

    Clem, L W; McLean, W E

    1975-01-01

    The margate, a marine teleost fish, was found to contain both high (16S) and low (7S) molecular weight antibodies 17 days after initial immunization. The 16S antibodies were detectable with both haemagglutination and antigen-binding assays, whereas the 7S antibodies were only detected by the latter technique. Margate 16S (molecular weight approximately 700,000) and 7S (molecular weight approximately 175,000) immunoglobulins were isolated and shown to be antigenically indistinguishable. They therefore appear to belong to the same immunoglobulin class and to have a tetramer-monomer relationship. Experiments with stored sera indicated the 7S protein is probably not an in vitro degradation product of the 16S molecule. PMID:1184121

  3. Shape Control in Multivariate Barycentric Rational Interpolation

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa Thang; Cuyt, Annie; Celis, Oliver Salazar

    2010-09-01

    The most stable formula for a rational interpolant for use on a finite interval is the barycentric form [1, 2]. A simple choice of the barycentric weights ensures the absence of (unwanted) poles on the real line [3]. In [4] we indicate that a more refined choice of the weights in barycentric rational interpolation can guarantee comonotonicity and coconvexity of the rational interpolant in addition to a pole-free region of interest. In this presentation we generalize the above to the multivariate case. We use a product-like form of univariate barycentric rational interpolants and indicate how the location of the poles and the shape of the function can be controlled. This functionality is of importance in the construction of mathematical models that need to express a certain trend, such as in probability distributions, economics, population dynamics, tumor growth models etc.
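
    A brief sketch of the univariate barycentric evaluation formula may clarify the role of the weights: r(x) = (Σ_i w_i f_i/(x−x_i)) / (Σ_i w_i/(x−x_i)), where the choice of the w_i controls the poles (and, in the refined schemes cited above, the shape) of the interpolant. The code below is a generic sketch with polynomial-type weights on equispaced nodes; it is not the authors' multivariate construction.

    ```python
    import numpy as np

    def barycentric_eval(x, nodes, values, weights):
        """Evaluate a rational interpolant in barycentric form:
        r(x) = sum_i w_i f_i / (x - x_i)  /  sum_i w_i / (x - x_i).
        The choice of the weights w_i determines the poles (and shape) of r."""
        x = np.asarray(x, dtype=float)
        nodes = np.asarray(nodes, dtype=float)
        diff = x[..., None] - nodes
        exact = np.isclose(diff, 0.0)
        diff = np.where(exact, 1.0, diff)          # avoid division by zero at nodes
        terms = weights / diff
        r = (terms @ values) / terms.sum(axis=-1)
        hit = exact.any(axis=-1)                   # at nodes, return the data value exactly
        if np.any(hit):
            r = np.where(hit, np.asarray(values)[exact.argmax(axis=-1)], r)
        return r

    nodes = np.linspace(0.0, 1.0, 5)
    values = np.exp(nodes)
    weights = np.array([1.0, -4.0, 6.0, -4.0, 1.0])  # polynomial-interpolation weights for equispaced nodes
    print(barycentric_eval(np.array([0.1, 0.5, 0.9]), nodes, values, weights))
    ```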

  4. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as the proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with a simulation study and real data analysis.

  5. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    PubMed

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
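
    As background, a minimal (unweighted, Harrell-type) concordance estimate with tied, discrete risk scores is sketched below; the paper's estimators additionally apply inverse probability censoring weights, which this sketch omits. The function name and toy data are assumptions.

    ```python
    import numpy as np

    def c_index(time, event, risk):
        """Minimal Harrell-type c-index: among usable pairs (the subject with the
        shorter follow-up time experienced the event), count pairs in which the
        higher risk score goes with the shorter survival; ties in risk count 1/2.
        This is the simple estimator, not the inverse-probability-censoring-weighted
        version discussed in the paper above."""
        time, event, risk = map(np.asarray, (time, event, risk))
        concordant, usable = 0.0, 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                if time[i] < time[j] and event[i] == 1:   # usable (comparable) pair
                    usable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / usable

    time  = [5, 8, 3, 10, 7]
    event = [1, 0, 1, 1, 0]
    risk  = [2, 1, 3, 1, 2]    # discrete risk groups (higher = riskier), hypothetical
    print(c_index(time, event, risk))
    ```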

  6. Neuronal correlates of reduced memory performance in overweight subjects.

    PubMed

    Stingl, Krunoslav T; Kullmann, Stephanie; Ketterer, Caroline; Heni, Martin; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert

    2012-03-01

    There is growing evidence that excessive body weight correlates with impaired cognitive performance like executive function, attention and memory. In our study, we applied a visual working memory task to quantify associations between body weight and executive function. In total, 34 lean (BMI 22±2.1 kg/m²) and 34 obese (BMI 30.4±3.2 kg/m²) subjects were included. Magnetic brain activity and behavioral responses were recorded during a one-back visual memory task with food and non-food pictures, which were matched for color, size and complexity. Behavioral responses (reaction time and accuracy) were reduced in obese subjects independent of the stimulus category. Neuronal activity at the source level showed a positive correlation between the right dorsolateral prefrontal cortex (DLPFC) activity and BMI only for the food category. In addition, a negative correlation between BMI and neuronal activity was observed in the occipital area for both categories. Therefore we conclude that increased body weight is associated with reduced task performance and specific neuronal changes. This altered activity is probably related to executive function as well as encoding and retrieval of information. Copyright © 2011 Elsevier Inc. All rights reserved.

  7. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms in order to examine the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive, and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.

  8. OCA-B regulation of B-cell development and function.

    PubMed

    Teitell, Michael A

    2003-10-01

    The transcriptional co-activator OCA-B [for Oct co-activator from B cells, also known as OBF-1 (OCT-binding factor-1) and Bob1] is not required for B-cell genesis but does regulate subsequent B-cell development and function. OCA-B deficient mice show strain-specific, partial blocks at multiple stages of B-cell maturation and a complete disruption of germinal center formation in all strains, causing humoral immune deficiency and susceptibility to infection. OCA-B probably exerts its effects through the regulation of octamer-motif controlled gene expression. The OCA-B gene encodes two proteins of distinct molecular weight, designated p34 and p35. The p34 isoform localizes in the nucleus, whereas the p35 isoform is myristoylated and is bound to the cytoplasmic membrane. p35 can traffic to the nucleus and probably activates octamer-dependent transcription, although this OCA-B isoform might regulate B cells through membrane-related signal transduction.

  9. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  10. Effects of breastfeeding on postpartum weight loss among U.S. women

    PubMed Central

    Jarlenski, Marian P.; Bennett, Wendy L.; Bleich, Sara N.; Barry, Colleen L.; Stuart, Elizabeth A.

    2014-01-01

    Objective To evaluate the effects of breastfeeding on maternal weight loss in the 12 months postpartum among U.S. women. Methods Using data from a national cohort of U.S. women conducted in 2005-2007 (N=2,102), we employed propensity scores to match women who breastfed exclusively and non-exclusively for at least three months to comparison women who had not breastfed or breastfed for less than three months. Outcomes included postpartum weight loss at 3, 6, 9, and 12 months postpartum; and the probability of returning to pre-pregnancy body mass index (BMI) category and the probability of returning to pre-pregnancy weight. Results Compared to women who did not breastfeed or breastfed non-exclusively, exclusive breastfeeding for at least 3 months resulted in 3.2 pounds (95% CI: 1.4, 4.7) greater weight loss at 12 months postpartum, a 6.0-percentage-point increase (95% CI: 2.3, 9.7) in the probability of returning to the same or lower BMI category postpartum; and a 6.1-percentage-point increase (95% CI: 1.0, 11.3) in the probability of returning to pre-pregnancy weight or lower postpartum. Non-exclusive breastfeeding did not significantly affect any outcomes. Conclusion Our study provides evidence that exclusive breastfeeding for at least three months has a small effect on postpartum weight loss among U.S. women. PMID:25284261

  11. Effects of breastfeeding on postpartum weight loss among U.S. women.

    PubMed

    Jarlenski, Marian P; Bennett, Wendy L; Bleich, Sara N; Barry, Colleen L; Stuart, Elizabeth A

    2014-12-01

    The aim of this study is to evaluate the effects of breastfeeding on maternal weight loss in the 12 months postpartum among U.S. women. Using data from a national cohort of U.S. women conducted in 2005-2007 (N=2,102), we employed propensity scores to match women who breastfed exclusively and non-exclusively for at least three months to comparison women who had not breastfed or breastfed for less than three months. Outcomes included postpartum weight loss at 3, 6, 9, and 12 months postpartum; and the probability of returning to pre-pregnancy body mass index (BMI) category and the probability of returning to pre-pregnancy weight. Compared to women who did not breastfeed or breastfed non-exclusively, exclusive breastfeeding for at least 3 months resulted in 3.2 pounds (95% CI: 1.4, 4.7) greater weight loss at 12 months postpartum, a 6.0-percentage-point increase (95% CI: 2.3, 9.7) in the probability of returning to the same or lower BMI category postpartum; and a 6.1-percentage-point increase (95% CI: 1.0, 11.3) in the probability of returning to pre-pregnancy weight or lower postpartum. Non-exclusive breastfeeding did not significantly affect any outcomes. Our study provides evidence that exclusive breastfeeding for at least three months has a small effect on postpartum weight loss among U.S. women. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Over, under, or about right: misperceptions of body weight among food stamp participants.

    PubMed

    Ver Ploeg, Michele L; Chang, Hung-Hao; Lin, Biing-Hwan

    2008-09-01

    The purpose of this research was to investigate the associations between misperception of body weight and sociodemographic factors such as food stamp participation status, income, education, and race/ethnicity. National Health and Nutrition Examination Survey (NHANES) data from 1999-2004 and multivariate logistic regression are used to estimate how sociodemographic factors are associated with (i) the probability that overweight adults misperceive themselves as healthy weight; (ii) the probability that healthy-weight adults misperceive themselves as underweight; and (iii) the probability that healthy-weight adults misperceive themselves as overweight. NHANES data are representative of the US civilian noninstitutionalized population. The analysis included 4,362 men and 4,057 women. BMI derived from measured weight and height was used to classify individuals as healthy weight or overweight. These classifications were compared with self-reported categorical weight status. We find that differences across sociodemographic characteristics in the propensity to underestimate or overestimate weight status were more pronounced for women than for men. Overweight female food stamp participants were more likely to underestimate weight status than income-eligible nonparticipants. Among healthy-weight and overweight women, non-Hispanic black and Mexican-American women, and women with less education were more likely to underestimate actual weight status. We found few differences across sociodemographic characteristics for men. Misperceptions of weight are common among both overweight and healthy-weight individuals and vary across socioeconomic and demographic groups. The nutrition education component of the Food Stamp Program could increase awareness of healthy body weight among participants.

  13. [Inverse probability weighting (IPW) for evaluating and "correcting" selection bias].

    PubMed

    Narduzzi, Silvia; Golini, Martina Nicole; Porta, Daniela; Stafoggia, Massimo; Forastiere, Francesco

    2014-01-01

    Inverse probability weighting (IPW) is a methodology developed to account for missingness and selection bias caused by non-random selection of observations, or by the non-random lack of some information in a subgroup of the population. The aim is to provide an overview of IPW methodology and an application in a cohort study of the association between exposure to traffic air pollution (nitrogen dioxide, NO₂) and children's IQ at 7 years of age. The methodology corrects the analysis by weighting the observations with the probability of being selected. IPW is based on the assumption that individual information that can predict the probability of inclusion (non-missingness) is available for the entire study population, so that, after taking account of it, we can make inferences about the entire target population starting from the non-missing observations alone. The procedure for the calculation is as follows: first, we consider the entire study population and estimate the probability of non-missing information using a logistic regression model, where the response is non-missingness and the covariates are its possible predictors. The weight of each subject is given by the inverse of the predicted probability. The analysis is then performed only on the non-missing observations, using a weighted model. IPW is a technique that embeds the selection process in the analysis of the estimates, but its effectiveness in "correcting" selection bias depends on the availability of enough information, for the entire population, to predict the non-missingness probability. In the example proposed, the IPW application showed that the effect of exposure to NO₂ on children's verbal intelligence quotient is stronger than the effect obtained from the analysis performed without regard to the selection processes.
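
    The two-step procedure described above translates directly into code: fit a model for the probability of being observed, then weight each observed subject by the inverse of that probability in the outcome analysis. The sketch below uses simulated data and scikit-learn as illustrative assumptions; it is not the analysis from the cited cohort study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(0)

    # Hypothetical data: exposure x, covariate z, outcome y; y is observed only for
    # a non-random subset (observation is more likely for high z), inducing selection bias.
    n = 2000
    z = rng.normal(size=n)
    x = rng.normal(size=n) + 0.5 * z
    y = 1.0 * x + 2.0 * z + rng.normal(size=n)
    observed = rng.random(n) < 1 / (1 + np.exp(-(0.2 + 1.5 * z)))

    # Step 1: model the probability of being observed from predictors available for everyone.
    sel = LogisticRegression().fit(np.column_stack([x, z]), observed)
    p_obs = sel.predict_proba(np.column_stack([x, z]))[:, 1]

    # Step 2: analyse only the observed subset, weighting each subject by 1 / P(observed).
    w = 1.0 / p_obs[observed]
    fit = LinearRegression().fit(np.column_stack([x, z])[observed], y[observed], sample_weight=w)
    print(fit.coef_)   # IPW-weighted estimates of the effects of x and z
    ```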

  14. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-11-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
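
    A minimal sketch of the weighted-summation step, in which per-data-set Gaussian-kernel PDFs are combined into one susceptibility surface, is given below. The function name, the synthetic vent and fissure coordinates, and the weights are assumptions; QVAST's bandwidth-selection options are not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def combined_susceptibility(datasets, weights, grid_xy, bandwidth=None):
        """Combine several structural data sets into a single susceptibility map:
        each data set yields a Gaussian-kernel PDF (bandwidth either supplied or
        chosen by the default rule), and the total PDF is their weighted sum.
        A minimal sketch of the weighted-summation step only."""
        total = np.zeros(grid_xy.shape[1])
        for pts, w in zip(datasets, weights):
            kde = gaussian_kde(pts, bw_method=bandwidth)
            total += w * kde(grid_xy)
        return total / sum(weights)

    rng = np.random.default_rng(1)
    vents    = rng.normal([0.0, 0.0], 0.5, size=(40, 2)).T   # past vents (hypothetical coordinates)
    fissures = rng.normal([1.0, 0.5], 0.3, size=(25, 2)).T   # mapped fissures (hypothetical)
    xs, ys = np.meshgrid(np.linspace(-2, 3, 50), np.linspace(-2, 3, 50))
    grid = np.vstack([xs.ravel(), ys.ravel()])
    pdf = combined_susceptibility([vents, fissures], weights=[0.7, 0.3], grid_xy=grid)
    print(pdf.reshape(xs.shape).max())
    ```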

  15. Risk Preferences, Probability Weighting, and Strategy Tradeoffs in Wildfire Management.

    PubMed

    Hand, Michael S; Wibbenmeyer, Matthew J; Calkin, David E; Thompson, Matthew P

    2015-10-01

    Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey-based experiment administered to federal wildfire managers. Respondents were presented with a multiattribute lottery-choice experiment where each lottery is defined by three outcome attributes: expenditures for fire suppression, damage to private property, and exposure of firefighters to the risk of aviation-related fatalities. Respondents choose one of two strategies, each of which includes "good" (low cost/low damage) and "bad" (high cost/high damage) outcomes that occur with varying probabilities. The choice task also incorporates an information framing experiment to test whether information about fatality risk to firefighters alters managers' responses to risk. Results suggest that managers exhibit risk aversion and nonlinear probability weighting, which can result in choices that do not minimize expected expenditures, property damage, or firefighter exposure. Information framing tends to result in choices that reduce the risk of aviation fatalities, but exacerbates nonlinear probability weighting. © 2015 Society for Risk Analysis.

  16. Probability weighted moments: Definition and relation to parameters of several distributions expressable in inverse form

    USGS Publications Warehouse

    Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.

    1979-01-01

    Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
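
    For concreteness, the standard unbiased sample estimator of the probability weighted moment β_r = E[X F(X)^r] can be written from the ascending order statistics, as sketched below; the function name and the Gumbel sample are illustrative assumptions.

    ```python
    import numpy as np
    from math import comb

    def prob_weighted_moment(x, r):
        """Unbiased sample estimate of the probability weighted moment
        beta_r = E[X * F(X)^r], computed from the ascending order statistics as
        b_r = (1/n) * sum_j [C(j-1, r) / C(n-1, r)] * x_(j)."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        weights = np.array([comb(j - 1, r) for j in range(1, n + 1)]) / comb(n - 1, r)
        return np.mean(weights * x)

    sample = np.random.default_rng(0).gumbel(size=500)
    print([prob_weighted_moment(sample, r) for r in range(4)])   # b_0 is just the sample mean
    ```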

  17. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for the verification bias that exists in screening or diagnostic tests, an inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R, with which the method can be implemented. Sensitivity and specificity calculated by the traditional method and by maximum likelihood estimation were compared with the results from the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95% CI: 74.23-89.93) and 85.86% (95% CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95% CI: 80.74-95.56) and 71.96% (95% CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95% CI: 63.11-92.62) and 85.80% (95% CI: 85.09-86.47), respectively, whereas they were 80.13% (95% CI: 66.81-93.46) and 85.80% (95% CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially when complex sampling is involved.
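
    A minimal sketch of the weighting step is given below: each verified subject is weighted by the inverse of its verification probability, and the weighted two-by-two counts replace the naive counts. The function name, the toy data, and the simple verification design are assumptions; this is not the Compare Tests implementation.

    ```python
    import numpy as np

    def ipw_sensitivity_specificity(test, disease, verified, p_verify):
        """Inverse-probability-weighted sensitivity and specificity when only a
        subset of screened subjects is verified by the gold standard. Each
        verified subject is weighted by 1 / P(verified); the weighted 2x2 counts
        then stand in for the full-cohort counts. p_verify may come from the
        sampling design or from a fitted verification model."""
        test, disease, verified = map(np.asarray, (test, disease, verified))
        w = np.where(verified, 1.0 / np.asarray(p_verify), 0.0)
        tp = np.sum(w * (test == 1) * (disease == 1))
        fn = np.sum(w * (test == 0) * (disease == 1))
        tn = np.sum(w * (test == 0) * (disease == 0))
        fp = np.sum(w * (test == 1) * (disease == 0))
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical design: all test-positives verified, 30% of test-negatives verified.
    test     = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
    verified = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0], dtype=bool)
    disease  = np.array([1, 1, 0, 1, -1, -1, 0, -1, -1, -1])   # -1 = unknown (not verified)
    p_verify = np.where(test == 1, 1.0, 0.3)
    print(ipw_sensitivity_specificity(test, disease, verified, p_verify))
    ```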

  18. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, and which occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solution of transport equations for the quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.

  19. Mental and Physical Health Status and Alcohol and Drug Use Following Return From Deployment to Iraq or Afghanistan

    PubMed Central

    Schultz, Mark R.; Vogt, Dawne; Glickman, Mark E.; Elwy, A. Rani; Drainoni, Mari-Lynn; Osei-Bonsu, Princess E.; Martin, James

    2012-01-01

    Objectives. We examined (1) mental and physical health symptoms and functioning in US veterans within 1 year of returning from deployment, and (2) differences by gender, service component (Active, National Guard, other Reserve), service branch (Army, Navy, Air Force, Marines), and deployment operation (Operation Enduring Freedom/Operation Iraqi Freedom [OEF/OIF]). Methods. We surveyed a national sample of 596 OEF/OIF veterans, oversampling women to make up 50% of the total, and National Guard and Reserve components to each make up 25%. Weights were applied to account for stratification and nonresponse bias. Results. Mental health functioning was significantly worse compared with the general population; 13.9% screened positive for probable posttraumatic stress disorder, 39% for probable alcohol abuse, and 3% for probable drug abuse. Men reported more alcohol and drug use than did women, but there were no gender differences in posttraumatic stress disorder or other mental health domains. OIF veterans reported more depression or functioning problems and alcohol and drug use than did OEF veterans. Army and Marine veterans reported worse mental and physical health than did Air Force or Navy veterans. Conclusions. Continuing identification of veterans at risk for mental health and substance use problems is important for evidence-based interventions intended to increase resilience and enhance treatment. PMID:22390605

  20. A quadrature based method of moments for nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin L.; Vedula, Prakash

    2011-09-01

    Fokker-Planck equations which are nonlinear with respect to their probability densities and occur in many nonequilibrium systems relevant to mean field interaction models, plasmas, fermions and bosons can be challenging to solve numerically. To address some underlying challenges, we propose the application of the direct quadrature based method of moments (DQMOM) for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations (NLFPEs). In DQMOM, probability density (or other distribution) functions are represented using a finite collection of Dirac delta functions, characterized by quadrature weights and locations (or abscissas) that are determined based on constraints due to evolution of generalized moments. Three particular examples of nonlinear Fokker-Planck equations considered in this paper include descriptions of: (i) the Shimizu-Yamada model, (ii) the Desai-Zwanzig model (both of which have been developed as models of muscular contraction) and (iii) fermions and bosons. Results based on DQMOM, for the transient and stationary solutions of the nonlinear Fokker-Planck equations, have been found to be in good agreement with other available analytical and numerical approaches. It is also shown that approximate reconstruction of the underlying probability density function from moments obtained from DQMOM can be satisfactorily achieved using a maximum entropy method.
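
    To make the delta-function representation concrete, the sketch below evaluates generalized moments from a small set of quadrature weights and abscissas; in DQMOM these weights and abscissas would be evolved so that selected moments satisfy the Fokker-Planck moment equations, a step not shown here. The example weights and abscissas are assumptions.

    ```python
    import numpy as np

    def moments_from_quadrature(weights, abscissas, orders):
        """Generalized moments of a distribution represented, as in DQMOM, by a
        finite collection of Dirac delta functions: f(x) ~ sum_k w_k * delta(x - x_k),
        so that <x^m> = sum_k w_k * x_k^m. Only the moment evaluation is shown;
        the time evolution of weights and abscissas is omitted."""
        w = np.asarray(weights, dtype=float)
        x = np.asarray(abscissas, dtype=float)
        return np.array([np.sum(w * x ** m) for m in orders])

    # Three-node representation (hypothetical weights and abscissas).
    w = [0.3, 0.5, 0.2]
    x = [-1.2, 0.1, 1.5]
    print(moments_from_quadrature(w, x, orders=range(5)))
    ```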

  1. Functional annotation by sequence-weighted structure alignments: statistical analysis and case studies from the Protein 3000 structural genomics project in Japan.

    PubMed

    Standley, Daron M; Toh, Hiroyuki; Nakamura, Haruki

    2008-09-01

    A method to functionally annotate structural genomics targets, based on a novel structural alignment scoring function, is proposed. In the proposed score, position-specific scoring matrices are used to weight structurally aligned residue pairs to highlight evolutionarily conserved motifs. The functional form of the score is first optimized for discriminating domains belonging to the same Pfam family from domains belonging to different families but the same CATH or SCOP superfamily. In the optimization stage, we consider four standard weighting functions as well as our own, the "maximum substitution probability," and combinations of these functions. The optimized score achieves an area of 0.87 under the receiver-operating characteristic curve with respect to identifying Pfam families within a sequence-unique benchmark set of domain pairs. Confidence measures are then derived from the benchmark distribution of true-positive scores. The alignment method is next applied to the task of functionally annotating 230 query proteins released to the public as part of the Protein 3000 structural genomics project in Japan. Of these queries, 78 were found to align to templates with the same Pfam family as the query or had sequence identities ≥ 30%. Another 49 queries were found to match more distantly related templates. Within this group, the template predicted by our method to be the closest functional relative was often not the most structurally similar. Several nontrivial cases are discussed in detail. Finally, 103 queries matched templates at the fold level, but not the family or superfamily level, and remain functionally uncharacterized. 2008 Wiley-Liss, Inc.

  2. From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience.

    PubMed

    Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded

    2017-07-01

    Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Activation of Antioxidative Functions by Radon Inhalation Enhances the Mitigation Effects of Pregabalin on Chronic Constriction Injury-Induced Neuropathic Pain in Mice

    PubMed Central

    Horie, Shunsuke; Etani, Reo; Kanzaki, Norie; Sasaoka, Kaori; Kobashi, Yusuke; Hanamoto, Katsumi; Yamaoka, Kiyonori

    2016-01-01

    Radon inhalation brings pain relief for chronic constriction injury- (CCI-) induced neuropathic pain in mice due to the activation of antioxidative functions, which is different from the mechanism of the pregabalin effect. In this study, we assessed whether a combination of radon inhalation and pregabalin administration is more effective against neuropathic pain than radon or pregabalin only. Mice were treated with inhaled radon at a concentration of 1,000 Bq/m3 for 24 hours and pregabalin administration after CCI surgery. In mice treated with pregabalin at a dose of 3 mg/kg weight, the 50% paw withdrawal threshold of mice treated with pregabalin or radon and pregabalin was significantly increased, suggesting pain relief. The therapeutic effects of radon inhalation or the combined effects of radon and pregabalin (3 mg/kg weight) were almost equivalent to treatment with pregabalin at a dose of 1.4 mg/kg weight or 4.1 mg/kg weight, respectively. Radon inhalation and the combination of radon and pregabalin increased antioxidant associated substances in the paw. The antioxidant substances increased much more in radon inhalation than in pregabalin administration. These findings suggested that the activation of antioxidative functions by radon inhalation enhances the pain relief of pregabalin and that this combined effect is probably an additive effect. PMID:26798431

  4. Activation of Antioxidative Functions by Radon Inhalation Enhances the Mitigation Effects of Pregabalin on Chronic Constriction Injury-Induced Neuropathic Pain in Mice.

    PubMed

    Kataoka, Takahiro; Horie, Shunsuke; Etani, Reo; Kanzaki, Norie; Sasaoka, Kaori; Kobashi, Yusuke; Hanamoto, Katsumi; Yamaoka, Kiyonori

    2016-01-01

    Radon inhalation brings pain relief for chronic constriction injury- (CCI-) induced neuropathic pain in mice due to the activation of antioxidative functions, which is different from the mechanism of the pregabalin effect. In this study, we assessed whether a combination of radon inhalation and pregabalin administration is more effective against neuropathic pain than radon or pregabalin only. Mice were treated with inhaled radon at a concentration of 1,000 Bq/m³ for 24 hours and pregabalin administration after CCI surgery. In mice treated with pregabalin at a dose of 3 mg/kg weight, the 50% paw withdrawal threshold of mice treated with pregabalin or radon and pregabalin was significantly increased, suggesting pain relief. The therapeutic effects of radon inhalation or the combined effects of radon and pregabalin (3 mg/kg weight) were almost equivalent to treatment with pregabalin at a dose of 1.4 mg/kg weight or 4.1 mg/kg weight, respectively. Radon inhalation and the combination of radon and pregabalin increased antioxidant associated substances in the paw. The antioxidant substances increased much more in radon inhalation than in pregabalin administration. These findings suggested that the activation of antioxidative functions by radon inhalation enhances the pain relief of pregabalin and that this combined effect is probably an additive effect.

  5. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary-reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.

  6. The Agility Advantage: A Survival Guide for Complex Enterprises and Endeavors

    DTIC Science & Technology

    2011-09-01

    weighted probability; 75th percentile weighted probability. 13. Carman, K. G., and Kooreman, P., "Flu Shots, Mammogram, and the Perception of Probabilities," 2010. ...sharing capabilities until users can testify to the benefits. This creates a chicken-and-egg situation, because the consumers of information ... Bibliography: Campen, Alan D. Look Closely at Network-Centric Warfare. Signal, January 2004. Carman, Katherine G., and Kooreman, Peter. Flu Shots...

  7. A note on the IQ of monozygotic twins raised apart and the order of their birth.

    PubMed

    Pencavel, J H

    1976-10-01

    This note examines James Shields' sample of monozygotic twins raised apart to entertain the hypothesis that there is a significant association between the measured IQ of these twins and the order of their birth. A non-parametric test supports this hypothesis and then a linear probability function is estimated that discriminates the effects on IQ of birth order from the effects of birth weight.

  8. Weighted Parzen Windows for Pattern Classification

    DTIC Science & Technology

    1994-05-01

    Nearest-Neighbor Rule: The k-Nearest-Neighbor (kNN) technique is nonparametric, assuming nothing about the distribution of the data. Stated succinctly... "probabilities P(ω_j | x) from samples." Raudys and Jain [20:255] advance this interpretation by pointing out that the kNN technique can be viewed as the... "Parzen window classifier with a hyper-rectangular window function." As with the Parzen-window technique, the kNN classifier is more accurate as the...
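
    The snippet above refers to the Parzen-window classifier; a minimal, unweighted version of that idea (Gaussian window, posterior by Bayes' rule) is sketched below. The function names, window width, and simulated classes are assumptions, and the report's weighted variant is not reproduced here.

    ```python
    import numpy as np

    def parzen_density(x, samples, h):
        """Parzen-window estimate of a class-conditional density with a Gaussian
        window of width h (1-D case): p(x) = (1/n) * sum_i N(x; x_i, h^2)."""
        samples = np.asarray(samples, dtype=float)
        k = np.exp(-0.5 * ((x - samples) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        return k.mean()

    def parzen_posterior(x, class_samples, priors, h):
        """Posterior P(w_j | x) from Parzen estimates of p(x | w_j) and class priors.
        Sketches the standard (unweighted) Parzen-window classifier."""
        likes = np.array([parzen_density(x, s, h) for s in class_samples])
        post = likes * np.asarray(priors)
        return post / post.sum()

    rng = np.random.default_rng(0)
    class_a = rng.normal(-1.0, 0.7, size=100)
    class_b = rng.normal(+1.2, 0.9, size=100)
    print(parzen_posterior(0.3, [class_a, class_b], priors=[0.5, 0.5], h=0.4))
    ```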

  9. Sensor Drift Compensation Algorithm based on PDF Distance Minimization

    NASA Astrophysics Data System (ADS)

    Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo

    2009-05-01

    In this paper, a new unsupervised classification algorithm is introduced for the compensation of sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating adaptive Radial Basis Function Network (RBFN) weights in the testing phase based on minimizing the Euclidean distance between two probability density functions (PDFs): one for a set of training-phase output data and another for a set of testing-phase output data. The output in the testing phase obtained using the fixed weights of the RBFN is significantly dispersed and shifted from each target value, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to be concentrated significantly closer to their own target values. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.

  10. Inferring drug-disease associations based on known protein complexes.

    PubMed

    Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin

    2015-01-01

    Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as discovering novel functions of available drugs, or drug repositioning. Previous work is primarily based on the drug-gene-disease relationship, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, in which we assign probability-based weights to the drug-disease associations. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.

  11. Inferring drug-disease associations based on known protein complexes

    PubMed Central

    2015-01-01

    Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as discovering novel functions of available drugs, or drug repositioning. Previous work is primarily based on the drug-gene-disease relationship, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, in which we assign probability-based weights to the drug-disease associations. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html. PMID:26044949

  12. Prediction of microRNAs Associated with Human Diseases Based on Weighted k Most Similar Neighbors

    PubMed Central

    Guo, Maozu; Guo, Yahong; Li, Jinbao; Ding, Jian; Liu, Yong; Dai, Qiguo; Li, Jin; Teng, Zhixia; Huang, Yufei

    2013-01-01

    Background The identification of human disease-related microRNAs (disease miRNAs) is important for further investigating their involvement in the pathogenesis of diseases. More experimentally validated miRNA-disease associations have been accumulated recently. On the basis of these associations, it is essential to predict disease miRNAs for various human diseases. It is useful in providing reliable disease miRNA candidates for subsequent experimental studies. Methodology/Principal Findings It is known that miRNAs with similar functions are often associated with similar diseases and vice versa. Therefore, the functional similarity of two miRNAs has been successfully estimated by measuring the semantic similarity of their associated diseases. To effectively predict disease miRNAs, we calculated the functional similarity by incorporating the information content of disease terms and phenotype similarity between diseases. Furthermore, members of a miRNA family or cluster are assigned higher weights, since they are more likely to be associated with similar diseases. A new prediction method, HDMP, based on weighted k most similar neighbors is presented for predicting disease miRNAs. Experiments validated that HDMP achieved significantly higher prediction performance than existing methods. In addition, the case studies examining prostatic neoplasms, breast neoplasms, and lung neoplasms showed that HDMP can uncover potential disease miRNA candidates. Conclusions The superior performance of HDMP can be attributed to the accurate measurement of miRNA functional similarity, the weight assignment based on miRNA family or cluster, and the effective prediction based on weighted k most similar neighbors. The online prediction and analysis tool is freely available at http://nclab.hit.edu.cn/hdmpred. PMID:23950912
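
    The scoring step of a weighted k-most-similar-neighbors predictor can be sketched in a few lines; this is a simplified reading of the idea, not the published HDMP implementation, and the similarity matrix, family-based weight boost, and k below are illustrative assumptions.

        import numpy as np

        def score_candidates(sim, known_assoc, family_boost, k=2):
            """Score each miRNA for one disease as the weighted, normalized sum of
            similarities to its k most functionally similar neighbors that are
            already known to be associated with that disease.

            sim          : (n, n) miRNA functional-similarity matrix
            known_assoc  : (n,) 0/1 vector; 1 = miRNA already associated with the disease
            family_boost : (n,) multiplicative weight, > 1 for members of a relevant
                           miRNA family or cluster (illustrative assumption)
            """
            n = sim.shape[0]
            scores = np.zeros(n)
            for i in range(n):
                order = np.argsort(sim[i])[::-1]        # most similar first
                neighbors = order[order != i][:k]       # drop self, keep top k
                w = sim[i, neighbors] * family_boost[neighbors]
                scores[i] = np.sum(w * known_assoc[neighbors]) / np.sum(w)
            return scores

        # Tiny example with four miRNAs, two of which are known disease miRNAs
        sim = np.array([[1.0, 0.8, 0.2, 0.1],
                        [0.8, 1.0, 0.3, 0.2],
                        [0.2, 0.3, 1.0, 0.7],
                        [0.1, 0.2, 0.7, 1.0]])
        known = np.array([0, 1, 0, 1])
        boost = np.array([1.0, 1.5, 1.0, 1.0])
        print(score_candidates(sim, known, boost, k=2))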

  13. Possible hypoglycemic effect of Aloe vera L. high molecular weight fractions on type 2 diabetic patients

    PubMed Central

    Yagi, Akira; Hegazy, Sahar; Kabbash, Amal; Wahab, Engy Abd-El

    2009-01-01

    Aloe vera L. high molecular weight fractions (AHM) containing less than 10 ppm of barbaloin and polysaccharide (MW: 1000 kDa) with the glycoprotein verectin (MW: 29 kDa) were prepared by a patented hyper-dry system combining a freeze-dry technique with microwave and far-infrared radiation. AHM produced a significant decrease in blood glucose level that was sustained for 6 weeks from the start of the study. A significant decrease in triglycerides was observed only 4 weeks after treatment and continued thereafter. No deleterious effects on kidney and liver functions were apparent. Treatment of diabetic patients with AHM may relieve vascular complications, probably via activation of the immune system. PMID:23964163

  14. Risk preferences, probability weighting, and strategy tradeoffs in wildfire management

    Treesearch

    Michael S. Hand; Matthew J. Wibbenmeyer; Dave Calkin; Matthew P. Thompson

    2015-01-01

    Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey-based experiment administered to...

  15. Estimating the risk of Amazonian forest dieback.

    PubMed

    Rammig, Anja; Jupp, Tim; Thonicke, Kirsten; Tietjen, Britta; Heinke, Jens; Ostberg, Sebastian; Lucht, Wolfgang; Cramer, Wolfgang; Cox, Peter

    2010-08-01

    Climate change will very likely affect most forests in Amazonia during the course of the 21st century, but the direction and intensity of the change are uncertain, in part because of differences in rainfall projections. In order to constrain this uncertainty, we estimate the probability for biomass change in Amazonia on the basis of rainfall projections that are weighted by climate model performance for current conditions. We estimate the risk of forest dieback by using weighted rainfall projections from 24 general circulation models (GCMs) to create probability density functions (PDFs) for future forest biomass changes simulated by a dynamic vegetation model (LPJmL). Our probabilistic assessment of biomass change suggests a likely shift towards increasing biomass compared with nonweighted results. Biomass estimates range between a gain of 6.2 and a loss of 2.7 kg carbon m(-2) for the Amazon region, depending on the strength of CO(2) fertilization. The uncertainty associated with the long-term effect of CO(2) is much larger than that associated with precipitation change. This underlines the importance of reducing uncertainties in the direct effects of CO(2) on tropical ecosystems.

  16. Large margin nearest neighbor classifiers.

    PubMed

    Domeniconi, Carlotta; Gunopulos, Dimitrios; Peng, Jing

    2005-07-01

    The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data.

  17. Factors Influencing Ball-Player Impact Probability in Youth Baseball

    PubMed Central

    Matta, Philip A.; Myers, Joseph B.; Sawicki, Gregory S.

    2015-01-01

    Background: Altering the weight of baseballs for youth play has been studied out of concern for player safety. Research has shown that decreasing the weight of baseballs may limit the severity of both chronic arm and collision injuries. Unfortunately, reducing the weight of the ball also increases its exit velocity, leaving pitchers and nonpitchers with less time to defend themselves. The purpose of this study was to examine impact probability for pitchers and nonpitchers. Hypothesis: Reducing the available time to respond by 10% (expected from reducing ball weight from 142 g to 113 g) would increase impact probability for pitchers and nonpitchers, and players’ mean simple response time would be a primary predictor of impact probability for all participants. Study Design: Nineteen subjects between the ages of 9 and 13 years performed 3 experiments in a controlled laboratory setting: a simple response time test, an avoidance response time test, and a pitching response time test. Methods: Each subject performed these tests in order. The simple reaction time test tested the subjects’ mean simple response time, the avoidance reaction time test tested the subjects’ ability to avoid a simulated batted ball as a fielder, and the pitching reaction time test tested the subjects’ ability to avoid a simulated batted ball as a pitcher. Results: Reducing the weight of a standard baseball from 142 g to 113 g led to a less than 5% increase in impact probability for nonpitchers. However, the results indicate that the impact probability for pitchers could increase by more than 25%. Conclusion: Pitching may greatly increase the amount of time needed to react and defend oneself from a batted ball. Clinical Relevance: Impact injuries to youth baseball players may increase if a 113-g ball is used. PMID:25984261

  18. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

    PubMed

    Willson, Stephen J

    2012-01-01

    A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.

  19. Coding of level of ambiguity within neural systems mediating choice.

    PubMed

    Lopez-Paniagua, Dan; Seger, Carol A

    2013-01-01

    Data from previous neuroimaging studies exploring neural activity associated with uncertainty suggest varying levels of activation associated with changing degrees of uncertainty in neural regions that mediate choice behavior. The present study used a novel task that parametrically controlled the amount of information hidden from the subject; levels of uncertainty ranged from full ambiguity (no information about probability of winning) through multiple levels of partial ambiguity, to a condition of risk only (zero ambiguity with full knowledge of the probability of winning). A parametric analysis compared a linear model in which weighting increased as a function of level of ambiguity, and an inverted-U quadratic model in which partial ambiguity conditions were weighted most heavily. Overall we found that risk and all levels of ambiguity recruited a common "fronto-parietal-striatal" network including regions within the dorsolateral prefrontal cortex, intraparietal sulcus, and dorsal striatum. Activation was greatest across these regions and additional anterior and superior prefrontal regions for the quadratic function, which most heavily weights trials with partial ambiguity. These results suggest that the neural regions involved in decision processes do not merely track the absolute degree of ambiguity or the type of uncertainty (risk vs. ambiguity). Instead, recruitment of prefrontal regions may result from the greater degree of difficulty in conditions of partial ambiguity: when information regarding reward probabilities important for decision making is hidden or not easily obtained, the subject must engage in a search for tractable information. Additionally, this study identified regions of activity related to the valuation of potential gains associated with stimuli or options (including the orbitofrontal and medial prefrontal cortices and dorsal striatum) and related to winning (including orbitofrontal cortex and ventral striatum).

  20. Coding of level of ambiguity within neural systems mediating choice

    PubMed Central

    Lopez-Paniagua, Dan; Seger, Carol A.

    2013-01-01

    Data from previous neuroimaging studies exploring neural activity associated with uncertainty suggest varying levels of activation associated with changing degrees of uncertainty in neural regions that mediate choice behavior. The present study used a novel task that parametrically controlled the amount of information hidden from the subject; levels of uncertainty ranged from full ambiguity (no information about probability of winning) through multiple levels of partial ambiguity, to a condition of risk only (zero ambiguity with full knowledge of the probability of winning). A parametric analysis compared a linear model in which weighting increased as a function of level of ambiguity, and an inverted-U quadratic model in which partial ambiguity conditions were weighted most heavily. Overall we found that risk and all levels of ambiguity recruited a common “fronto-parietal-striatal” network including regions within the dorsolateral prefrontal cortex, intraparietal sulcus, and dorsal striatum. Activation was greatest across these regions and additional anterior and superior prefrontal regions for the quadratic function, which most heavily weights trials with partial ambiguity. These results suggest that the neural regions involved in decision processes do not merely track the absolute degree of ambiguity or the type of uncertainty (risk vs. ambiguity). Instead, recruitment of prefrontal regions may result from the greater degree of difficulty in conditions of partial ambiguity: when information regarding reward probabilities important for decision making is hidden or not easily obtained, the subject must engage in a search for tractable information. Additionally, this study identified regions of activity related to the valuation of potential gains associated with stimuli or options (including the orbitofrontal and medial prefrontal cortices and dorsal striatum) and related to winning (including orbitofrontal cortex and ventral striatum). PMID:24367286

  1. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data, due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
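
    The component-selection step can be illustrated with a direct maximum-likelihood fit of Weibull mixtures scored by AIC and BIC; this is a minimal sketch on synthetic data using SciPy, not the estimation code used in the paper, and the sample, starting values, and optimizer settings are assumptions.

        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(1)
        # Synthetic sample mimicking two aggregated wind power regimes (illustrative only)
        data = np.concatenate([
            stats.weibull_min.rvs(1.8, scale=4.0, size=400, random_state=rng),
            stats.weibull_min.rvs(4.0, scale=9.0, size=300, random_state=rng)])

        def neg_log_lik(theta, x, m):
            """Negative log-likelihood of an m-component Weibull mixture; theta packs
            (m - 1) mixture logits, m log-shapes and m log-scales."""
            logits = np.concatenate([theta[:m - 1], [0.0]])
            w = np.exp(logits) / np.exp(logits).sum()
            shapes = np.exp(theta[m - 1:2 * m - 1])
            scales = np.exp(theta[2 * m - 1:])
            pdf = sum(w[j] * stats.weibull_min.pdf(x, shapes[j], scale=scales[j])
                      for j in range(m))
            return -np.sum(np.log(pdf + 1e-300))

        for m in (1, 2, 3):
            x0 = np.concatenate([np.zeros(m - 1), np.zeros(m),
                                 np.log(np.quantile(data, np.linspace(0.3, 0.8, m)))])
            res = optimize.minimize(neg_log_lik, x0, args=(data, m), method="Nelder-Mead",
                                    options={"maxiter": 20000})
            k = 3 * m - 1                                  # number of free parameters
            aic = 2 * k + 2 * res.fun
            bic = k * np.log(len(data)) + 2 * res.fun
            print(f"{m} component(s): AIC = {aic:.1f}, BIC = {bic:.1f}")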

  2. Design and Weighting Methods for a Nationally Representative Sample of HIV-infected Adults Receiving Medical Care in the United States-Medical Monitoring Project

    PubMed Central

    Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek

    2016-01-01

    Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
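
    In generic terms (a toy illustration, not MMP's production weighting code), the final analysis weight for a sampled patient in such a three-stage design is the product of the inverse selection probabilities at each stage, further adjusted for nonresponse at the facility and patient levels and for multiplicity; all numbers below are invented.

        # Toy final-weight calculation for one sampled patient (all values illustrative)
        p_jurisdiction = 0.46       # selection probability of the patient's jurisdiction
        p_facility     = 0.10       # selection probability of the facility within the jurisdiction
        p_patient      = 0.05       # selection probability of the patient within the facility

        base_weight = 1.0 / (p_jurisdiction * p_facility * p_patient)

        facility_response_rate = 0.85   # nonresponse adjustment at the facility level
        patient_response_rate  = 0.75   # nonresponse adjustment at the patient level
        n_care_facilities      = 2      # multiplicity: patient receives care at two facilities

        final_weight = base_weight / facility_response_rate / patient_response_rate / n_care_facilities
        print(round(final_weight, 1))   # number of in-care adults this record represents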

  3. Weighted networks as randomly reinforced urn processes

    NASA Astrophysics Data System (ADS)

    Caldarelli, Guido; Chessa, Alessandro; Crimaldi, Irene; Pammolli, Fabio

    2013-02-01

    We analyze weighted networks as randomly reinforced urn processes, in which the edge-total weights are determined by a reinforcement mechanism. We develop a statistical test and a procedure based on it to study the evolution of networks over time, detecting the “dominance” of some edges with respect to the others and then assessing if a given instance of the network is taken at its steady state or not. Distance from the steady state can be considered as a measure of the relevance of the observed properties of the network. Our results are quite general, in the sense that they are not based on a particular probability distribution or functional form of the random weights. Moreover, the proposed tool can be applied also to dense networks, which have received little attention by the network community so far, since they are often problematic. We apply our procedure in the context of the International Trade Network, determining a core of “dominant edges.”

  4. A new general method for the assessment of the molecular-weight distribution of polydisperse preparations. Its application to an intestinal epithelial glycoprotein and two dextran samples, and comparison with a monodisperse glycoprotein

    PubMed Central

    Gibbons, Richard A.; Dixon, Stephen N.; Pocock, David H.

    1973-01-01

    A specimen of intestinal glycoprotein isolated from the pig and two samples of dextran, all of which are polydisperse (that is, the preparations may be regarded as consisting of a continuous distribution of molecular weights), have been examined in the ultracentrifuge under meniscus-depletion conditions at equilibrium. They are compared with each other and with a glycoprotein from Cysticercus tenuicollis cyst fluid which is almost monodisperse. The quantity c^(-1/3) (c = concentration) is plotted against ξ (the reduced radius); this plot is linear when the molecular-weight distribution approximates to the `most probable', i.e. when Mn : Mw : Mz : M(z+1) : ... is as 1 : 2 : 3 : 4 : etc. The use of this plot, and related procedures, to evaluate qualitatively and semi-quantitatively molecular-weight distribution functions where they can be realistically approximated to Schulz distributions is discussed. The theoretical basis is given in an Appendix. PMID:4778265

  5. Doppler Temperature Coefficient Calculations Using Adjoint-Weighted Tallies and Continuous Energy Cross Sections in MCNP6

    NASA Astrophysics Data System (ADS)

    Gonzales, Matthew Alejandro

    The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first order perturbation theory in continuous energy Monte Carlo codes is challenging as the continuous energy adjoint flux is not readily available. Traditional approaches of obtaining the adjoint flux attempt to invert the random walk process as well as require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross section database uses a high order functional expansion between points on a user-defined energy-temperature mesh in which the coefficients with respect to a polynomial fitting in temperature are stored. The coefficients of the fits are generated before run-time and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows the possibility of obtaining temperature derivatives of the cross sections on-the-fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, changes in the probability density functions, as well as changes in the density of the materials. The focus of this work is specific to the Doppler temperature feedback which results from Doppler broadening of cross sections as well as changes in the probability density function within the scattering kernel. This method is compared against published results using Mosteller's numerical benchmark to show accurate evaluations of the Doppler temperature coefficient, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering. An infinite medium benchmark for neutron free gas elastic scattering for large scattering ratios and constant absorption cross section has been developed using the heavy gas model. An exact closed form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free gas scattering model in MCNP6. Results show a quick increase in convergence of the analytic energy spectrum to the MCNP6 code with increasing target size, showing absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate piecewise constant in energy absorption cross section to produce temperature feedback.
Results reinforce the constraints in which heavy gas theory may be applied resulting in a significant target size to accommodate increasing cross section structure. The energy dependent piecewise constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient to show accurate calculations when using the adjoint-weighted method. Results show the Doppler temperature coefficient using adjoint weighting and cross section derivatives accurately obtains the correct solution within statistics as well as reduce computer runtimes by a factor of 50.
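
    The "polynomial fit in temperature" idea behind the OTF representation can be illustrated with numpy's polynomial tools; the cross-section values below are invented, and this sketch is not the OTF library itself, only a picture of how a stored fit yields both the cross section and its temperature derivative at run time.

        import numpy as np
        from numpy.polynomial import Polynomial

        # Invented Doppler-broadened cross-section values (barns) at a few temperatures (K)
        T = np.array([300.0, 600.0, 900.0, 1200.0, 1500.0])
        sigma = np.array([38.2, 36.9, 35.8, 34.9, 34.1])

        # Fit a low-order polynomial in temperature, as an OTF-style stored representation
        fit = Polynomial.fit(T, sigma, deg=3)

        # Evaluate the cross section and its temperature derivative at an arbitrary temperature
        T_query = 1000.0
        print("sigma(T)      =", fit(T_query))
        print("d sigma / dT  =", fit.deriv()(T_query))   # ingredient of a Doppler-coefficient estimate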

  6. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  7. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions, where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
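
    Evaluating a two-component predictive density that mixes a zero-truncated normal with a log-normal is straightforward with SciPy; the sketch below only illustrates the mixture form, and the weight and distribution parameters are placeholders rather than fitted EMOS coefficients.

        import numpy as np
        from scipy import stats

        def mixture_pdf(x, w, mu_tn, sigma_tn, mu_ln, sigma_ln):
            """Predictive density: w * TN(mu_tn, sigma_tn; truncated below at 0)
            + (1 - w) * LN(mu_ln, sigma_ln)."""
            a = (0.0 - mu_tn) / sigma_tn      # lower truncation at 0 m/s, no upper bound
            tn = stats.truncnorm.pdf(x, a, np.inf, loc=mu_tn, scale=sigma_tn)
            ln = stats.lognorm.pdf(x, s=sigma_ln, scale=np.exp(mu_ln))
            return w * tn + (1.0 - w) * ln

        x = np.linspace(0.0, 25.0, 6)         # wind speeds in m/s
        print(mixture_pdf(x, w=0.6, mu_tn=7.0, sigma_tn=2.5, mu_ln=2.0, sigma_ln=0.4))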

  8. Ratio-of-Mediator-Probability Weighting for Causal Mediation Analysis in the Presence of Treatment-by-Mediator Interaction

    ERIC Educational Resources Information Center

    Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.

    2015-01-01

    Conventional methods for mediation analysis generate biased results when the mediator-outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…

  9. Ratio-of-Mediator-Probability Weighting for Causal Mediation Analysis in the Presence of Treatment-by-Mediator Interaction

    ERIC Educational Resources Information Center

    Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.

    2015-01-01

    Conventional methods for mediation analysis generate biased results when the mediator--outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…

  10. Prospect theory on the brain? Toward a cognitive neuroscience of decision under risk.

    PubMed

    Trepel, Christopher; Fox, Craig R; Poldrack, Russell A

    2005-04-01

    Most decisions must be made without advance knowledge of their consequences. Economists and psychologists have devoted much attention to modeling decisions made under conditions of risk in which options can be characterized by a known probability distribution over possible outcomes. The descriptive shortcomings of classical economic models motivated the development of prospect theory (D. Kahneman, A. Tversky, Prospect theory: An analysis of decision under risk. Econometrica, 47 (1979) 263-291; A. Tversky, D. Kahneman, Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5 (4) (1992) 297-323), the most successful behavioral model of decision under risk. In prospect theory, subjective value is modeled by a value function that is concave for gains, convex for losses, and steeper for losses than for gains; the impact of probabilities is characterized by a weighting function that overweights low probabilities and underweights moderate to high probabilities. We outline the possible neural bases of the components of prospect theory, surveying evidence from human imaging, lesion, and neuropharmacology studies as well as animal neurophysiology studies. These results provide preliminary suggestions concerning the neural bases of prospect theory that include a broad set of brain regions and neuromodulatory systems. These data suggest that focused studies of decision making in the context of quantitative models may provide substantial leverage towards a fuller understanding of the cognitive neuroscience of decision making.
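
    For concreteness, the two components described here can be written down directly. The sketch below uses the Tversky-Kahneman (1992) functional forms with their commonly cited parameter estimates and a deliberately simplified (non-rank-dependent) evaluation of a two-outcome gamble; the parameter values and the example gamble are illustrative, not results of this paper.

        import numpy as np

        def value(x, alpha=0.88, lam=2.25):
            # Value function: concave for gains, convex and steeper (loss aversion) for losses
            x = np.asarray(x, dtype=float)
            return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

        def weight(p, gamma=0.61):
            # Inverse-S weighting: overweights small p, underweights moderate-to-high p
            return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

        # Simplified subjective value of a gamble: 50% chance to win 100, 50% chance to lose 50
        print(weight(0.5) * value(100.0) + weight(0.5) * value(-50.0))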

  11. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
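
    The central step, a weighted kernel density estimate over observed treatment effects, can be sketched with SciPy's gaussian_kde, which accepts per-observation weights (SciPy 1.2 or later); the effect values, weights, and the "large effect" threshold below are synthetic and purely illustrative.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(42)

        # Synthetic observed treatment effects (e.g., log hazard ratios) and adaptive weights
        effects = rng.normal(loc=0.0, scale=0.3, size=500)
        weights = rng.uniform(0.5, 2.0, size=500)

        kde = gaussian_kde(effects, weights=weights)

        # Probability mass beyond an illustrative "large beneficial effect" threshold
        grid = np.linspace(-2.0, 2.0, 2001)
        dens = kde(grid)
        mask = grid <= -0.5
        p_large = dens[mask].sum() * (grid[1] - grid[0])
        print(f"Estimated probability of a large beneficial effect: {p_large:.3f}")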

  12. Economic weights of somatic cell score in dairy sheep.

    PubMed

    Legarra, A; Ramón, M; Ugarte, E; Pérez-Guzmán, M D; Arranz, J

    2007-03-01

    The economic weights for somatic cell score (SCS) have been calculated using profit functions. Economic data were collected in the Latxa breed. Three aspects have been considered: bulk tank milk payment, veterinary treatments due to high SCS, and culling. All of them are non-linear profit functions. Milk payment is based on the sum of the log-normal distributions of somatic cell count, and veterinary treatments on the probability of subclinical mastitis, which is inferred when individual SCS surpass some threshold. Both functions lead to non-standard distributions. The derivatives of the profit function were computed numerically. Culling was computed by assuming that a conceptual trait culled by mastitis (CBM) is genetically correlated to SCS. The economic weight considers the increase in the breeding value of CBM correlated to an increase in the breeding value of SCS, assuming genetic correlations ranging from 0 to 0.9. The relevance of the economic weights for selection purposes was checked by the estimation of genetic gains for milk yield and SCS under several scenarios of genetic parameters and economic weights. The overall economic weights for SCS range from -2.6 to -9.5 € per point of SCS, with an average of -4 € per point of SCS, depending on the expected average SCS of the flock. The economic weight is higher around the thresholds for payment policies. Economic weights did not change greatly with other assumptions. The estimated genetic gains with economic weights of 0.83 € per l of milk yield and -4 € per point of SCS, assuming a genetic correlation of -0.30, were 3.85 l and -0.031 SCS per year, with an associated increase in profit of 3.32 €. This represents a very small increase in profit (about 1%) relative to selecting only for milk yield. Other situations (increased economic weights, different genetic correlations) produced similar genetic gains and changes in profit. A desired-gains index reduced the increase in profit by 3%, although it could be greater depending on the genetic parameters. It is concluded that the inclusion of SCS in dairy sheep breeding programs is of low economic relevance and recommended only if recording is inexpensive or for animal welfare concerns.

  13. Biased and greedy random walks on two-dimensional lattices with quenched randomness: The greedy ant within a disordered environment

    NASA Astrophysics Data System (ADS)

    Mitran, T. L.; Melchert, O.; Hartmann, A. K.

    2013-12-01

    The main characteristics of biased greedy random walks (BGRWs) on two-dimensional lattices with real-valued quenched disorder on the lattice edges are studied. Here the disorder allows for negative edge weights. In previous studies, considering the negative-weight percolation (NWP) problem, this was shown to change the universality class of the existing, static percolation transition. In the presented study, four different types of BGRWs and an algorithm based on the ant colony optimization heuristic were considered. Regarding the BGRWs, the precise configurations of the lattice walks constructed during the numerical simulations were influenced by two parameters: a disorder parameter ρ that controls the amount of negative edge weights on the lattice and a bias strength B that governs the drift of the walkers along a certain lattice direction. The random walks are “greedy” in the sense that the local optimal choice of the walker is to preferentially traverse edges with a negative weight (associated with a net gain of “energy” for the walker). Here, the pivotal observable is the probability that, after termination, a lattice walk exhibits a total negative weight, which is here considered as percolating. The behavior of this observable as function of ρ for different bias strengths B is put under scrutiny. Upon tuning ρ, the probability to find such a feasible lattice walk increases from zero to 1. This is the key feature of the percolation transition in the NWP model. Here, we address the question how well the transition point ρc, resulting from numerically exact and “static” simulations in terms of the NWP model, can be resolved using simple dynamic algorithms that have only local information available, one of the basic questions in the physics of glassy systems.

  14. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons

    PubMed Central

    Setareh, Hesam; Deger, Moritz; Petersen, Carl C. H.; Gerstner, Wulfram

    2017-01-01

    Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly. PMID:28690508

  15. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  16. Southern pine beetle infestation probability mapping using weights of evidence analysis

    Treesearch

    Jason B. Grogan; David L. Kulhavy; James C. Kroll

    2010-01-01

    Weights of Evidence (WofE) spatial analysis was used to predict probability of southern pine beetle (Dendroctonus frontalis) (SPB) infestation in Angelina, Nacogdoches, San Augustine and Shelby Co., TX. Thematic data derived from Landsat imagery (1974–2002 Landsat 1–7) were used. Data layers included: forest covertype, forest age, forest patch size...
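
    For a single binary evidence layer, the weights of evidence reduce to log-ratios of conditional probabilities; a toy calculation with invented cell counts (not data from this study) is shown below.

        import math

        # Invented contingency counts for one evidence layer versus SPB infestation cells
        n_evid_event     = 120    # cells with the evidence and with infestation
        n_evid_noevent   = 400    # cells with the evidence, without infestation
        n_noevid_event   = 80     # cells without the evidence, with infestation
        n_noevid_noevent = 2400   # cells without the evidence, without infestation

        p_e_given_d   = n_evid_event / (n_evid_event + n_noevid_event)
        p_e_given_nd  = n_evid_noevent / (n_evid_noevent + n_noevid_noevent)
        p_ne_given_d  = 1.0 - p_e_given_d
        p_ne_given_nd = 1.0 - p_e_given_nd

        w_plus  = math.log(p_e_given_d / p_e_given_nd)     # weight where the evidence is present
        w_minus = math.log(p_ne_given_d / p_ne_given_nd)   # weight where the evidence is absent
        print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast C = {w_plus - w_minus:.2f}")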

  17. A global logrank test for adaptive treatment strategies based on observational studies.

    PubMed

    Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara

    2014-02-28

    In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be pretty robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.

  18. A reliability-based cost effective fail-safe design procedure

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1976-01-01

    The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading has been discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function has been illustrated by examples. In particular, optimum design of stiffened panel has been discussed.

  19. Effects of Vertex Activity and Self-organized Criticality Behavior on a Weighted Evolving Network

    NASA Astrophysics Data System (ADS)

    Zhang, Gui-Qing; Yang, Qiu-Ying; Chen, Tian-Lun

    2008-08-01

    Effects of vertex activity have been analyzed on a weighted evolving network. The network is characterized by the probability distribution of vertex strength, each edge weight and evolution of the strength of vertices with different vertex activities. The model exhibits self-organized criticality behavior. The probability distribution of avalanche size for different network sizes is also shown. In addition, there is a power law relation between the size and the duration of an avalanche and the average of avalanche size has been studied for different vertex activities.

  20. A fuzzy call admission control scheme in wireless networks

    NASA Astrophysics Data System (ADS)

    Ma, Yufeng; Gong, Shenguang; Hu, Xiulin; Zhang, Yunyu

    2007-11-01

    Scarcity of the spectrum resource and mobility of users make quality of service (QoS) provision a critical issue in wireless networks. This paper presents a fuzzy call admission control scheme to meet the requirement of the QoS. A performance measure is formed as a weighted linear function of new call and handoff call blocking probabilities. Simulation compares the proposed fuzzy scheme with an adaptive channel reservation scheme. Simulation results show that fuzzy scheme has a better robust performance in terms of average blocking criterion.
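
    The performance measure itself, a weighted linear combination of the two blocking probabilities, is simple to compute; the weights below are illustrative (handoff failures are typically penalized more heavily), not the values used in the paper.

        def blocking_cost(p_block_new, p_block_handoff, w_new=1.0, w_handoff=10.0):
            # Weighted linear performance measure over new-call and handoff-call blocking
            return w_new * p_block_new + w_handoff * p_block_handoff

        print(blocking_cost(p_block_new=0.05, p_block_handoff=0.01))   # -> 0.15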

  1. Advances in modeling trait-based plant community assembly.

    PubMed

    Laughlin, Daniel C; Laughlin, David E

    2013-10-01

    In this review, we examine two new trait-based models of community assembly that predict the relative abundance of species from a regional species pool. The models use fundamentally different mathematical approaches and the predictions can differ considerably. Maxent obtains the most even probability distribution subject to community-weighted mean trait constraints. Traitspace predicts low probabilities for any species whose trait distribution does not pass through the environmental filter. Neither model maximizes functional diversity because of the emphasis on environmental filtering over limiting similarity. Traitspace can test for the effects of limiting similarity by explicitly incorporating intraspecific trait variation. The range of solutions in both models could be used to define the range of natural variability of community composition in restoration projects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Body weight, metabolism and clock genes

    PubMed Central

    2010-01-01

    Biological rhythms are present in the lives of almost all organisms ranging from plants to more evolved creatures. These oscillations allow the anticipation of many physiological and behavioral mechanisms, thus enabling timely coordination of rhythms, adaptation to environmental changes, and more efficient organization of the cellular processes responsible for the survival of both the individual and the species. Many components of energy homeostasis exhibit circadian rhythms, which are regulated by central (suprachiasmatic nucleus) and peripheral (located in other tissues) circadian clocks. The adipocyte plays an important role in the regulation of energy homeostasis, the signaling of satiety, and cellular differentiation and proliferation. Also, the adipocyte circadian clock is probably involved in the control of many of these functions. Thus, circadian clocks are implicated in the control of energy balance, feeding behavior and, consequently, the regulation of body weight. In this regard, alterations in clock genes and rhythms can interfere with the complex mechanism of metabolic and hormonal anticipation, contributing to multifactorial diseases such as obesity and diabetes. The aim of this review was to define circadian clocks by describing their functioning and role in the whole body and in adipocyte metabolism, as well as their influence on body weight control and the development of obesity. PMID:20712885

  3. An integrated quality function deployment and capital budgeting methodology for occupational safety and health as a systems thinking approach: the case of the construction industry.

    PubMed

    Bas, Esra

    2014-07-01

    In this paper, an integrated methodology for Quality Function Deployment (QFD) and a 0-1 knapsack model is proposed for occupational safety and health as a systems thinking approach. The House of Quality (HoQ) in QFD methodology is a systematic tool to consider the inter-relationships between two factors. In this paper, three HoQs are used to consider the interrelationships between tasks and hazards, hazards and events, and events and preventive/protective measures. The final priority weights of events are defined by considering their project-specific preliminary weights, probability of occurrence, and effects on the victim and the company. The priority weights of the preventive/protective measures obtained in the last HoQ are fed into a 0-1 knapsack model for the investment decision. Then, the selected preventive/protective measures can be adapted to the task design. The proposed step-by-step methodology can be applied to any stage of a project to design the workplace for occupational safety and health, and continuous improvement for safety is endorsed by the closed loop characteristic of the integrated methodology. Copyright © 2013 Elsevier Ltd. All rights reserved.
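
    The investment step, choosing the subset of preventive/protective measures that maximizes total priority weight within a budget, is a standard 0-1 knapsack; the compact dynamic-programming sketch below uses invented priority weights, costs, and budget, not figures from the paper.

        def knapsack(priority, cost, budget):
            """0-1 knapsack: select measures maximizing total priority weight within budget.
            Costs and budget are integers (e.g., thousands of currency units)."""
            n = len(priority)
            best = [[0.0] * (budget + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for b in range(budget + 1):
                    best[i][b] = best[i - 1][b]
                    if cost[i - 1] <= b:
                        best[i][b] = max(best[i][b],
                                         best[i - 1][b - cost[i - 1]] + priority[i - 1])
            chosen, b = [], budget              # recover which measures were selected
            for i in range(n, 0, -1):
                if best[i][b] != best[i - 1][b]:
                    chosen.append(i - 1)
                    b -= cost[i - 1]
            return best[n][budget], sorted(chosen)

        # Illustrative priority weights from the last HoQ and illustrative measure costs
        priority = [0.32, 0.25, 0.18, 0.15, 0.10]
        cost = [40, 25, 20, 30, 10]
        print(knapsack(priority, cost, budget=70))   # approx. (0.60, [0, 2, 4])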

  4. Is Weight Training Safe during Pregnancy?

    ERIC Educational Resources Information Center

    Work, Janis A.

    1989-01-01

    Examines the opinions of several experts on the safety of weight training during pregnancy, noting that no definitive research on weight training alone has been done. Experts agree that low-intensity weight training probably poses no harm for mother or fetus; exercise programs should be individualized. (SM)

  5. Genetic evaluation of weaning weight and probability of lambing at 1 year of age in Targhee lambs

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to investigate genetic control of 120-day weaning weight and the probability of lambing at 1 year of age in Targhee ewe lambs. Records of 5,967 ewe lambs born from 1989 to 2012 and first exposed to rams for breeding at approximately 7 months of age were analyzed. Reco...

  6. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  7. Quantum and classical dynamics of water dissociation on Ni(111): A test of the site-averaging model in dissociative chemisorption of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bin; Department of Chemical Physics, University of Science and Technology of China, Hefei 230026; Guo, Hua, E-mail: hguo@unm.edu

    Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES instead of the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insights into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed and important dynamical factors such as the steering effect are discussed.

  8. The probability of misassociation between neighboring targets

    NASA Astrophysics Data System (ADS)

    Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald

    2008-04-01

    This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular - covariance weighted - norm of the difference between the targets' predicted measurements. For the Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global vs. Nearest Neighbor association. These results are applied to problems of single sensor as well as centralized fusion architecture multiple sensor tracking.
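
    The qualitative effect, misassociation becoming less likely as the covariance-weighted separation of the predicted measurements grows, can be checked with a small Monte Carlo experiment; this toy simulation (2-D, equal, isotropic covariances, invented separations) only illustrates the setting and is not the paper's analytical derivation.

        import numpy as np

        rng = np.random.default_rng(7)

        def nn_misassociation_rate(separation, sigma=1.0, n_trials=100_000):
            """Fraction of trials in which the extraneous target's measurement falls closer
            to the target of interest's predicted position than the true measurement does
            (nearest-neighbor misassociation with equal innovation covariances)."""
            pred_true = np.zeros(2)
            pred_other = np.array([separation, 0.0])
            z_true = pred_true + sigma * rng.standard_normal((n_trials, 2))
            z_other = pred_other + sigma * rng.standard_normal((n_trials, 2))
            d_true = np.sum((z_true - pred_true) ** 2, axis=1)
            d_other = np.sum((z_other - pred_true) ** 2, axis=1)
            return np.mean(d_other < d_true)

        for sep in (1.0, 2.0, 4.0):
            print(sep, nn_misassociation_rate(sep))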

  9. On the number of infinite geodesics and ground states in disordered systems

    NASA Astrophysics Data System (ADS)

    Wehr, Jan

    1997-04-01

    We study first-passage percolation models and their higher dimensional analogs—models of surfaces with random weights. We prove that under very general conditions the number of lines or, in the second case, hypersurfaces which locally minimize the sum of the random weights is with probability one equal to 0 or with probability one equal to +∞. As corollaries we show that in any dimension d≥2 the number of ground states of an Ising ferromagnet with random coupling constants equals (with probability one) 2 or +∞. Proofs employ simple large-deviation estimates and ergodic arguments.

  10. Topology and weights in a protein domain interaction network--a novel way to predict protein interactions.

    PubMed

    Wuchty, Stefan

    2006-05-23

    While the analysis of unweighted biological webs as diverse as genetic, protein and metabolic networks allowed spectacular insights into the inner workings of a cell, biological networks are not determined by their static grid of links alone. In fact, we expect that the heterogeneity in the utilization of connections has a major impact on the organization of cellular activities as well. We consider a web of interactions between protein domains of the Protein Family database (PFAM), which are weighted by a probability score. We apply metrics that combine the static layout and the weights of the underlying interactions. We observe that unweighted measures as well as their weighted counterparts largely share the same trends in the underlying domain interaction network. However, we only find weak signals that the weights and the static grid of interactions are connected entities. Therefore, assuming that a protein interaction is governed by a single domain interaction, we observe strong and significant correlations between the highest-scoring domain interaction and the confidence of protein interactions in the underlying yeast and fly interaction data. Modeling an interaction between proteins whenever we find a high-scoring protein domain interaction, we obtain 1,428 protein interactions among 361 proteins in the human malaria parasite Plasmodium falciparum. Assessing their quality by a logistic regression method, we observe that increasing confidence of predicted interactions is accompanied by high-scoring domain interactions and elevated levels of functional similarity and evolutionary conservation. Our results indicate that probability scores are randomly distributed, allowing us to treat the static grid and the weights of domain interactions as separate entities. In particular, these findings confirm earlier observations that a protein interaction is a matter of a single interaction event on the domain level. As an immediate application, we show a simple way to predict potential protein interactions by utilizing expectation scores of single domain interactions.
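
    The prediction rule sketched above, calling a protein pair an interaction when its best domain-domain score is high, takes only a few lines; the domain assignments, scores, and threshold below are invented for illustration and do not come from PFAM or from this study.

        from itertools import product

        # Invented domain-domain interaction probability scores
        domain_score = {("PF00001", "PF00010"): 0.92,
                        ("PF00002", "PF00010"): 0.40,
                        ("PF00002", "PF00020"): 0.75}

        # Invented domain content of three proteins
        protein_domains = {"P1": ["PF00001", "PF00002"],
                           "P2": ["PF00010"],
                           "P3": ["PF00020"]}

        def best_domain_score(p_a, p_b):
            """Highest-scoring domain pair shared between two proteins (0.0 if none known)."""
            scores = [domain_score.get(tuple(sorted((d_a, d_b))), 0.0)
                      for d_a, d_b in product(protein_domains[p_a], protein_domains[p_b])]
            return max(scores, default=0.0)

        THRESHOLD = 0.7   # illustrative cutoff for predicting an interaction
        for pair in (("P1", "P2"), ("P1", "P3"), ("P2", "P3")):
            s = best_domain_score(*pair)
            print(pair, round(s, 2), "interaction" if s >= THRESHOLD else "no call")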

  11. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools to evaluate the reliability of systems. Although the single-failure-mode case can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainties of risk variables and parameters are characterized by fuzzy numbers to obtain the fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPNs) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error in the reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority; this generalizes the definitions of probability weight and FRPN and yields a more accurate estimation than traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides important insights into fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
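
    The fuzzy numbers and Copula-based dependence modeling of the FPWGM method are not reproduced here. The sketch below only illustrates the two arithmetic building blocks in crisp (non-fuzzy) form: a weighted geometric mean RPN for a single failure mode and a probability-importance reweighting across failure modes. The ratings, failure probabilities, and importance weights are hypothetical.

```python
import numpy as np

def weighted_geometric_mean_rpn(factors, weights):
    """Weighted geometric mean RPN for a single failure mode.

    factors : risk ratings, e.g. (severity, occurrence, detection)
    weights : nonnegative weights summing to 1
    """
    factors = np.asarray(factors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.prod(factors ** weights))

# Hypothetical ratings (1-10 scales) and failure probabilities for three blade failure modes.
modes = {
    "fatigue crack":  ((8, 4, 5), 0.020),
    "creep":          ((7, 3, 6), 0.010),
    "foreign object": ((9, 2, 4), 0.005),
}
w_sod = (0.4, 0.35, 0.25)                       # assumed importance weights for S, O, D

probs = np.array([p for _, p in modes.values()])
prob_weights = probs / probs.sum()              # probability importance weights

for (name, (ratings, _)), pw in zip(modes.items(), prob_weights):
    rpn = weighted_geometric_mean_rpn(ratings, w_sod)
    print(f"{name:15s} WGM RPN = {rpn:5.2f}  probability-weighted = {pw * rpn:5.2f}")
```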

  12. Longitudinal changes in gestational weight gain and the association with intrauterine fetal growth.

    PubMed

    Hinkle, Stefanie N; Johns, Alicia M; Albert, Paul S; Kim, Sungduk; Grantz, Katherine L

    2015-07-01

    Total pregnancy weight gain has been associated with infant birthweight; however, most prior studies lacked repeat ultrasound measurements. Understanding of the longitudinal changes in maternal weight gain and intrauterine changes in fetal anthropometrics is limited. Prospective data from 1314 Scandinavian singleton pregnancies at high risk of delivering small-for-gestational-age (SGA) were analyzed. Women had ≥1 (median 12) antenatal weight measurements. Ultrasounds were targeted at 17, 25, 33, and 37 weeks of gestation. Analyses involved a multi-step process. First, trajectories were estimated across gestation for maternal weight gain and fetal biometrics [abdominal circumference (AC, mm), biparietal diameter (BPD, mm), femur length (FL, mm), and estimated fetal weight (EFW, g)] using linear mixed models. Second, the association between maternal weight changes (per 5 kg) and corresponding fetal growth from 0 to 17, 17 to 28, and 28 to 37 weeks was estimated for each fetal parameter adjusting for prepregnancy body mass index, height, parity, chronic diseases, age, smoking, fetal sex, and weight gain up to the respective period as applicable. Third, the probability of fetal SGA, EFW <10th percentile, at the 3rd ultrasound was estimated across the spectrum of maternal weight gain rate by SGA status at the 2nd ultrasound. From 0 to 17 weeks, changes in maternal weight were most strongly associated with changes in BPD [β=0.51 per 5 kg (95%CI 0.26, 0.76)] and FL [β=0.46 per 5 kg (95%CI 0.26, 0.65)]. From 17 to 28 weeks, AC [β=2.92 per 5 kg (95%CI 1.62, 4.22)] and EFW [β=58.7 per 5 kg (95%CI 29.5, 88.0)] were more strongly associated with changes in maternal weight. Increased maternal weight gain was significantly associated with a reduced probability of intrauterine SGA; for a normal weight woman with SGA at the 2nd ultrasound, the probability of fetal SGA with a weight gain rate of 0.29 kg/w (10th percentile) was 59%, compared to 38% with a rate of 0.67 kg/w (90th percentile). Among women at high risk of SGA, maternal weight gain was associated with fetal growth throughout pregnancy, but had a differential relationship with specific biometrics across gestation. For women with fetal SGA identified mid-pregnancy, increased antenatal weight gain was associated with a decreased probability of fetal SGA approximately 7 weeks later. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Utility of inverse probability weighting in molecular pathological epidemiology.

    PubMed

    Liu, Li; Nevo, Daniel; Nishihara, Reiko; Cao, Yin; Song, Mingyang; Twombly, Tyler S; Chan, Andrew T; Giovannucci, Edward L; VanderWeele, Tyler J; Wang, Molin; Ogino, Shuji

    2018-04-01

    As one of the causal inference methodologies, the inverse probability weighting (IPW) method has been utilized to address confounding and to account for missing data when subjects with missing data cannot be included in a primary analysis. The transdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathological and epidemiological methods, and takes advantage of improved understanding of pathogenesis to generate stronger biological evidence of causality and optimize strategies for precision medicine and prevention. Disease subtyping based on biomarker analysis of biospecimens is essential in MPE research. However, there are nearly always cases that lack subtype information due to the unavailability or insufficiency of biospecimens. To address this missing subtype data issue, we incorporated inverse probability weights into Cox proportional cause-specific hazards regression. The weight was the inverse of the probability of biomarker data availability, estimated from a model for biomarker data availability status. The strategy was illustrated in two example studies; each assessed alcohol intake or family history of colorectal cancer in relation to the risk of developing colorectal carcinoma subtypes classified by tumor microsatellite instability (MSI) status, using a prospective cohort study, the Nurses' Health Study. Logistic regression was used to estimate the probability of MSI data availability for each cancer case with covariates of clinical features and family history of colorectal cancer. This application of IPW can reduce selection bias caused by nonrandom variation in biospecimen data availability. The integration of causal inference methods into the MPE approach has substantial potential to advance the field of epidemiology.
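
    A schematic sketch of the weighting step follows, on a hypothetical simulated cohort rather than the Nurses' Health Study data. It assumes scikit-learn and lifelines are available and that CoxPHFitter.fit accepts weights_col and robust arguments; the availability model, variable names, and the simplification of leaving non-case weights at 1 are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter   # assumed available

rng = np.random.default_rng(1)
n = 2000

# Hypothetical cohort: exposure, a clinical covariate, follow-up time, and cancer indicator.
df = pd.DataFrame({
    "alcohol": rng.binomial(1, 0.3, n),
    "age":     rng.normal(60, 8, n),
    "time":    rng.exponential(10, n),
    "cancer":  rng.binomial(1, 0.15, n),
})
# Biomarker (MSI) status is measured only on a nonrandom subset of the cases.
avail_logit = -1.0 + 0.03 * (df["age"] - 60) + 0.5 * df["alcohol"]
df["msi_available"] = np.where(df["cancer"] == 1,
                               rng.binomial(1, 1 / (1 + np.exp(-avail_logit))), 0)

# Step 1: logistic model for the probability of biomarker availability among cases.
cases = df[df["cancer"] == 1]
avail_model = LogisticRegression().fit(cases[["age", "alcohol"]], cases["msi_available"])

# Step 2: inverse probability weights (cases with available data get 1/p; others keep 1
# in this simplified sketch).
df["ipw"] = 1.0
idx = df.index[(df["cancer"] == 1) & (df["msi_available"] == 1)]
p_avail = avail_model.predict_proba(df.loc[idx, ["age", "alcohol"]])[:, 1]
df.loc[idx, "ipw"] = 1.0 / p_avail

# Step 3: weighted Cox model restricted to subjects usable in the subtype analysis.
analysis = df[(df["cancer"] == 0) | (df["msi_available"] == 1)].copy()
cph = CoxPHFitter()
cph.fit(analysis[["time", "cancer", "alcohol", "age", "ipw"]],
        duration_col="time", event_col="cancer", weights_col="ipw", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```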

  14. Modeling the Impact of Control on the Attractiveness of Risk in a Prospect Theory Framework

    PubMed Central

    Young, Diana L.; Goodie, Adam S.; Hall, Daniel B.

    2010-01-01

    Many decisions involve a degree of personal control over event outcomes, which is exerted through one’s knowledge or skill. In three experiments we investigated differences in decision making between prospects based on a) the outcome of random events and b) the outcome of events characterized by control. In Experiment 1, participants estimated certainty equivalents (CEs) for bets based on either random events or the correctness of their answers to U.S. state population questions across the probability spectrum. In Experiment 2, participants estimated CEs for bets based on random events, answers to U.S. state population questions, or answers to questions about 2007 NCAA football game results. Experiment 3 extended the same procedure as Experiment 1 using a within-subjects design. We modeled data from all experiments in a prospect theory framework to establish psychological mechanisms underlying decision behavior. Participants weighted the probabilities associated with bets characterized by control so as to reflect greater risk attractiveness relative to bets based on random events, as evidenced by more elevated weighting functions under conditions of control. This research elucidates possible cognitive mechanisms behind increased risk taking for decisions characterized by control, and implications for various literatures are discussed. PMID:21278906

  15. Modeling the Impact of Control on the Attractiveness of Risk in a Prospect Theory Framework.

    PubMed

    Young, Diana L; Goodie, Adam S; Hall, Daniel B

    2011-01-01

    Many decisions involve a degree of personal control over event outcomes, which is exerted through one's knowledge or skill. In three experiments we investigated differences in decision making between prospects based on a) the outcome of random events and b) the outcome of events characterized by control. In Experiment 1, participants estimated certainty equivalents (CEs) for bets based on either random events or the correctness of their answers to U.S. state population questions across the probability spectrum. In Experiment 2, participants estimated CEs for bets based on random events, answers to U.S. state population questions, or answers to questions about 2007 NCAA football game results. Experiment 3 extended the same procedure as Experiment 1 using a within-subjects design. We modeled data from all experiments in a prospect theory framework to establish psychological mechanisms underlying decision behavior. Participants weighted the probabilities associated with bets characterized by control so as to reflect greater risk attractiveness relative to bets based on random events, as evidenced by more elevated weighting functions under conditions of control. This research elucidates possible cognitive mechanisms behind increased risk taking for decisions characterized by control, and implications for various literatures are discussed.
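
    The weighting-function elevation discussed in the two records above can be estimated from certainty-equivalent data. The sketch below is a generic illustration, not the authors' modeling procedure: it fits a two-parameter linear-in-log-odds weighting function (elevation delta, curvature gamma) to hypothetical certainty equivalents for simple gambles, with the value-function exponent fixed for simplicity. All data values are fabricated for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lil_odds_weight(p, delta, gamma):
    """Linear-in-log-odds weighting: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)."""
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

def predicted_ce(p, delta, gamma, alpha=0.8, outcome=100.0):
    """Certainty equivalent implied by w(p) and a power value function v(x) = x^alpha
    for a gamble paying `outcome` with probability p (0 otherwise)."""
    return (lil_odds_weight(p, delta, gamma) * outcome ** alpha) ** (1.0 / alpha)

# Hypothetical mean certainty equivalents in two conditions (random events vs. control).
p = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
ce_random  = np.array([12.0, 28.0, 45.0, 62.0, 85.0])
ce_control = np.array([18.0, 36.0, 55.0, 72.0, 90.0])

for label, ce in [("random", ce_random), ("control", ce_control)]:
    (delta, gamma), _ = curve_fit(predicted_ce, p, ce, p0=(1.0, 0.7),
                                  bounds=([0.01, 0.01], [10.0, 3.0]))
    print(f"{label:8s} elevation delta = {delta:.2f}, curvature gamma = {gamma:.2f}")
```

    A more elevated weighting function for the control condition would show up as a larger fitted delta, mirroring the pattern reported in the abstracts above.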

  16. Cortisol shifts financial risk preferences

    PubMed Central

    Kandasamy, Narayanan; Hardy, Ben; Page, Lionel; Schaffner, Markus; Graggaber, Johann; Powlson, Andrew S.; Fletcher, Paul C.; Gurnell, Mark; Coates, John

    2014-01-01

    Risk taking is central to human activity. Consequently, it lies at the focal point of behavioral sciences such as neuroscience, economics, and finance. Many influential models from these sciences assume that financial risk preferences form a stable trait. Is this assumption justified and, if not, what causes the appetite for risk to fluctuate? We have previously found that traders experience a sustained increase in the stress hormone cortisol when the amount of uncertainty, in the form of market volatility, increases. Here we ask whether these elevated cortisol levels shift risk preferences. Using a double-blind, placebo-controlled, cross-over protocol we raised cortisol levels in volunteers over 8 d to the same extent previously observed in traders. We then tested for the utility and probability weighting functions underlying their risk taking and found that participants became more risk-averse. We also observed that the weighting of probabilities became more distorted among men relative to women. These results suggest that risk preferences are highly dynamic. Specifically, the stress response calibrates risk taking to our circumstances, reducing it in times of prolonged uncertainty, such as a financial crisis. Physiology-induced shifts in risk preferences may thus be an underappreciated cause of market instability. PMID:24550472

  17. Cortisol shifts financial risk preferences.

    PubMed

    Kandasamy, Narayanan; Hardy, Ben; Page, Lionel; Schaffner, Markus; Graggaber, Johann; Powlson, Andrew S; Fletcher, Paul C; Gurnell, Mark; Coates, John

    2014-03-04

    Risk taking is central to human activity. Consequently, it lies at the focal point of behavioral sciences such as neuroscience, economics, and finance. Many influential models from these sciences assume that financial risk preferences form a stable trait. Is this assumption justified and, if not, what causes the appetite for risk to fluctuate? We have previously found that traders experience a sustained increase in the stress hormone cortisol when the amount of uncertainty, in the form of market volatility, increases. Here we ask whether these elevated cortisol levels shift risk preferences. Using a double-blind, placebo-controlled, cross-over protocol we raised cortisol levels in volunteers over 8 d to the same extent previously observed in traders. We then tested for the utility and probability weighting functions underlying their risk taking and found that participants became more risk-averse. We also observed that the weighting of probabilities became more distorted among men relative to women. These results suggest that risk preferences are highly dynamic. Specifically, the stress response calibrates risk taking to our circumstances, reducing it in times of prolonged uncertainty, such as a financial crisis. Physiology-induced shifts in risk preferences may thus be an underappreciated cause of market instability.

  18. Modeling pore corrosion in normally open gold-plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H₂S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  19. Producing chicken eggs containing isoflavone as functional food due to feeding effect of soy sauce by-product

    NASA Astrophysics Data System (ADS)

    Mahfudz, L. D.; Sarjana, T. A.; Muryani, R.; Suthama, N.

    2018-01-01

    The present study aimed to verify the impact of feeding soy sauce by-product in producing isoflavone-enriched chicken eggs as functional food. The experiment used 200 laying hens, 80 weeks old, with an average body weight of 1,932.75±189.50 g. Experimental diets were formulated using yellow corn, rice bran, soybean meal, fish meal, meat bone meal, poultry meal, premix, CaCO3, and soy sauce by-product (SSBP). A completely randomized design with 4 treatments and 5 replications (10 birds each) was used in this experiment. Inclusion levels of SSBP were the treatments, namely, none (T0), 10 (T1), 12.5 (T2), and 15.0% (T3). Parameters observed were egg yolk colour index, egg yolk weight, and isoflavone content. Analysis of variance was applied, followed by Duncan's test at the 5% probability level. Results indicated that yolk colour index and weight were not affected by the treatments, but isoflavone content was significantly (P<0.05) increased by feeding SSBP. Egg yolk isoflavone in T2 (0.41 mg/g) and T3 (0.47 mg/g) was higher than that in T0 (0.31 mg/g) and T1 (0.35 mg/g). In conclusion, dietary inclusion of soy sauce by-product at the higher levels (12.5 and 15.0%) can produce isoflavone-enriched eggs as functional food.

  20. Responsiveness-informed multiple imputation and inverse probability-weighting in cohort studies with missing data that are non-monotone or not missing at random.

    PubMed

    Doidge, James C

    2018-02-01

    Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.

  1. Comparison of dynamic treatment regimes via inverse probability weighting.

    PubMed

    Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M

    2006-03-01

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
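
    The four-step strategy lends itself to a compact sketch. The code below illustrates only steps 2 and 3 (artificial censoring and inverse probability-of-censoring weights) on hypothetical simulated person-month data; the regime definition, variable names, and pooled logistic censoring model are illustrative stand-ins, and the final weighted Cox fit (step 4) is only indicated in a comment.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical person-period data: one row per subject per month, with a time-varying
# covariate (CD4 count) and an indicator of having started treatment.
rng = np.random.default_rng(2)
rows = []
for pid in range(500):
    cd4, on_tx = rng.normal(500, 80), 0
    for month in range(24):
        cd4 += rng.normal(-15, 20)
        if on_tx == 0 and rng.random() < 0.10:
            on_tx = 1
        rows.append((pid, month, cd4, on_tx))
df = pd.DataFrame(rows, columns=["id", "month", "cd4", "on_tx"])

# Steps 1-2: regime of interest "start therapy once CD4 < 350"; artificially censor a
# subject the first month their observed behavior deviates from that regime.
df["should_be_on_tx"] = (df["cd4"] < 350).astype(int)
df["deviates"] = (df["on_tx"] != df["should_be_on_tx"]).astype(int)
df["censored_before"] = df.groupby("id")["deviates"].transform(
    lambda s: s.cummax().shift(fill_value=0))
at_risk = df[df["censored_before"] == 0].copy()

# Step 3: pooled logistic model for the probability of remaining uncensored each month;
# the cumulative product of these probabilities gives the inverse probability weight.
model = LogisticRegression().fit(at_risk[["month", "cd4"]], 1 - at_risk["deviates"])
uncens = at_risk[at_risk["deviates"] == 0].copy()
uncens["p_uncens"] = model.predict_proba(uncens[["month", "cd4"]])[:, 1]
uncens["ipcw"] = 1.0 / uncens.groupby("id")["p_uncens"].cumprod()

# Step 4 (not shown): fit an IP-weighted Cox model with the regime indicator and baseline
# confounders as covariates, using `ipcw` as the weight column.
print(uncens["ipcw"].describe())
```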

  2. An autoregressive model-based particle filtering algorithms for extraction of respiratory rates as high as 90 breaths per minute from pulse oximeter.

    PubMed

    Lee, Jinseok; Chon, Ki H

    2010-09-01

    We present particle filtering (PF) algorithms for an accurate respiratory rate extraction from pulse oximeter recordings over a broad range: 12-90 breaths/min. These methods are based on an autoregressive (AR) model, where the aim is to find the pole angle with the highest magnitude as it corresponds to the respiratory rate. However, when SNR is low, the pole angle with the highest magnitude may not always lead to accurate estimation of the respiratory rate. To circumvent this limitation, we propose a probabilistic approach, using a sequential Monte Carlo method, named PF, which is combined with the optimal parameter search (OPS) criterion for an accurate AR model-based respiratory rate extraction. The PF technique has been widely adopted in many tracking applications, especially for nonlinear and/or non-Gaussian problems. We examine the performances of five different likelihood functions of the PF algorithm: the strongest neighbor, nearest neighbor (NN), weighted nearest neighbor (WNN), probability data association (PDA), and weighted probability data association (WPDA). The performance of these five combined OPS-PF algorithms was measured against a solely OPS-based AR algorithm for respiratory rate extraction from pulse oximeter recordings. The pulse oximeter data were collected from 33 healthy subjects with breathing rates ranging from 12 to 90 breaths/min. It was found that significant improvement in accuracy can be achieved by employing particle filters, and that the combined OPS-PF employing either the NN or WNN likelihood function achieved the best results for all respiratory rates considered in this paper. The main advantage of the combined OPS-PF with either the NN or WNN likelihood function is that for the first time, respiratory rates as high as 90 breaths/min can be accurately extracted from pulse oximeter recordings.
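
    The core AR idea, reading the respiratory rate from the angle of the largest-magnitude pole, can be shown in a few lines. The sketch below uses a synthetic sinusoidal surrogate for the respiration-modulated signal and a Yule-Walker AR fit from statsmodels (assumed available); the OPS criterion and the particle filter from the paper are not reproduced, and the sampling rate and model order are illustrative.

```python
import numpy as np
from statsmodels.regression.linear_model import yule_walker   # assumed available

fs = 4.0                       # Hz, a typical rate for a derived respiratory waveform
t = np.arange(0, 60, 1 / fs)   # one minute of data
true_rate = 72                 # breaths/min, well above the usual 12-30 range
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * (true_rate / 60) * t) + 0.5 * rng.standard_normal(t.size)

# Fit an AR model and find the poles of its characteristic polynomial.
order = 8
rho, _ = yule_walker(signal - signal.mean(), order=order, method="mle")
poles = np.roots(np.concatenate(([1.0], -rho)))

# Keep positive-frequency poles and pick the one with the largest magnitude: its angle
# tracks the dominant oscillation, here the respiratory component.
poles = poles[np.imag(poles) > 0]
dominant = poles[np.argmax(np.abs(poles))]
rate_bpm = np.angle(dominant) / (2 * np.pi) * fs * 60
print(f"estimated respiratory rate: {rate_bpm:.1f} breaths/min (true {true_rate})")
```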

  3. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.

  4. A nonparametric multiple imputation approach for missing categorical data.

    PubMed

    Zhou, Muhan; He, Yulei; Yu, Mandi; Hsu, Chiu-Hsieh

    2017-06-06

    Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value with other non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with more than two levels for assessing the distribution of the outcome. In terms of the choices for the working models, we suggest a multinomial logistic regression for predicting the missing outcome and a binary logistic regression for predicting the missingness probability.
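
    A toy version of the imputation scheme is sketched below. It follows the abstract's outline (two working models, a weighted predictive score, nearest-neighbour donor sets, multiple imputation) but the exact score construction, distance, and weighting in the paper may differ; the simulated data, the equal weight w = 0.5, and the donor-set size k are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([x1, x2])

# Hypothetical 3-category outcome and a missing-at-random mechanism driven by x1.
latent = np.column_stack([0.5 * x1, -0.5 * x1 + 0.3 * x2, np.zeros(n)])
y = np.array([rng.choice(3, p=np.exp(r) / np.exp(r).sum()) for r in latent])
missing = rng.random(n) < 1 / (1 + np.exp(-(-0.5 + 1.0 * x1)))
obs = ~missing

# Working model 1: multinomial logistic regression for the outcome (fit on observed rows).
outcome_model = LogisticRegression(max_iter=1000).fit(X[obs], y[obs])
p_outcome = outcome_model.predict_proba(X)

# Working model 2: logistic regression for the probability of being missing.
miss_model = LogisticRegression().fit(X, missing.astype(int))
p_missing = miss_model.predict_proba(X)[:, 1]

# Predictive score: weighted combination of the two working-model predictions.
w = 0.5
score = np.column_stack([w * p_outcome, (1 - w) * p_missing[:, None]])

def impute_once(k=10):
    y_imp = y.copy()
    for i in np.where(missing)[0]:
        d = np.sum((score[obs] - score[i]) ** 2, axis=1)   # distances to non-missing units
        donors = np.where(obs)[0][np.argsort(d)[:k]]       # k nearest donors
        y_imp[i] = y[rng.choice(donors)]                    # draw one donor at random
    return y_imp

# Multiple imputation: average the category proportions over M completed data sets.
M = 10
props = np.mean([np.bincount(impute_once(), minlength=3) / n for _ in range(M)], axis=0)
print("estimated category proportions:", np.round(props, 3))
```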

  5. A testable model of earthquake probability based on changes in mean event size

    NASA Astrophysics Data System (ADS)

    Imoto, Masajiro

    2003-02-01

    We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that the mean event size tends to increase as the critical point is approached. A parameter describing these changes was defined using a simple weighted-average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The distribution of the background becomes a symmetric function, the center of which corresponds to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided us with a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in the range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.

  6. Predicting Manual Therapy Treatment Success in Patients With Chronic Ankle Instability: Improving Self-Reported Function.

    PubMed

    Wikstrom, Erik A; McKeon, Patrick O

    2017-04-01

    Therapeutic modalities that stimulate sensory receptors around the foot-ankle complex improve chronic ankle instability (CAI)-associated impairments. However, not all patients have equal responses to these modalities. Identifying predictors of treatment success could improve clinician efficiency when treating patients with CAI.   To conduct a response analysis on existing data to identify predictors of improved self-reported function in patients with CAI.   Secondary analysis of a randomized controlled clinical trial.   Sports medicine research laboratories.   Fifty-nine patients with CAI, which was defined in accordance with the International Ankle Consortium recommendations.   Participants were randomized into 3 treatment groups (plantar massage [PM], ankle-joint mobilization [AJM], or calf stretching [CS]) that received six 5-minute treatments over 2 weeks.   Treatment success, defined as a patient exceeding the minimally clinically important difference of the Foot and Ankle Ability Measure-Sport (FAAM-S).   Patients with ≤5 recurrent sprains and ≤82.73% on the Foot and Ankle Ability Measure had a 98% probability of having a meaningful FAAM-S improvement after AJM. Similarly, patients who made ≥5 balance errors had a 98% probability of meaningful FAAM-S improvement from AJM. Patients <22 years old and with ≤9.9 cm of dorsiflexion had a 99% probability of a meaningful FAAM-S improvement after PM. Also, those who made ≥2 single-limb-stance errors had a 98% probability of a meaningful FAAM-S improvement from PM. Patients with ≤53.1% on the FAAM-S had an 83% probability of a meaningful FAAM-S improvement after CS.   Each sensory-targeted ankle-rehabilitation strategy resulted in a unique combination of predictors of success for patients with CAI. Specific indicators of success with AJM were deficits in self-reported function, single-limb balance, and <5 previous sprains. Age, weight-bearing-dorsiflexion restrictions, and single-limb balance deficits identified patients with CAI who will respond well to PM. Assessing self-reported sport-related function can identify CAI patients who will respond positively to CS.

  7. Feedback produces divergence from prospect theory in descriptive choice.

    PubMed

    Jessup, Ryan K; Bishara, Anthony J; Busemeyer, Jerome R

    2008-10-01

    A recent study demonstrated that individuals making experience-based choices underweight small probabilities, in contrast to the overweighting observed in a typical descriptive paradigm. We tested whether trial-by-trial feedback in a repeated descriptive paradigm would engender choices more correspondent with experiential or descriptive paradigms. The results of a repeated gambling task indicated that individuals receiving feedback underweighted small probabilities, relative to their no-feedback counterparts. These results implicate feedback as a critical component during the decision-making process, even in the presence of fully specified descriptive information. A model comparison at the individual-subject level suggested that feedback drove individuals' decision weights toward objective probability weighting.

  8. Tsunami probability in the Caribbean Region

    USGS Publications Warehouse

    Parsons, T.; Geist, E.L.

    2008-01-01

    We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically-calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerical-modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhäuser 2008.
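
    Under the Poissonian model, converting a cell's mean runup rate into an exposure-window probability is a one-line calculation; the sketch below shows that conversion for a 30-year window. The rates are purely illustrative, not values from the study.

```python
import numpy as np

def runup_probability(annual_rate, exposure_years=30.0):
    """Poissonian probability of at least one runup >= 0.5 m within the exposure window."""
    return 1.0 - np.exp(-annual_rate * exposure_years)

# Hypothetical mean runup rates (events/year) for a few 20 km x 20 km coastal cells.
for rate in (1e-4, 1e-3, 1e-2):
    print(f"rate {rate:.0e}/yr -> 30-yr probability {runup_probability(rate):.1%}")
```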

  9. Effects of environmental changes on natural selection active on human polygenic traits.

    PubMed

    Ulizzi, L

    1993-06-01

    During the last century, industrialized countries experienced such an improvement in socioeconomic conditions and in sanitation that it is likely that the selective forces active on human metric traits have been modified. Perinatal mortality as a function of birth weight is one of the clearest examples of natural selection in humans. Here, trends over time of stabilizing and directional selection associated with birth weight have been analyzed in Japan from 1969 to 1989. The population of newborns has been subdivided according to gestational age, which is one of the main covariates of birth weight. The results show that in full-term babies both stabilizing and directional selection are coming to an end, whereas in babies born after 8 months of gestation these selective forces are still active, even if at much lower levels than in the past. The peculiar results found in the 7-month-gestation population are probably due to grossly abnormal cases of immaturity.

  10. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, and the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate the sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
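
    The reweighting idea can be illustrated with a one-parameter toy: a single generic injection set over primary mass is reused for several population models by weighting each injection with the ratio of the population density to the injection density. This is a minimal self-normalized importance-sampling sketch with fabricated detection behavior, not the paper's pipeline or cosmological weighting.

```python
import numpy as np

rng = np.random.default_rng(5)

# One generic injection set: primary mass drawn from a broad injection distribution,
# plus a found/missed flag standing in for the search pipeline's decision.
n_inj = 100_000
m1 = rng.uniform(5, 80, n_inj)                              # injection distribution: uniform on [5, 80]
p_inj = np.full(n_inj, 1.0 / 75.0)                          # its probability density
found = rng.random(n_inj) < np.clip((m1 - 5) / 75, 0, 1)    # heavier systems easier to detect (toy)
v_total = 50.0                                              # volume (e.g. Gpc^3) in which injections were placed

def sensitive_volume(pop_density):
    """Self-normalized importance-sampling estimate of the population-averaged sensitive volume."""
    w = pop_density(m1) / p_inj                             # weight = population density / injection density
    return v_total * np.sum(w * found) / np.sum(w)

# Two candidate population models for the primary mass, normalized on [5, 80].
grid = np.linspace(5, 80, 1000)
norm_pl = np.trapz(grid ** -2.3, grid)
power_law = lambda m: m ** -2.3 / norm_pl
flat_in_log = lambda m: 1.0 / (m * np.log(80.0 / 5.0))

for name, pop in [("power law", power_law), ("flat in log", flat_in_log)]:
    print(f"{name:12s}: V_sens ~ {sensitive_volume(pop):.1f} (same units as v_total)")
```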

  11. Academic achievement of twins and singletons in early adulthood: Taiwanese cohort study.

    PubMed

    Tsou, Meng-Ting; Tsou, Meng-Wen; Wu, Ming-Ping; Liu, Jin-Tan

    2008-07-21

    To examine the long term effects of low birth weight on academic achievements in twins and singletons and to determine whether the academic achievement of twins in early adulthood is inferior to that of singletons. Cohort study. Taiwanese nationwide register of academic outcome. A cohort of 218 972 singletons and 1687 twins born in Taiwan, 1983-5. College attendance and test scores in the college joint entrance examinations. After adjustment for birth weight, gestational age, birth order, and sex and the sociodemographic characteristics of the parents, twins were found to have significantly lower mean test scores than singletons in Chinese, mathematics, and natural science, as well as a 2.2% lower probability of attending college. Low birthweight twins had an 8.5% lower probability of college attendance than normal weight twins, while low birthweight singletons had only a 3.2% lower probability. The negative effects of low birth weight on the test scores in English and mathematics were substantially greater for twins than for singletons. The twin pair analysis showed that the association between birth weight and academic achievement scores, which existed for opposite sex twin pairs, was not discernible for same sex twin pairs, indicating that birth weight might partly reflect other underlying genetic variations. These data support the proposition that twins perform less well academically than singletons. Low birth weight has a negative association with subsequent academic achievement in early adulthood, with the effect being stronger for twins than for singletons. The association between birth weight and academic performance might be partly attributable to genetic factors.

  12. Levofloxacin dosing regimen in severely morbidly obese patients (BMI ≥40 kg/m²) should be guided by creatinine clearance estimates based on ideal body weight and optimized by therapeutic drug monitoring.

    PubMed

    Pai, Manjunath P; Cojutti, Piergiorgio; Pea, Federico

    2014-08-01

    Levofloxacin is a commonly prescribed antimicrobial where recommendations exist to reduce doses for renal impairment but not to increase doses for augmented renal function. Morbidly obese patients are increasing in prevalence, and represent a population that can have augmented renal function requiring higher-than-standard doses. The current investigation was performed to characterize the pharmacokinetics (PK) and evaluate the influence of alternate body size descriptors and renal function as predictors of levofloxacin clearance (CL) and the area under the curve over 24 h (AUC24). A database of patients undergoing levofloxacin therapeutic drug monitoring (TDM) was queried to identify patients ≥18 years of age with a body mass index ≥40 kg/m². A maximum a posteriori probability Bayesian approach using a two-compartment linear PK model was used to estimate individual PK parameters and AUC24. A total of 394 concentration-time data points (peaks and troughs) from 68 patients between 98 and 250 kg were evaluated. The median (5th, 95th percentile) daily dose and AUC24 were 1,000 (250, 1,500) mg and 90.7 (44.4, 228) mg·h/L, respectively. Levofloxacin CL was significantly (p < 0.05) related to height but not weight. As a result, levofloxacin CL was best related (R² = 0.57) to creatinine CL (CLcr) estimated by the Cockcroft-Gault (CG) equation and ideal body weight (IBW), because IBW is a height transformation. An empiric four-category daily-dose regimen (500, 750, 1,000, 1,250 mg) stratified by CLcr (CG-IBW) is expected to have >90% probability of achieving an AUC24 of 50-150 mg·h/L in morbidly obese patients. Subsequent application of TDM and integration with pathogen-specific information could then be applied to tailor the levofloxacin regimen. The proposed approach serves as a relevant alternative to the current fixed-dosing paradigm of levofloxacin in the morbidly obese.

  13. Detection and classification of interstitial lung diseases and emphysema using a joint morphological-fuzzy approach

    NASA Astrophysics Data System (ADS)

    Chang Chien, Kuang-Che; Fetita, Catalin; Brillet, Pierre-Yves; Prêteux, Françoise; Chang, Ruey-Feng

    2009-02-01

    Multi-detector computed tomography (MDCT) has high accuracy and specificity in volumetrically capturing serial images of the lung. It increases the capability of computerized classification of lung tissue in medical research. This paper proposes a three-dimensional (3D) automated approach based on mathematical morphology and fuzzy logic for quantifying and classifying interstitial lung diseases (ILDs) and emphysema. The proposed methodology is composed of several stages: (1) an image multi-resolution decomposition scheme based on a 3D morphological filter is used to detect and analyze the different density patterns of the lung texture. Then, (2) for each pattern in the multi-resolution decomposition, six features are computed, for which fuzzy membership functions define a probability of association with a pathology class. Finally, (3) for each pathology class, the probabilities are combined according to the weight assigned to each membership function, and two threshold values are used to decide the final class of the pattern. The proposed approach was tested on 10 MDCT cases and the classification accuracy was: emphysema: 95%, fibrosis/honeycombing: 84%, and ground glass: 97%.

  14. Birds and insects as radar targets - A review

    NASA Technical Reports Server (NTRS)

    Vaughn, C. R.

    1985-01-01

    A review of radar cross-section measurements of birds and insects is presented. A brief discussion of some possible theoretical models is also given and comparisons made with the measurements. The comparisons suggest that most targets are, at present, better modeled by a prolate spheroid having a length-to-width ratio between 3 and 10 than by the often used equivalent weight water sphere. In addition, many targets observed with linear horizontal polarization have maximum cross sections much better estimated by a resonant half-wave dipole than by a water sphere. Also considered are birds and insects in the aggregate as a local radar 'clutter' source. Order-of-magnitude estimates are given for many reasonable target number densities. These estimates are then used to predict X-band volume reflectivities. Other topics that are of interest to the radar engineer are discussed, including the doppler bandwidth due to the internal motions of a single bird, the radar cross-section probability densities of single birds and insects, the variability of the functional form of the probability density functions, and the Fourier spectra of single birds and insects.

  15. Density functional study for crystalline structures and electronic properties of Si1-xSnx binary alloys

    NASA Astrophysics Data System (ADS)

    Nagae, Yuki; Kurosawa, Masashi; Shibayama, Shigehisa; Araidai, Masaaki; Sakashita, Mitsuo; Nakatsuka, Osamu; Shiraishi, Kenji; Zaima, Shigeaki

    2016-08-01

    We have carried out density functional theory (DFT) calculation for Si1-xSnx alloy and investigated the effect of the displacement of Si and Sn atoms with strain relaxation on the lattice constant and E-k dispersion. We calculated the formation probabilities for all atomic configurations of Si1-xSnx according to the Boltzmann distribution. The average lattice constant and E-k dispersion were weighted by the formation probability of each configuration of Si1-xSnx. We estimated the displacement of Si and Sn atoms from the initial tetrahedral site in the Si1-xSnx unit cell considering structural relaxation under hydrostatic pressure, and we found that the breaking of the degenerated electronic levels of the valence band edge could be caused by the breaking of the tetrahedral symmetry. We also calculated the E-k dispersion of the Si1-xSnx alloy by the DFT+U method and found that a Sn content above 50% would be required for the indirect-direct transition.
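
    The Boltzmann-weighted configuration averaging described above reduces to a short calculation once formation energies are known. The sketch below averages a hypothetical lattice constant over a handful of fabricated configurations; the energies, lattice constants, and temperature are illustrative, not values from the study.

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant, eV/K

def boltzmann_average(energies_eV, quantities, temperature_K=300.0):
    """Average a configuration-dependent quantity using Boltzmann formation probabilities."""
    energies = np.asarray(energies_eV, dtype=float)
    w = np.exp(-(energies - energies.min()) / (kB * temperature_K))
    p = w / w.sum()
    return float(np.sum(p * np.asarray(quantities, dtype=float))), p

# Hypothetical formation energies (eV) and relaxed lattice constants (Angstrom)
# for a few Si/Sn configurations at a fixed composition.
E = [0.00, 0.03, 0.05, 0.12]
a = [5.52, 5.54, 5.55, 5.58]
a_avg, probs = boltzmann_average(E, a, temperature_K=600)
print("formation probabilities:", np.round(probs, 3))
print(f"Boltzmann-weighted lattice constant: {a_avg:.3f} Angstrom")
```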

  16. Catlas: A magnetic resonance imaging-based three-dimensional cortical atlas and tissue probability maps for the domestic cat (Felis catus).

    PubMed

    Stolzberg, Daniel; Wong, Carmen; Butler, Blake E; Lomber, Stephen G

    2017-10-15

    Brain atlases play an important role in effectively communicating results from neuroimaging studies in a standardized coordinate system. Furthermore, brain atlases extend analysis of functional magnetic resonance imaging (MRI) data by delineating regions of interest over which to evaluate the extent of functional activation as well as measures of inter-regional connectivity. Here, we introduce a three-dimensional atlas of the cat cerebral cortex based on established cytoarchitectonic and electrophysiological findings. In total, 71 cerebral areas were mapped onto the gray matter (GM) of an averaged T1-weighted structural MRI acquired at 7 T from eight adult domestic cats. In addition, a nonlinear registration procedure was used to generate a common template brain as well as GM, white matter, and cerebral spinal fluid tissue probability maps to facilitate tissue segmentation as part of the standard preprocessing pipeline for MRI data analysis. The atlas and associated files can also be used for planning stereotaxic surgery and for didactic purposes. © 2017 Wiley Periodicals, Inc.

  17. Reducing Capacities and Distribution of Redox-Active Functional Groups in Low Molecular Weight Fractions of Humic Acids.

    PubMed

    Yang, Zhen; Kappler, Andreas; Jiang, Jie

    2016-11-15

    Humic substances (HS) are redox-active organic compounds with a broad spectrum of molecular sizes and reducing capacities, that is, the number of electrons donated or accepted. However, it is unknown what role the distribution of redox-active functional groups across different molecular sizes plays in HS redox reactions in microenvironments with varying pore sizes. We used dialysis experiments to separate bulk humic acids (HA) into low molecular weight fractions (LMWF) and retentate, that is, the HA remaining in the dialysis bag. LMWF accounted for only 2% of the total organic carbon content of the HA. However, their reducing capacities per gram of carbon were up to 33 times greater than those of either the bulk HA or the retentate. For a structural/mechanistic understanding of the high reducing capacity of the LMWF, we used fluorescence spectroscopy. We found that the LMWF showed significant fluorescence intensities for quinone-like functional groups, as indicated by the quinoid π-π* transition, that are probably responsible for the high reducing capacities. Therefore, the small-sized HS fraction can play a major role in the redox transformation of metals or pollutants trapped in soil micropores (<2.5 nm diameter).

  18. Dimension-independent likelihood-informed MCMC

    DOE PAGES

    Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  19. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
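
    The two-step map construction, a Gaussian kernel density per data set followed by a weighted linear combination, can be sketched compactly. The locations, kernel bandwidths, and combination weights below are fabricated for illustration; in the study the weights come from structured expert judgment and the uncertainty is treated with a doubly stochastic approach, neither of which is reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Hypothetical vent and fissure locations (km, caldera-centered) from two data sets.
vents    = rng.normal(0.0, 1.0, size=(2, 40))
fissures = np.vstack([rng.normal(1.0, 1.5, 60), rng.normal(-0.5, 1.0, 60)])

# (i) spatial probability density for each data set via a Gaussian kernel.
kde_vents, kde_fissures = gaussian_kde(vents), gaussian_kde(fissures)

# (ii) weighted linear combination of the density maps (weights purely illustrative).
w_vents, w_fissures = 0.6, 0.4
x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
pts = np.vstack([x.ravel(), y.ravel()])
pdf = (w_vents * kde_vents(pts) + w_fissures * kde_fissures(pts)).reshape(x.shape)

# Normalize to a probability per grid cell and report the peak-probability location.
cell_area = (x[0, 1] - x[0, 0]) * (y[1, 0] - y[0, 0])
prob = pdf * cell_area / np.sum(pdf * cell_area)
i, j = np.unravel_index(np.argmax(prob), prob.shape)
print(f"highest vent-opening probability near x={x[i, j]:.2f} km, y={y[i, j]:.2f} km")
```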

  20. Use of probabilistic weights to enhance linear regression myoelectric control

    NASA Astrophysics Data System (ADS)

    Smith, Lauren H.; Kuiken, Todd A.; Hargrove, Levi J.

    2015-12-01

    Objective. Clinically available prostheses for transradial amputees do not allow simultaneous myoelectric control of degrees of freedom (DOFs). Linear regression methods can provide simultaneous myoelectric control, but frequently also result in difficulty with isolating individual DOFs when desired. This study evaluated the potential of using probabilistic estimates of categories of gross prosthesis movement, which are commonly used in classification-based myoelectric control, to enhance linear regression myoelectric control. Approach. Gaussian models were fit to electromyogram (EMG) feature distributions for three movement classes at each DOF (no movement, or movement in either direction) and used to weight the output of linear regression models by the probability that the user intended the movement. Eight able-bodied and two transradial amputee subjects worked in a virtual Fitts' law task to evaluate differences in controllability between linear regression and probability-weighted regression for an intramuscular EMG-based three-DOF wrist and hand system. Main results. Real-time and offline analyses in able-bodied subjects demonstrated that probability weighting improved performance during single-DOF tasks (p < 0.05) by preventing extraneous movement at additional DOFs. Similar results were seen in experiments with two transradial amputees. Though goodness-of-fit evaluations suggested that the EMG feature distributions showed some deviations from the Gaussian, equal-covariance assumptions used in this experiment, the assumptions were sufficiently met to provide improved performance compared to linear regression control. Significance. Use of probability weights can improve the ability to isolate individual DOFs during linear regression myoelectric control, while maintaining the ability to simultaneously control multiple DOFs.
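
    The weighting scheme for a single DOF can be sketched as follows: Gaussian (shared-covariance) class models give the posterior probability that movement is intended, and that probability scales the linear regression output. The feature simulation, three-class split, and function names below are illustrative stand-ins, not the authors' feature set or training procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Hypothetical training data for one DOF: EMG features and the intended velocity.
n, n_feat = 600, 4
X = rng.normal(size=(n, n_feat))
velocity = X @ np.array([0.8, -0.4, 0.0, 0.0]) + 0.1 * rng.standard_normal(n)
labels = np.digitize(velocity, [-0.3, 0.3])     # 0: negative, 1: no movement, 2: positive

# Linear regression model for the DOF velocity.
reg = LinearRegression().fit(X, velocity)

# Gaussian model per movement class with a shared covariance (LDA-style).
means = np.array([X[labels == c].mean(axis=0) for c in range(3)])
cov = np.cov(X.T)
priors = np.bincount(labels, minlength=3) / n

def probability_weighted_output(x):
    """Scale the regression output by P(movement intended | features)."""
    like = np.array([multivariate_normal.pdf(x, mean=m, cov=cov) for m in means])
    post = like * priors
    post /= post.sum()
    p_move = post[0] + post[2]                  # probability of any movement at this DOF
    return p_move * reg.predict(x.reshape(1, -1))[0]

x_rest = np.zeros(n_feat)                       # feature vector near the "no movement" class
x_move = means[2]                               # feature vector typical of positive movement
print(f"rest:     raw={reg.predict(x_rest.reshape(1, -1))[0]:+.3f}  "
      f"weighted={probability_weighted_output(x_rest):+.3f}")
print(f"movement: raw={reg.predict(x_move.reshape(1, -1))[0]:+.3f}  "
      f"weighted={probability_weighted_output(x_move):+.3f}")
```

    At rest the weighted output is pulled toward zero, which is the mechanism by which extraneous movement at inactive DOFs is suppressed.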

  1. Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias

    PubMed Central

    Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro

    2011-01-01

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029

  2. Topology of two-dimensional turbulent flows of dust and gas

    NASA Astrophysics Data System (ADS)

    Mitra, Dhrubaditya; Perlekar, Prasad

    2018-04-01

    We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes numbers, St<1. We solve for the particles using both a Lagrangian and an Eulerian approach (with a shock-capturing scheme). In the latter, the particles are described by a dust-density field and a dust-velocity field. We find the following: the dust-density field in our Eulerian simulations has the same correlation dimension d2 as obtained from the clustering of particles in the Lagrangian simulations for St<1; the cumulative probability distribution function of the dust density coarse grained over a scale r, in the inertial range, has a left tail with a power-law falloff indicating the presence of voids; the energy spectrum of the dust velocity has a power-law range with an exponent that is the same as the gas-velocity spectrum except at very high Fourier modes; the compressibility of the dust-velocity field is proportional to St². We quantify the topological properties of the dust velocity and the gas velocity through their gradient matrices, called A and B, respectively. Our DNS confirms that the statistics of topological properties of B are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighted by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C/St) with a constant C ≈ 0.1. The statistical distribution of different topological structures that appear in the dust flow is different in Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity. In both of these cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has fewer outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.

  3. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Unlike external a priori information, the self-extracted information consists of parameters derived exclusively from analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. We therefore use a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their own directions, and this characteristic is also present in their probability tomography results. We therefore combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result from which a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples inverted with and without the a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges more sharply. The method is finally applied to field-measured ΔΤ data from an iron mine in China and performs well. Reference: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for the Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).

  4. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
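
    A toy Python sketch of the adaptive biasing idea, learning bin weights on the fly and penalizing already-visited bins in a Metropolis sampler. The bimodal target, the binning, the proposal scale, and the decreasing update schedule are illustrative assumptions, not the algorithm analyzed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      log_target = lambda x: np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

      edges = np.linspace(-6, 6, 13)              # partition of the space into bins
      log_w = np.zeros(len(edges) - 1)            # running estimates of the log bin weights

      def bin_of(x):
          return int(np.clip(np.digitize(x, edges) - 1, 0, len(log_w) - 1))

      x, gamma = 0.0, 0.5
      for step in range(20000):
          y = x + rng.normal(scale=1.0)
          # Bias the target by the current weight estimates (penalize visited bins).
          log_acc = (log_target(y) - log_w[bin_of(y)]) - (log_target(x) - log_w[bin_of(x)])
          if np.log(rng.random()) < log_acc:
              x = y
          log_w[bin_of(x)] += gamma / (1 + step / 1000.0)   # decreasing updates aid convergence

      print(log_w - log_w.max())   # estimated log bin weights up to an additive constant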

  5. Decision analysis with cumulative prospect theory.

    PubMed

    Bayoumi, A M; Redelmeier, D A

    2000-01-01

    Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
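
    A small Python sketch of rank-dependent probability weighting of the kind applied above, using the Tversky-Kahneman (1992) one-parameter weighting function as one common choice (gamma = 1 recovers untransformed probabilities). The gamble and parameter values are illustrative, not those of the AIDS decision model.

      def tk_weight(p, gamma=0.61):
          # Tversky-Kahneman probability weighting function.
          return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

      def cpt_value(outcomes, probs, gamma=0.61):
          # Cumulative (rank-dependent) weighting for non-negative outcomes:
          # sort from best to worst and weight each outcome by the difference
          # of w(.) evaluated at successive cumulative probabilities.
          pairs = sorted(zip(outcomes, probs), key=lambda op: -op[0])
          value, cum_prev = 0.0, 0.0
          for x, p in pairs:
              cum = cum_prev + p
              value += (tk_weight(cum, gamma) - tk_weight(cum_prev, gamma)) * x
              cum_prev = cum
          return value

      # A standard-gamble style comparison, untransformed vs. transformed.
      print(cpt_value([1.0, 0.0], [0.9, 0.1], gamma=1.0))   # 0.9
      print(cpt_value([1.0, 0.0], [0.9, 0.1], gamma=0.61))  # < 0.9: a large p is underweighted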

  6. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  7. Trends in Canadian Birth Weights, 1971 to 1989

    PubMed Central

    Wadhera, S.; Millar, W. J.; Nimrod, Carl

    1992-01-01

    This paper outlines levels and trends in birth weights of singleton births in Canada between 1971 and 1989. It relates these birth weights to maternal age, marital status, and parity and to gestational age. From 1971 to 1989, the median birth weight of all singletons increased by 104 g, or 3.1%. The proportion of low birth weight babies declined, probably contributing to improved infant mortality rates. PMID:21221364

  8. End-to-end distance and contour length distribution functions of DNA helices

    NASA Astrophysics Data System (ADS)

    Zoli, Marco

    2018-06-01

    I present a computational method to evaluate the end-to-end and the contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of the sequence length. The obtained values for the most probable end-to-end distance and contour length, providing a measure of the global molecule size, are used to examine the DNA flexibility at short length scales. It is found that, even in molecules with fewer than ~60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.

  9. Hazard Function Estimation with Cause-of-Death Data Missing at Random

    PubMed Central

    Wang, Qihua; Dinse, Gregg E.; Liu, Chunling

    2010-01-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874

  10. Why anthropic reasoning cannot predict Lambda.

    PubMed

    Starkman, Glenn D; Trotta, Roberto

    2006-11-17

    We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.

  11. On the Composition of Risk Preference and Belief

    ERIC Educational Resources Information Center

    Wakker, Peter P.

    2004-01-01

    Prospect theory assumes nonadditive decision weights for preferences over risky gambles. Such decision weights generalize additive probabilities. This article proposes a decomposition of decision weights into a component reflecting risk attitude and a new component depending on belief. The decomposition is based on an observable preference…

  12. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    PubMed

    Shimansky, Yury P

    2009-12-01

    Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
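
    A hypothetical, minimal Python sketch of the regulation-of-modification-probability idea: each weight keeps stepping in its current direction, and the probability of reversing that direction rises when the objective worsens (the run-and-tumble analogy). The toy objective, step size, and flip probabilities are assumptions, not the study's network model.

      import numpy as np

      rng = np.random.default_rng(0)

      def objective(w):
          # Toy objective: squared distance to a hidden target weight vector.
          target = np.array([0.5, -1.0, 2.0])
          return np.sum((w - target) ** 2)

      def rmp_optimize(w, steps=5000, eta=0.01, p_low=0.05, p_high=0.5):
          direction = rng.choice([-1.0, 1.0], size=w.shape)
          prev = objective(w)
          for _ in range(steps):
              w = w + eta * direction
              cur = objective(w)
              # Objective got worse -> "tumble" (flip each direction) more often.
              p_flip = p_high if cur > prev else p_low
              flips = rng.random(w.shape) < p_flip
              direction[flips] *= -1.0
              prev = cur
          return w

      print(rmp_optimize(np.zeros(3)))   # approaches the hidden target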

  13. Selection for territory acquisition is modulated by social network structure in a wild songbird

    PubMed Central

    Farine, D R; Sheldon, B C

    2015-01-01

    The social environment may be a key mediator of selection that operates on animals. In many cases, individuals may experience selection not only as a function of their phenotype, but also as a function of the interaction between their phenotype and the phenotypes of the conspecifics they associate with. For example, when animals settle after dispersal, individuals may benefit from arriving early, but, in many cases, these benefits will be affected by the arrival times of other individuals in their local environment. We integrated a recently described method for calculating assortativity on weighted networks, which is the correlation between an individual's phenotype and that of its associates, into an existing framework for measuring the magnitude of social selection operating on phenotypes. We applied this approach to large-scale data on social network structure and the timing of arrival into the breeding area over three years. We found that late-arriving individuals had a reduced probability of breeding. However, the probability of breeding was also influenced by individuals’ social networks. Associating with late-arriving conspecifics increased the probability of successfully acquiring a breeding territory. Hence, social selection could offset the effects of nonsocial selection. Given parallel theoretical developments of the importance of local network structure on population processes, and increasing data being collected on social networks in free-living populations, the integration of these concepts could yield significant insights into social evolution. PMID:25611344

  14. Optimization Testbed Cometboards Extended into Stochastic Domain

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.

    2010-01-01

    COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk, or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traces out an inverted-S-shaped graph. The center of the graph corresponds to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, which corresponds to unity for reliability. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool, the fast probabilistic integrator (the FPI module of the NESSUS software) was the probabilistic calculator, and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.

  15. Influence of extraction pH on the foaming, emulsification, oil-binding and visco-elastic properties of marama protein.

    PubMed

    Gulzar, Muhammad; Taylor, John Rn; Minnaar, Amanda

    2017-11-01

    Marama bean protein, as extracted previously at pH 8, forms a viscous, adhesive and extensible dough. To obtain a protein isolate with optimum functional properties, protein extraction under slightly acidic conditions (pH 6) was investigated. Two-dimensional electrophoresis showed that pH 6 extracted marama protein lacked some basic 11S legumin polypeptides, present in pH 8 extracted protein. However, it additionally contained acidic high molecular weight polypeptides (∼180 kDa), which were disulfide crosslinked into larger proteins. pH 6 extracted marama proteins had similar emulsification properties to soy protein isolate and several times higher foaming capacity than pH 8 extracted protein, egg white and soy protein isolate. pH 6 extracted protein dough was more elastic than pH 8 extracted protein, approaching the elasticity of wheat gluten. Marama protein extracted at pH 6 has excellent food-type functional properties, probably because it lacks some 11S polypeptides but has additional high molecular weight proteins. © 2017 Society of Chemical Industry.

  16. On the log-normality of historical magnetic-storm intensity statistics: implications for extreme-event probabilities

    USGS Publications Warehouse

    Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete

    2015-01-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), with a wide 95% confidence interval of [490, 1187] nT.
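
    An illustrative Python sketch of the fitting-and-extrapolation step (not the USGS analysis itself): a maximum-likelihood fit of a log-normal distribution to storm maxima and conversion of the tail probability into an expected number of events per century. The −Dst values and the storm rate below are placeholders, not the 1957-2012 data.

      import numpy as np
      from scipy import stats

      dst_maxima = np.array([120., 150., 90., 240., 330., 589., 110., 205.])  # placeholder -Dst maxima (nT)

      shape, loc, scale = stats.lognorm.fit(dst_maxima, floc=0)                # MLE with location fixed at 0
      p_exceed = stats.lognorm.sf(850.0, shape, loc=loc, scale=scale)          # P(-Dst >= 850 nT) per storm

      storms_per_year = len(dst_maxima) / 56.0                                 # placeholder storm rate
      events_per_century = 100.0 * storms_per_year * p_exceed
      print(events_per_century)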

  17. Study on Failure of Third-Party Damage for Urban Gas Pipeline Based on Fuzzy Comprehensive Evaluation.

    PubMed

    Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong

    2016-01-01

    Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using analytic hierarchy process theory and fuzzy mathematics. A fault tree of third-party damage containing 56 basic events was built through hazard identification of third-party damage. Fuzzy evaluation of the basic event probabilities was conducted using the expert judgment method and the membership functions of fuzzy sets. The weight of each expert was determined and the evaluation opinions were modified using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was then calculated. Taking the gas pipelines of a large provincial capital city as an example, the risk assessment produced by the method was shown to conform to the actual situation, providing a basis for safety risk prevention.

  18. A Gibbs sampler for motif detection in phylogenetically close sequences

    NASA Astrophysics Data System (ADS)

    Siddharthan, Rahul; van Nimwegen, Erik; Siggia, Eric

    2004-03-01

    Genes are regulated by transcription factors that bind to DNA upstream of genes and recognize short conserved ``motifs'' in a random intergenic ``background''. Motif-finders such as the Gibbs sampler compare the probability of these short sequences being represented by ``weight matrices'' to the probability of their arising from the background ``null model'', and explore this space (analogous to a free-energy landscape). But closely related species may show conservation not because of functional sites but simply because they have not had sufficient time to diverge, so conventional methods will fail. We introduce a new Gibbs sampler algorithm that accounts for common ancestry when searching for motifs, while requiring minimal ``prior'' assumptions on the number and types of motifs, assessing the significance of detected motifs by ``tracking'' clusters that stay together. We apply this scheme to motif detection in sporulation-cycle genes in the yeast S. cerevisiae, using recent sequences of other closely-related Saccharomyces species.

  19. The performance of different propensity score methods for estimating absolute effects of treatments on survival outcomes: A simulation study.

    PubMed

    Austin, Peter C; Schuster, Tibor

    2016-10-01

    Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.
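
    A minimal Python sketch of one of the approaches compared above, inverse probability of treatment weighting for an absolute survival contrast. The column names, the plain logistic propensity model, and the coarse weighted Kaplan-Meier are illustrative assumptions, not the simulation study's implementation.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression

      def iptw_survival_difference(df, covariates, horizon):
          ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"]).predict_proba(df[covariates])[:, 1]
          w = np.where(df["treated"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))   # ATE weights

          def weighted_km(sub, weights, horizon):
              # Very coarse weighted Kaplan-Meier evaluated at a single horizon.
              order = np.argsort(sub["time"].to_numpy())
              t = sub["time"].to_numpy()[order]
              e = sub["event"].to_numpy()[order]
              wt = weights[order]
              s = 1.0
              for ti, ei, wi in zip(t, e, wt):
                  if ti > horizon:
                      break
                  at_risk = wt[t >= ti].sum()
                  if ei == 1 and at_risk > 0:
                      s *= 1.0 - wi / at_risk
              return s

          s1 = weighted_km(df[df["treated"] == 1], w[df["treated"] == 1], horizon)
          s0 = weighted_km(df[df["treated"] == 0], w[df["treated"] == 0], horizon)
          return s1 - s0   # absolute difference in survival probability at `horizon`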

  20. [Gene method for inconsistent hydrological frequency calculation. I: Inheritance, variability and evolution principles of hydrological genes].

    PubMed

    Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie

    2018-04-01

    A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheritance characteristics, but also deterministic components reflecting variability characteristics, such as jumps, trends, periodicity, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency study. We then proposed a new concept of hydrological genes, by analogy with biological genes, to describe inconsistent hydrological processes. The hydrological genes were constructed using moment methods, such as general moments, weight function moments, probability weighted moments and L-moments. Meanwhile, the five components of a stochastic hydrological process, including the jump, trend, periodic, dependence and pure random components, were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered and the inheritance, variability and evolution principles were fully described. Our study contributes to revealing the inheritance, variability and evolution principles in the probability distributions of hydrological elements.

  1. Modelling the occurrence and severity of enoxaparin-induced bleeding and bruising events

    PubMed Central

    Barras, Michael A; Duffull, Stephen B; Atherton, John J; Green, Bruce

    2009-01-01

    AIMS To develop a population pharmacokinetic–pharmacodynamic model to describe the occurrence and severity of bleeding or bruising as a function of enoxaparin exposure. METHODS Data were obtained from a randomized controlled trial (n = 118) that compared conventional dosing of enoxaparin (product label) with an individualized dosing regimen. Anti-Xa concentrations were sampled using a sparse design and the size, location and type of bruising and bleeding event, during enoxaparin therapy, were collected daily. A population pharmacokinetic–pharmacodynamic analysis was performed using nonlinear mixed effects techniques. The final model was used to explore how the probability of events in patients with obesity and/or renal impairment varied under differing dosing strategies. RESULTS Three hundred and forty-nine anti-Xa concentrations were available for analysis. A two-compartment first-order absorption and elimination model best fit the data, with lean body weight describing between-subject variability in clearance and central volume of distribution. A three-category proportional-odds model described the occurrence and severity of events as a function of both cumulative enoxaparin AUC (cAUC) and subject age. Simulations showed that individualized dosing decreased the probability of a bleeding or major bruising event when compared with conventional dosing, which was most noticeable in subjects with obesity and renal impairment. CONCLUSIONS The occurrence and severity of a bleeding or major bruising event to enoxaparin, administered for the treatment of a thromboembolic disease, can be described as a function of both cAUC and subject age. Individualized dosing of enoxaparin will reduce the probability of an event. PMID:19916994

  2. Preformulation considerations for controlled release dosage forms. Part III. Candidate form selection using numerical weighting and scoring.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Two numerical methods, Decision Analysis (DA) and Potential Problem Analysis (PPA), are presented as alternative selection methods to the logical method presented in Part I. In DA, properties are weighted and outcomes are scored; the weighted scores for each candidate are totaled, final selection is based on the totals, and higher scores indicate better candidates. In PPA, potential problems are assigned a seriousness factor and test outcomes are used to define the probability of occurrence; the seriousness-probability products are totaled, and forms with minimal scores are preferred. DA and PPA have never been compared to the logical-elimination method. Additional data were available for two forms of McN-5707, providing complete preformulation data for five candidate forms. Weight and seriousness factors (independent variables) were obtained from a survey of experienced formulators. Scores and probabilities (dependent variables) were provided independently by Preformulation. The rankings of the five candidate forms, from best to worst, were similar for all three methods. These results validate the applicability of DA and PPA for candidate form selection. DA and PPA are particularly applicable in cases where there are many candidate forms and where each form has some degree of unfavorable properties.

  3. Pseudo Bayes Estimates for Test Score Distributions and Chained Equipercentile Equating. Research Report. ETS RR-09-47

    ERIC Educational Resources Information Center

    Moses, Tim; Oh, Hyeonjoo J.

    2009-01-01

    Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
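
    A small Python sketch of the pseudo Bayes idea named above, shrinking raw score-distribution proportions toward model-based probabilities. The form of the data-dependent weight and the placeholder inputs are assumptions, not the report's exact estimator.

      import numpy as np

      def pseudo_bayes(counts, model_probs, K=1.0):
          # counts      - observed frequencies for each score value
          # model_probs - model-based (smoothed) probabilities summing to 1
          # K           - weight given to the model component
          n = counts.sum()
          raw = counts / n
          lam = n / (n + K)          # more data -> more weight on the raw proportions
          return lam * raw + (1.0 - lam) * model_probs

      counts = np.array([2., 0., 5., 9., 4.])                  # placeholder raw frequencies
      model_probs = np.array([0.08, 0.12, 0.25, 0.35, 0.20])   # placeholder smoothed probabilities
      print(pseudo_bayes(counts, model_probs, K=5.0))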

  4. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  5. Accounting for Selection Bias in Studies of Acute Cardiac Events.

    PubMed

    Banack, Hailey R; Harper, Sam; Kaufman, Jay S

    2018-06-01

    In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses, the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than the unweighted, adjusted estimates across all risk factor categories. This illustrates the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of this method for addressing a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  6. Improved Neuroimaging Atlas of the Dentate Nucleus.

    PubMed

    He, Naying; Langley, Jason; Huddleston, Daniel E; Ling, Huawei; Xu, Hongmin; Liu, Chunlei; Yan, Fuhua; Hu, Xiaoping P

    2017-12-01

    The dentate nucleus (DN) of the cerebellum is the major output nucleus of the cerebellum and is rich in iron. Quantitative susceptibility mapping (QSM) provides better iron-sensitive MRI contrast to delineate the boundary of the DN than either T2-weighted images or susceptibility-weighted images. Prior DN atlases used T2-weighted or susceptibility-weighted images to create DN atlases. Here, we employ QSM images to develop an improved dentate nucleus atlas for use in imaging studies. The DN was segmented in QSM images from 38 healthy volunteers. The resulting DN masks were transformed to a common space and averaged to generate the DN atlas. The center of mass of the left and right sides of the QSM-based DN atlas in the Montreal Neurological Institute space was -13.8, -55.8, and -36.4 mm, and 13.8, -55.7, and -36.4 mm, respectively. The maximal probability and mean probability of the DN atlas with the individually segmented DNs in this cohort were 100 and 39.3%, respectively, in contrast to the maximum probability of approximately 75% and the mean probability of 23.4 to 33.7% with earlier DN atlases. Using QSM, which provides superior iron-sensitive MRI contrast for delineating iron-rich structures, an improved atlas for the dentate nucleus has been generated. The atlas can be applied to investigate the role of the DN in both normal cortico-cerebellar physiology and the variety of disease states in which it is implicated.

  7. Neural Mechanisms for Integrating Prior Knowledge and Likelihood in Value-Based Probabilistic Inference

    PubMed Central

    Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.

    2015-01-01

    In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
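
    A toy Python sketch of the prior-likelihood integration manipulated in the task: a beta prior over the reward probability combined with a binomial likelihood, whose posterior mean shifts toward the data as the sample grows. The prior parameters and counts are illustrative assumptions.

      def posterior_mean_reward_prob(prior_a, prior_b, successes, n):
          # Posterior mean of the reward probability after observing
          # `successes` rewards out of `n` draws, with a Beta(prior_a, prior_b) prior.
          return (prior_a + successes) / (prior_a + prior_b + n)

      # As the sample size n grows, the likelihood dominates the prior,
      # mirroring the behavioral finding that subjects weigh likelihood more.
      print(posterior_mean_reward_prob(6, 4, 2, 4))    # small sample: pulled toward the prior mean 0.6
      print(posterior_mean_reward_prob(6, 4, 20, 40))  # large sample: close to the empirical rate 0.5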

  8. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
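
    A short Python sketch of the quantities involved: the net benefit of a risk model at a threshold probability, and a weighted area under the net benefit curve in which the threshold-probability density `w_pdf` stands in for the distribution estimated in the paper (an assumption here, supplied by the caller).

      import numpy as np

      def net_benefit(y, p_hat, pt):
          # Net benefit of a risk model at threshold probability pt.
          n = len(y)
          pred_pos = p_hat >= pt
          tp = np.sum(pred_pos & (y == 1))
          fp = np.sum(pred_pos & (y == 0))
          return tp / n - (fp / n) * pt / (1.0 - pt)

      def weighted_anbc(y, p_hat, thresholds, w_pdf):
          # Weighted area under the net benefit curve over a threshold range of interest.
          nb = np.array([net_benefit(y, p_hat, t) for t in thresholds])
          w = w_pdf(thresholds)
          w = w / np.trapz(w, thresholds)        # normalize the weight density over the range
          return np.trapz(nb * w, thresholds)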

  9. Synaptic convergence regulates synchronization-dependent spike transfer in feedforward neural networks.

    PubMed

    Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum

    2017-12-01

    Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer can be affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distributions as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. We then examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions generated by the convergent synapses in each model. Our results suggest that the structure of the feedforward convergence is a crucial factor for correlation-dependent spike control and thus must be considered important for understanding the mechanism of information transfer in the brain.

  10. Reader reaction to "a robust method for estimating optimal treatment regimes" by Zhang et al. (2012).

    PubMed

    Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C

    2015-03-01

    A recent article (Zhang et al., 2012, Biometrics 168, 1010-1018) compares regression based and inverse probability based methods of estimating an optimal treatment regime and shows for a small number of covariates that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012, Biometrics 168, 1010-1018), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression and inverse probability weighted based methods improves their properties. © 2014, The International Biometric Society.

  11. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. The solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.

  12. Structure and biochemical functions of four simian virus 40 truncated large-T antigens.

    PubMed Central

    Chaudry, F; Harvey, R; Smith, A E

    1982-01-01

    The structure of four abnormal T antigens which are present in different simian virus 40 (SV40)-transformed mouse cell lines was studied by tryptic peptide mapping, partial proteolysis fingerprinting, immunoprecipitation with monoclonal antibodies, and in vitro translation. The results obtained allowed us to deduce that these proteins, which have apparent molecular weights of 15,000, 22,000, 33,000 and 45,000, are truncated forms of large-T antigen extending by different amounts into the amino acid sequences unique to large-T. The proteins are all phosphorylated, probably at a site between amino acids 106 and 123. The mRNAs coding for the proteins probably contain the normal large-T splice but are shorter than the normal transcripts of the SV40 early region. The truncated large-Ts were tested for the ability to bind to double-stranded DNA-cellulose. This showed that the 33,000- and 45,000-molecular-weight polypeptides contained sequences sufficient for binding under the conditions used, whereas the 15,000- and 22,000-molecular-weight forms did not. Together with published data, this allows the tentative mapping of a region of SV40 large-T between amino acids 109 and 272 that is necessary and may be sufficient for the binding to double-stranded DNA-cellulose in vitro. None of the truncated large-T species formed a stable complex with the host cell protein referred to as nonviral T-antigen or p53, suggesting that the carboxy-terminal sequences of large-T are necessary for complex formation. PMID:6292504

  13. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging losses. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses compared with conditions without the project. Moreover, the waterlogging losses with and without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. First, on the basis of a copula function, the joint distribution of rainstorms and water levels is established to obtain their joint probability density function. Second, according to the two-dimensional joint probability density distribution, the domain of integration is determined and divided into small domains; for each small domain, the probability is calculated, together with the difference between the average waterlogging losses with and without the project (the regional benefit of the waterlogging control project), conditional on rainstorms in the waterlogging-prone zone meeting the water level in the drainage-accepting zone. Finally, the weighted mean of the project benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures of waterlogging control project benefit estimation. The results show that the benefit estimation model is applicable to changing conditions in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, considering all conditions in which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. Thus, the estimation method can reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
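
    A schematic Python sketch of the weighting step: the benefit is the probability-weighted mean of loss reductions over a grid of rainstorm/water-level combinations. The joint density and the loss functions are placeholders for the copula-based quantities fitted in the paper.

      import numpy as np

      def project_benefit(rain_grid, level_grid, joint_pdf, loss_without, loss_with):
          # Expected reduction in waterlogging loss, integrating over the joint
          # distribution of rainstorm magnitude and receiving water level.
          benefit, total_p = 0.0, 0.0
          for i, r in enumerate(rain_grid[:-1]):
              for j, h in enumerate(level_grid[:-1]):
                  dr = rain_grid[i + 1] - r
                  dh = level_grid[j + 1] - h
                  p = joint_pdf(r, h) * dr * dh                    # probability of the small domain
                  benefit += p * (loss_without(r, h) - loss_with(r, h))
                  total_p += p
          return benefit / total_p                                  # probability-weighted mean benefit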

  14. Intelligent call admission control for multi-class services in mobile cellular networks

    NASA Astrophysics Data System (ADS)

    Ma, Yufeng; Hu, Xiulin; Zhang, Yunyu

    2005-11-01

    Scarcity of the spectrum resource and mobility of users make quality of service (QoS) provision a critical issue in mobile cellular networks. This paper presents a fuzzy call admission control scheme to meet the requirement of the QoS. A performance measure is formed as a weighted linear function of new call and handoff call blocking probabilities of each service class. Simulation compares the proposed fuzzy scheme with complete sharing and guard channel policies. Simulation results show that fuzzy scheme has a better robust performance in terms of average blocking criterion.

  15. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sun-spot process are demonstrated.

  16. Weighted Optimization-Based Distributed Kalman Filter for Nonlinear Target Tracking in Collaborative Sensor Networks.

    PubMed

    Chen, Jie; Li, Jiahong; Yang, Shuanghua; Deng, Fang

    2017-11-01

    The identification of the nonlinearity and coupling is crucial in nonlinear target tracking problems in collaborative sensor networks. According to the adaptive Kalman filtering (KF) method, the nonlinearity and coupling can be regarded as the model noise covariance, and estimated by minimizing the innovation or residual errors of the states. However, the method requires a large time window of data to achieve reliable covariance measurement, making it impractical for nonlinear systems that are rapidly changing. To deal with this problem, a weighted optimization-based distributed KF algorithm (WODKF) is proposed in this paper. The algorithm enlarges the data size of each sensor using the received measurements and state estimates from its connected sensors instead of the time window. A new cost function is defined as the weighted sum of the bias and oscillation of the state, and is used to find the best estimate of the model noise covariance. The bias and oscillation of the state of each sensor are estimated by polynomial fitting over a time window of state estimates and measurements of the sensor and its neighbors, weighted by the measurement noise covariance. The best estimate of the model noise covariance is computed by minimizing the weighted cost function using the exhaustive method. A sensor selection method is added to the algorithm to decrease the computation load of the filter and increase the scalability of the sensor network. The existence, suboptimality and stability analysis of the algorithm are given. The local probability data association method is used in the proposed algorithm for the multitarget tracking case. The algorithm is demonstrated in simulations on tracking examples for a random signal, one nonlinear target, and four nonlinear targets. Results show the feasibility and superiority of WODKF over other filtering algorithms for a large class of systems.

  17. It's Not All About Calories

    Cancer.gov

    Losing weight is about balancing calories in (food and drink) with calories out (exercise). Sounds simple, right? But if it were that simple, you and the millions of other women struggling with their weight probably would have figured it out.

  18. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in water distribution system (WDS). This model determines minimization of risk which is caused by simultaneous multi-point contamination injection in WDS using CVaR approach. The CVaR considers uncertainties of contamination injection in the form of probability distribution function and calculates low-probability extreme events. In this approach, extreme losses occur at tail of the losses distribution function. Four-objective optimization model based on NSGA-II algorithm is developed to minimize losses of contamination injection (through CVaR of affected population and detection time) and also minimize the two other main criteria of optimal placement of sensors including probability of undetected events and cost. Finally, to determine the best solution, Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion on PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with suitable distribution that approximately cover all regions of WDS. Optimal values related to CVaR of affected population and detection time as well as probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 mins and 0.045%, respectively. The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.

  19. Modeling the solute transport by particle-tracing method with variable weights

    NASA Astrophysics Data System (ADS)

    Jiang, J.

    2016-12-01

    The particle-tracing method is usually used to simulate solute transport in fracture media. In this method, the concentration at a point is proportional to the number of particles visiting that point. However, the method is rather inefficient at points with small concentration: few particles visit these points, which leads to violent oscillations or gives a concentration of zero. In this paper, we propose a particle-tracing method with variable weights, in which the concentration at a point is proportional to the sum of the weights of the particles visiting it. The method adjusts the weight factors during the simulation according to the estimated probabilities of the corresponding walks. If the weight W of a tracked particle is larger than the relative concentration C at the corresponding site, the particle is split into Int(W/C) copies and each copy is simulated independently with weight W/Int(W/C). If the weight W of a tracked particle is less than the relative concentration C at the corresponding site, the particle continues to be tracked with probability W/C and its weight is adjusted to C. By adjusting the weights, the number of visiting particles is distributed evenly over the whole range. Through this variable-weight scheme, we can eliminate the violent oscillations and increase the accuracy by orders of magnitude.
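
    A minimal Python sketch of the weight-adjustment rule described above (splitting when W > C, probabilistic survival with weight reset when W < C). The site/concentration bookkeeping is an illustrative assumption; only the rule itself follows the text.

      import random

      def adjust_weights(particles, concentration):
          # particles: list of (site, weight); concentration: dict mapping site -> relative C.
          out = []
          for site, w in particles:
              c = concentration[site]
              if w > c:
                  n_copies = max(int(w // c), 1) if c > 0 else 1
                  for _ in range(n_copies):
                      out.append((site, w / n_copies))     # split into copies of equal weight
              else:
                  if random.random() < w / c:              # keep the particle with probability W/C ...
                      out.append((site, c))                # ... and reset its weight to C
          return out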

  20. State-dependent biasing method for importance sampling in the weighted stochastic simulation algorithm.

    PubMed

    Roh, Min K; Gillespie, Dan T; Petzold, Linda R

    2010-11-07

    The weighted stochastic simulation algorithm (wSSA) was developed by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] to efficiently estimate the probabilities of rare events in discrete stochastic systems. The wSSA uses importance sampling to enhance the statistical accuracy in the estimation of the probability of the rare event. The original algorithm biases the reaction selection step with a fixed importance sampling parameter. In this paper, we introduce a novel method where the biasing parameter is state-dependent. The new method features improved accuracy, efficiency, and robustness.
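
    A minimal sketch of the importance-sampled reaction-selection step may help: the next reaction is drawn from biased propensities and the trajectory weight is corrected by the likelihood ratio. The state-dependent choice of the biased propensities, which is the contribution of this paper, is not reproduced; the array names are assumptions.

    ```python
    import numpy as np

    def biased_reaction_step(a, b, rng):
        """a: true propensities, b: biased propensities for the current state
        (both 1-D numpy arrays).  Returns the chosen reaction index and the
        factor multiplying the trajectory weight so that the rare-event
        probability estimate stays unbiased."""
        a0, b0 = a.sum(), b.sum()
        j = rng.choice(len(b), p=b / b0)               # biased reaction selection
        weight_factor = (a[j] / a0) / (b[j] / b0)      # likelihood-ratio correction
        return j, weight_factor
    ```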

  1. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials will then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing such types of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data. Moreover, certain objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. As for cartilage experiments, FMMs indicate that at least three mixture components are needed for describing the measured histogram. While the mechanical properties of the softer mixture components, often assumed to be associated with Glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. collagen network) considerably changed depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use. Copyright © 2014 Elsevier B.V. All rights reserved.
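
    As a rough illustration of the approach, a finite Gaussian mixture can be fitted to a vector of measured elastic moduli and the number of components chosen by an information criterion; the snippet below uses scikit-learn's GaussianMixture and BIC as stand-ins for the dedicated algorithms and criteria mentioned in the abstract, and the function name is illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_modulus_mixture(moduli, max_components=5):
        """Fit finite Gaussian mixture models to nanoindentation moduli and
        select the number of components by BIC."""
        x = np.asarray(moduli, dtype=float).reshape(-1, 1)
        fits = [GaussianMixture(n_components=k, random_state=0).fit(x)
                for k in range(1, max_components + 1)]
        best = min(fits, key=lambda m: m.bic(x))
        # best.means_, best.covariances_ and best.weights_ then describe the
        # constituents mixed together in the measured histogram.
        return best
    ```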

  2. TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  3. TU-AB-BRB-03: Coverage-Based Treatment Planning to Accommodate Organ Deformable Motions and Contouring Uncertainties for Prostate Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  4. TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unkelbach, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  5. TU-AB-BRB-00: New Methods to Ensure Target Coverage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  6. Estimating inverse probability weights using super learner when weight-model specification is unknown in a marginal structural Cox model context.

    PubMed

    Karim, Mohammad Ehsanul; Platt, Robert W

    2017-06-15

    Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, the assumptions underlying such models may not hold, and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature; however, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight-model specification (linear and/or additive). Our simulations show that, in the presence of weight-model misspecification, and with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
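
    As a rough, point-treatment sketch of the idea (the actual MSCM setting involves time-varying treatment and censoring weights), a stacked ensemble can play the role of the super learner when estimating the treatment-assignment probabilities used to build stabilized inverse probability weights. The candidate library and the use of scikit-learn's StackingClassifier as a stand-in for the SL implementation are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression

    def stabilized_ipw(X, treated):
        """Estimate stabilized inverse probability of treatment weights with a
        cross-validated stacked ensemble.  X: covariate matrix, treated: 0/1
        numpy array.  A sketch only, not the authors' implementation."""
        sl = StackingClassifier(
            estimators=[("logit", LogisticRegression(max_iter=1000)),
                        ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
            final_estimator=LogisticRegression(max_iter=1000),
            cv=5)                                   # cross-validated combination of candidates
        p = sl.fit(X, treated).predict_proba(X)[:, 1]
        p_marg = treated.mean()                     # marginal treatment probability (stabilization)
        return np.where(treated == 1, p_marg / p, (1 - p_marg) / (1 - p))
    ```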

  7. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    PubMed Central

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set-up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of the assumptions required for IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between an odds ratio (OR) of 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
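
    The step-by-step IPW recipe described in the Methods can be sketched as follows for a single follow-up period. The column names and models are hypothetical placeholders, not the Encontros study variables, and a full analysis would also include weights for loss to follow-up and robust (sandwich) standard errors.

    ```python
    import numpy as np
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def ipw_effect(df):
        """IPW sketch for the effect of participation on infection."""
        # 1. Model the probability of participating given baseline covariates.
        ps_model = smf.logit("participated ~ age + baseline_sti + venue", data=df).fit(disp=0)
        ps = ps_model.predict(df)

        # 2. Build stabilized inverse probability weights.
        p_marg = df["participated"].mean()
        df = df.assign(w=np.where(df["participated"] == 1, p_marg / ps, (1 - p_marg) / (1 - ps)))

        # 3. Weighted outcome model; the weighted pseudo-population mimics one in
        #    which participation is independent of the measured covariates.
        #    (Point estimate only: valid variance estimation needs robust SEs.)
        out = smf.glm("infected ~ participated", data=df, freq_weights=df["w"],
                      family=sm.families.Binomial()).fit()
        return np.exp(out.params["participated"])   # odds ratio for participation
    ```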

  8. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as specific combinations of eruptive size and vent position), selected with subjective criteria. Probabilistic hazard assessments (PHA), on the other hand, consistently explore the natural variability of such scenarios. PHA for tephra dispersal requires the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions into classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results are obtained by combining simulations for different volcanological and meteorological conditions, each weighted by its specific probability of occurrence. However, volcanological parameters such as erupted mass, eruption column height and duration, bulk granulometry, and fraction of aggregates typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with a specific probability density function, and meteorological and volcanological inputs are chosen using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weighting events falling within the same size class. While assigning a uniform weight to all events belonging to a size class is the most straightforward idea, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. To overcome this problem, we propose an innovative solution that smoothly links the weight variability within each size class to the variability among the size classes through a common power law, while simultaneously respecting the probability of the different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra-fall PHA, quantified through hazard curves and maps that provide readable results applicable in planning risk mitigation actions, together with the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra-fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions. The results clearly show that PHA accounting for the whole natural variability differs significantly from that based on representative scenarios, as in common volcanic hazard practice.
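
    The within-class weighting can be illustrated with a sketch in which sampled events receive a weight that decays as a common power law of erupted mass and is renormalized so that the weights in each size class sum to that class's conditional probability. The exponent, class edges, and the use of mass as the size measure are illustrative assumptions, not the values adopted for Vesuvius or Campi Flegrei.

    ```python
    import numpy as np

    def event_weights(masses, class_edges, class_probs, b=1.0):
        """Weight sampled eruption scenarios by a common power law of erupted
        mass, renormalized within each size class so that the weights of a
        class sum to its probability conditional on an eruption.
        len(class_probs) must equal len(class_edges) + 1."""
        masses = np.asarray(masses, dtype=float)
        raw = masses ** (-b)                          # common power-law decay with size
        w = np.zeros_like(raw)
        cls = np.digitize(masses, class_edges)        # size class of each sampled event
        for k, p_k in enumerate(class_probs):
            in_k = cls == k
            if in_k.any():
                w[in_k] = p_k * raw[in_k] / raw[in_k].sum()
        return w                                      # weights sum to sum(class_probs)
    ```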

  9. Neural substrates of reward magnitude, probability, and risk during a wheel of fortune decision-making task.

    PubMed

    Smith, Bruce W; Mitchell, Derek G V; Hardin, Michael G; Jazbec, Sandra; Fridberg, Daniel; Blair, R James R; Ernst, Monique

    2009-01-15

    Economic decision-making involves the weighting of magnitude and probability of potential gains/losses. While previous work has examined the neural systems involved in decision-making, there is a need to understand how the parameters associated with decision-making (e.g., magnitude of expected reward, probability of expected reward and risk) modulate activation within these neural systems. In the current fMRI study, we modified the monetary wheel of fortune (WOF) task [Ernst, M., Nelson, E.E., McClure, E.B., Monk, C.S., Munson, S., Eshel, N., et al. (2004). Choice selection and reward anticipation: an fMRI study. Neuropsychologia 42(12), 1585-1597.] to examine in 25 healthy young adults the neural responses to selections of different reward magnitudes, probabilities, or risks. Selection of high, relative to low, reward magnitude increased activity in insula, amygdala, middle and posterior cingulate cortex, and basal ganglia. Selection of low-probability, as opposed to high-probability reward, increased activity in anterior cingulate cortex, as did selection of risky, relative to safe reward. In summary, decision-making that did not involve conflict, as in the magnitude contrast, recruited structures known to support the coding of reward values, and those that integrate motivational and perceptual information for behavioral responses. In contrast, decision-making under conflict, as in the probability and risk contrasts, engaged the dorsal anterior cingulate cortex whose role in conflict monitoring is well established. However, decision-making under conflict failed to activate the structures that track reward values per se. Thus, the presence of conflict in decision-making seemed to significantly alter the pattern of neural responses to simple rewards. In addition, this paradigm further clarifies the functional specialization of the cingulate cortex in processes of decision-making.

  10. A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.

    2003-01-01

    The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research; this is the final report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES was conducted of the Sandia Flame D, a turbulent piloted nonpremixed methane jet flame. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is essentially the mass-weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.

  11. Music, pandas, and muggers: on the affective psychology of value.

    PubMed

    Hsee, Christopher K; Rottenstreich, Yuval

    2004-03-01

    This research investigated the relationship between the magnitude or scope of a stimulus and its subjective value by contrasting 2 psychological processes that may be used to construct preferences: valuation by feeling and valuation by calculation. The results show that when people rely on feeling, they are sensitive to the presence or absence of a stimulus (i.e., the difference between 0 and some scope) but are largely insensitive to further variations of scope. In contrast, when people rely on calculation, they reveal relatively more constant sensitivity to scope. Thus, value is nearly a step function of scope when feeling predominates and is closer to a linear function when calculation predominates. These findings may allow for a novel interpretation of why most real-world value functions are concave and how the processes responsible for nonlinearity of value may also contribute to nonlinear probability weighting. ((c) 2004 APA, all rights reserved)

  12. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  13. Many-body calculations of low energy eigenstates in magnetic and periodic systems with self healing diffusion Monte Carlo: steps beyond the fixed-phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.

    The self-healing diffusion Monte Carlo algorithm (SHDMC) [Reboredo, Hood and Kent, Phys. Rev. B 79, 195117 (2009); Reboredo, ibid. 80, 125110 (2009)] is extended to study the ground and excited states of magnetic and periodic systems. A recursive optimization algorithm is derived from the time evolution of the mixed probability density. The mixed probability density is given by an ensemble of electronic configurations (walkers) with complex weights. This complex weight allows the amplitude of the fixed-node wave function to move away from the trial wave function phase. This novel approach is a generalization of both SHDMC and the fixed-phase approximation [Ortiz, Ceperley and Martin, Phys. Rev. Lett. 71, 2777 (1993)]. When used recursively, it simultaneously improves the node and the phase. The algorithm is demonstrated to converge to the nearly exact solutions of model systems with periodic boundary conditions or applied magnetic fields. The method is also applied to obtain low-energy excitations with a magnetic field or periodic boundary conditions. The potential applications of this new method to study periodic, magnetic, and complex Hamiltonians are discussed.

  14. Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.

    PubMed

    Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro

    2016-09-30

    In this work, a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a Change Detection problem. For this purpose, two sets of SPOT5-PAN images were used, from which Change Detection Indices (CDIs) were calculated. To minimize radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one or another CDI for a change detection process is a subjective task, as each CDI is probably more or less sensitive to certain types of changes. Likewise, this idea might be employed to create and improve a "change map", which can be accomplished by means of the CDI's informational content. For this purpose, information metrics such as the Shannon Entropy and "Specific Information" have been used to weight the change and no-change categories contained in a given CDI before they are introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdf's) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances.

  15. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…

  16. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
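
    A single draw of the weighted finite-population Bayesian bootstrap can be sketched as a weighted Polya urn that expands the observed sample into a synthetic population, which can then be imputed as if it were a simple random sample. This is a rough reading of the single-stage case with weights summing to the population size; consult the article for the exact algorithm and the two-stage (cluster) versions, and note that the names below are illustrative.

    ```python
    import numpy as np

    def wfpbb_synthetic_population(y, w, rng):
        """One weighted Polya-urn draw of a synthetic population from a sample
        y with case weights w (assumed > 1 and summing roughly to the
        population size N)."""
        y = np.asarray(y)
        w = np.asarray(w, dtype=float)
        n = len(y)
        N = int(round(w.sum()))
        l = np.zeros(n)                              # times each unit has been resampled
        draws = []
        for _ in range(N - n):                       # fill in the N - n unobserved units
            probs = (w - 1.0) + l * (N - n) / n      # weight bonus for earlier selections
            probs /= probs.sum()
            i = rng.choice(n, p=probs)
            l[i] += 1
            draws.append(y[i])
        return np.concatenate([y, np.array(draws)])  # observed plus resampled units

    # rng = np.random.default_rng(0); pop = wfpbb_synthetic_population(y, w, rng)
    ```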

  17. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  18. TU-H-CAMPUS-IeP1-01: Bias and Computational Efficiency of Variance Reduction Methods for the Monte Carlo Simulation of Imaging Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, D; Badano, A; Sempau, J

    Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is appropriately modified to keep the simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of the original probability (no VRT) to the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases including analog (no VRT, isotropic distribution) and DS with fractions of 0.2 and 0.8 of the optical photons directed towards the sensor plane. We used 10,000 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged quickly (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel of the point response function between the analog and VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations to increase computational efficiency without introducing bias.

  19. Elicitation of quantitative data from a heterogeneous expert panel: formal process and application in animal health.

    PubMed

    Van der Fels-Klerx, Ine H J; Goossens, Louis H J; Saatkamp, Helmut W; Horst, Suzan H S

    2002-02-01

    This paper presents a protocol for a formal expert judgment process using a heterogeneous expert panel aimed at the quantification of continuous variables. The emphasis is on the process's requirements related to the nature of expertise within the panel, in particular the heterogeneity of both substantive and normative expertise. The process provides the opportunity for interaction among the experts so that they fully understand and agree upon the problem at hand, including qualitative aspects relevant to the variables of interest, prior to the actual quantification task. Individual experts' assessments on the variables of interest, cast in the form of subjective probability density functions, are elicited with a minimal demand for normative expertise. The individual experts' assessments are aggregated into a single probability density function per variable, thereby weighting the experts according to their expertise. Elicitation techniques proposed include the Delphi technique for the qualitative assessment task and the ELI method for the actual quantitative assessment task. Appropriately, the Classical model was used to weight the experts' assessments in order to construct a single distribution per variable. Applying this model, the experts' quality typically was based on their performance on seed variables. An application of the proposed protocol in the broad and multidisciplinary field of animal health is presented. Results of this expert judgment process showed that the proposed protocol in combination with the proposed elicitation and analysis techniques resulted in valid data on the (continuous) variables of interest. In conclusion, the proposed protocol for a formal expert judgment process aimed at the elicitation of quantitative data from a heterogeneous expert panel provided satisfactory results. Hence, this protocol might be useful for expert judgment studies in other broad and/or multidisciplinary fields of interest.

  20. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
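
    For context, the augmented IPW estimator referred to above can be sketched for estimating a mean when responses are missing at random. This is the generic doubly robust estimator, not the empirical-likelihood alternative proposed in the paper, and the models used for the propensity score and outcome regression are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    def aipw_mean(X, y, observed):
        """Augmented IPW estimate of E[Y] when y is missing at random given X.
        Consistent if either the outcome regression or the missingness
        (propensity) model is correctly specified."""
        observed = np.asarray(observed).astype(bool)
        # Propensity of being observed, and outcome regression on complete cases.
        pi = LogisticRegression(max_iter=1000).fit(X, observed).predict_proba(X)[:, 1]
        m = LinearRegression().fit(X[observed], y[observed]).predict(X)
        y_fill = np.where(observed, y, 0.0)          # unobserved y enter only through m
        return np.mean(y_fill * observed / pi - (observed / pi - 1.0) * m)
    ```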

  1. Estimating inverse-probability weights for longitudinal data with dropout or truncation: The xtrccipw command.

    PubMed

    Daza, Eric J; Hudgens, Michael G; Herring, Amy H

    Individuals may drop out of a longitudinal study, rendering their outcomes unobserved but still well defined. However, they may also undergo truncation (for example, death), beyond which their outcomes are no longer meaningful. Kurland and Heagerty (2005, Biostatistics 6: 241-258) developed a method to conduct regression conditioning on nontruncation, that is, regression conditioning on continuation (RCC), for longitudinal outcomes that are monotonically missing at random (for example, because of dropout). This method first estimates the probability of dropout among continuing individuals to construct inverse-probability weights (IPWs), then fits generalized estimating equations (GEE) with these IPWs. In this article, we present the xtrccipw command, which can both estimate the IPWs required by RCC and then use these IPWs in a GEE estimator by calling the glm command from within xtrccipw. In the absence of truncation, the xtrccipw command can also be used to run a weighted GEE analysis. We demonstrate the xtrccipw command by analyzing an example dataset and the original Kurland and Heagerty (2005) data. We also use xtrccipw to illustrate some empirical properties of RCC through a simulation study.

  2. A probabilistic NF2 relational algebra for integrated information retrieval and database systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuhr, N.; Roelleke, T.

    The integration of information retrieval (IR) and database systems requires a data model which allows for modelling documents as entities, representing uncertainty and vagueness and performing uncertain inference. For this purpose, we present a probabilistic data model based on relations in non-first-normal-form (NF2). Here, tuples are assigned probabilistic weights giving the probability that a tuple belongs to a relation. Thus, the set of weighted index terms of a document are represented as a probabilistic subrelation. In a similar way, imprecise attribute values are modelled as a set-valued attribute. We redefine the relational operators for this type of relations such that the result of each operator is again a probabilistic NF2 relation, where the weight of a tuple gives the probability that this tuple belongs to the result. By ordering the tuples according to decreasing probabilities, the model yields a ranking of answers like in most IR models. This effect also can be used for typical database queries involving imprecise attribute values as well as for combinations of database and IR queries.
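
    A toy sketch of the probabilistic relational operators may help: tuples carry membership probabilities, a join multiplies them (assuming tuple independence, a simplification of the NF2 model in the paper), and answers are ranked by decreasing probability, as in IR. All relation contents below are made up for illustration.

    ```python
    from itertools import product

    def pjoin(r, s, on):
        """Join two probabilistic relations (lists of (tuple_dict, prob)),
        multiplying tuple weights and ranking results by probability."""
        out = []
        for (t1, p1), (t2, p2) in product(r, s):
            if all(t1[a] == t2[a] for a in on):
                out.append(({**t1, **t2}, p1 * p2))
        return sorted(out, key=lambda x: -x[1])       # ranked answers, as in IR

    # Documents with weighted index terms joined with a weighted query relation:
    docs = [({"doc": "d1", "term": "ir"}, 0.8), ({"doc": "d2", "term": "db"}, 0.6)]
    query = [({"term": "ir"}, 0.9), ({"term": "db"}, 0.5)]
    print(pjoin(docs, query, on=["term"]))            # d1 is ranked above d2
    ```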

  3. Estimating inverse-probability weights for longitudinal data with dropout or truncation: The xtrccipw command

    PubMed Central

    Hudgens, Michael G.; Herring, Amy H.

    2017-01-01

    Individuals may drop out of a longitudinal study, rendering their outcomes unobserved but still well defined. However, they may also undergo truncation (for example, death), beyond which their outcomes are no longer meaningful. Kurland and Heagerty (2005, Biostatistics 6: 241–258) developed a method to conduct regression conditioning on nontruncation, that is, regression conditioning on continuation (RCC), for longitudinal outcomes that are monotonically missing at random (for example, because of dropout). This method first estimates the probability of dropout among continuing individuals to construct inverse-probability weights (IPWs), then fits generalized estimating equations (GEE) with these IPWs. In this article, we present the xtrccipw command, which can both estimate the IPWs required by RCC and then use these IPWs in a GEE estimator by calling the glm command from within xtrccipw. In the absence of truncation, the xtrccipw command can also be used to run a weighted GEE analysis. We demonstrate the xtrccipw command by analyzing an example dataset and the original Kurland and Heagerty (2005) data. We also use xtrccipw to illustrate some empirical properties of RCC through a simulation study. PMID:29755297

  4. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
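
    The final combination step, an application of the law of total probability, can be sketched as follows; how the time- and magnitude-dependent posterior weights are obtained is specific to e-Bay and is not reproduced here, and the array names are assumptions.

    ```python
    import numpy as np

    def expected_discharge(sim, weights):
        """Law-of-total-probability combination of an ensemble of simulated
        discharges.  sim has shape (n_members, n_times); weights has the same
        shape and holds, for each time step, the posterior probability that
        each precipitation-product/model combination is correct (columns
        summing to 1)."""
        sim = np.asarray(sim, dtype=float)
        weights = np.asarray(weights, dtype=float)
        return (weights * sim).sum(axis=0)            # expected discharge per time step
    ```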

  5. Obesity Management: What Should We Do If Fat Gain Is Necessary to Maintain Body Homeostasis in a Modern World?

    PubMed

    Tremblay, Angelo

    2018-01-01

    The prevalence of overweight has substantially increased over the last decades despite the intent of health professionals and the general population to prevent this trend. Traditionally, this phenomenon has been attributed to unhealthy dietary macronutrient composition and/or to the decrease in physical activity participation. Beyond the influence of these factors, it is more than likely that other factors have influenced energy balance in a context of modernity. These include inadequate sleep, demanding cognitive effort, chemical pollution, and probably others which also have the potential to promote a positive energy balance but which are also part of the reality of success and productivity in a globalized world. As discussed in this paper, many individuals may become conflicted with themselves if they wish to prevent weight gain while pursuing the very factors that determine their socioeconomic success. In this regard, this paper reminds us of the contribution of adipose tissue gain to body homeostasis, which is essential to permit energy balance, especially under lifestyle conditions promoting overfeeding. From a clinical standpoint, this means considering a weight loss program as a search for a compromise between what can be changed to promote a negative energy balance and what can be tolerated by the body in terms of fat loss. Furthermore, if we also consider the impact of pollution on energy balance, for which we currently have no means of reversal, we probably must accept that humans today will have to be more corpulent than their ancestors. In this pessimistic environment, there are still possibilities to do better; however, this will probably require revisiting lifestyle practices according to what the human body and the planet can tolerate as deviations from optimal functioning.

  6. Tackling missing radiographic progression data: multiple imputation technique compared with inverse probability weights and complete case analysis.

    PubMed

    Descalzo, Miguel Á; Garcia, Virginia Villaverde; González-Alvaro, Isidoro; Carbonell, Jordi; Balsa, Alejandro; Sanmartí, Raimon; Lisbona, Pilar; Hernandez-Barrera, Valentín; Jiménez-Garcia, Rodrigo; Carmona, Loreto

    2013-02-01

    To describe the results of different statistical approaches to addressing a radiographic outcome affected by missing data--the multiple imputation (MI) technique, inverse probability weights, and complete case (CC) analysis--using data from an observational study. A random sample of 96 RA patients was selected for a follow-up study in which radiographs of hands and feet were scored. Radiographic progression was tested by comparing the change in the total Sharp-van der Heijde radiographic score (TSS) and the joint erosion score (JES) from baseline to the end of the second year of follow-up. The MI technique, inverse probability weights in a weighted estimating equation (WEE), and CC analysis were used to fit a negative binomial regression. Major predictors of radiographic progression were JES and joint space narrowing (JSN) at baseline, together with baseline disease activity measured by DAS28 for TSS and MTX use for JES. Results from the CC analysis show larger coefficients and standard errors compared with the MI and weighted techniques. The results from the WEE model were quite in line with those of MI. If it seems plausible that CC or MI analysis may be valid, then MI should be preferred because of its greater efficiency. CC analysis resulted in inefficient estimates or, translated into non-statistical terminology, could lead to inaccurate results and unwise conclusions. The methods discussed here will contribute to the use of alternative approaches for tackling missing data in observational studies.

  7. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    PubMed Central

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004

  8. MR findings in seven patients with organic mercury poisoning (Minamata disease).

    PubMed

    Korogi, Y; Takahashi, M; Shinzato, J; Okajima, T

    1994-09-01

    To study the long-term MR findings in seven patients with Minamata disease. All patients examined were affected after eating daily considerable amounts of the methylmercury-contaminated seafoods from 1955 through 1958 and showed typical neurologic findings. T1- and T2-weighted images were obtained in axial, coronal, and sagittal sections. The visual cortex, the cerebellar vermis and hemispheres, and the postcentral cortex were significantly atrophic. The visual cortex was slightly hypointense on T1-weighted images and hyperintense on T2-weighted images, probably representing the pathologic changes of status spongiosus. MR demonstrated the lesions, located in the calcarine area, cerebellum, and postcentral gyri, which are probably related to three of the characteristic manifestations of this disease: the constriction of the visual fields, ataxia, and sensory disturbance, respectively.

  9. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning

    PubMed Central

    McGregor, Heather R.; Mohatarem, Ayman

    2017-01-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634

  10. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.

    PubMed

    Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L

    2017-07-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.

  11. Socioeconomic status and misperception of body mass index among Mexican adults.

    PubMed

    Arantxa Colchero, M; Caro-Vega, Yanink; Kaufer-Horwitz, Martha

    2014-01-01

    To estimate the association between perceived body mass index (BMI) and socioeconomic variables in adults in Mexico. We studied 32,052 adults from the Mexican National Health and Nutrition Survey of 2006. We estimated BMI misperception by comparing the respondent's weight perception (as categories of BMI) with the corresponding category according to measured weight and height. Misperception was defined as the respondent's perception of a BMI category different from their actual category. Socioeconomic status was assessed using household assets. Logistic and multinomial regression models by gender and BMI category were estimated. Adult women and men greatly underestimate their BMI category. We found that the probability of a correct classification was lower than the probability of getting a correct result by chance alone. Better educated and more affluent individuals are more likely to have a correct perception of their weight status, particularly among overweight adults. Given that a correct perception of weight has been associated with increased weight-control efforts, and that our results show that the studied population underestimated their BMI, interventions providing definitions and consequences of overweight and obesity and encouraging the population to monitor their weight could be beneficial.

  12. Comparisons of linear and nonlinear pyramid schemes for signal and image processing

    NASA Astrophysics Data System (ADS)

    Morales, Aldo W.; Ko, Sung-Jea

    1997-04-01

    Linear filter banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also being used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an integer linear programming optimization subject to constraints directly obtained from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears to be promising. Some results will be shown.
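
    As a rough illustration of the sample selection probabilities mentioned above, the sketch below estimates them by Monte Carlo for a weighted-median OS filter on i.i.d. Gaussian inputs; the integer weights and window length are arbitrary assumptions, and with an odd total weight the output always coincides with one of the window samples.

```python
import numpy as np

def sample_selection_probabilities(int_weights, n_trials=100_000, seed=0):
    """Monte Carlo estimate of how often each window position supplies the
    output of a weighted-median OS filter (its sample selection probabilities)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(int_weights))
    for _ in range(n_trials):
        x = rng.standard_normal(len(int_weights))     # one window of i.i.d. samples
        out = np.median(np.repeat(x, int_weights))    # weighted-median output
        counts[np.argmin(np.abs(x - out))] += 1       # position whose value was selected
    return counts / n_trials

print(sample_selection_probabilities([1, 2, 3, 2, 1]))  # total weight 9 (odd)
```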

  13. Probability of success for phase III after exploratory biomarker analysis in phase II.

    PubMed

    Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver

    2017-05-01

    The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risks and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
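
    The quantity described above, power averaged over a prior on the treatment effect, can be sketched as follows for a two-arm phase III with a one-sided z-test; the normal prior centred on the phase II estimate, the sample size, and all numbers are illustrative assumptions rather than the paper's simulation setup.

```python
import numpy as np
from scipy import stats

def probability_of_success(theta_hat, se_phase2, n_per_arm, sd=1.0,
                           alpha=0.025, n_sim=200_000, seed=1):
    """Monte Carlo probability of success: phase III power averaged over a
    normal prior on the true effect derived from the phase II estimate."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(theta_hat, se_phase2, n_sim)   # prior draws of the true effect
    se3 = sd * np.sqrt(2.0 / n_per_arm)               # phase III standard error
    z_alpha = stats.norm.ppf(1 - alpha)
    power = stats.norm.cdf(theta / se3 - z_alpha)     # power of the z-test given theta
    return power.mean()

# hypothetical: phase II effect 0.3 (SE 0.15), 200 patients per arm in phase III
print(probability_of_success(theta_hat=0.3, se_phase2=0.15, n_per_arm=200))
```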

  14. Quantization and training of object detection networks with low-precision weights and activations

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first estimated directly using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by efficient fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal Visual Object Classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
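
    A minimal sketch of distribution-aware uniform quantization in the spirit of the abstract: the clipping range is taken from the empirical weight distribution (standing in for the paper's piecewise Gaussian fit) and the step size follows from the bit width. The coverage fraction and all other parameters are illustrative assumptions.

```python
import numpy as np

def uniform_quantize(weights, n_bits, coverage=0.997):
    """Symmetric uniform quantizer: choose the clipping range from the weight
    distribution, derive the step size, and round to signed integer levels."""
    w = np.asarray(weights, dtype=float)
    clip = np.quantile(np.abs(w), coverage)        # range covering most of the mass
    levels = 2 ** (n_bits - 1) - 1                 # e.g. 7 positive levels for 4 bits
    step = clip / levels                           # quantization step size
    codes = np.clip(np.round(w / step), -levels, levels)
    return codes * step, step                      # dequantized weights and step size

w = np.random.default_rng(0).normal(0.0, 0.05, size=10_000)  # hypothetical conv weights
w4, step4 = uniform_quantize(w, n_bits=4)
w8, step8 = uniform_quantize(w, n_bits=8)
```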

  15. Freezing transition of the random bond RNA model: Statistical properties of the pairing weights

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile; Garel, Thomas

    2007-03-01

    To characterize the pairing specificity of RNA secondary structures as a function of temperature, we analyze the statistics of the pairing weights as follows: for each base (i) of the sequence of length N, we consider the (N-1) pairing weights w_i(j) with the other bases (j ≠ i) of the sequence. We numerically compute the probability distribution P_1(w) of the maximal weight w_i^max = max_j[w_i(j)], the probability distribution Π(Y_2) of the parameter Y_2(i) = Σ_j w_i^2(j), as well as the average values of the moments Y_k(i) = Σ_j w_i^k(j). We find that there are two important temperatures T_c < T_gap. For T > T_gap, the distribution P_1(w) vanishes at some value w_0(T) < 1, and accordingly the averaged moments Y_k(i) decay exponentially as [w_0(T)]^k in k. For T

  16. A Categorification of the Crystal Isomorphism B^{1,1} ⊗ B(Λ_i) ≅ B(Λ_{σ(i)}) and a Graphical Calculus for the Shifted Symmetric Functions

    NASA Astrophysics Data System (ADS)

    Kvinge, Henry

    We prove two results at the intersection of Lie theory and the representation theory of symmetric groups, Hecke algebras, and their generalizations. The first is a categorification of the crystal isomorphism B^{1,1} ⊗ B(Λ_i) ≅ B(Λ_{σ(i)}). Here B(Λ_i) and B(Λ_{σ(i)}) are two affine type highest weight crystals of weight Λ_i and Λ_{σ(i)} respectively, σ is a specific map from the Dynkin indexing set I to itself, and B^{1,1} is a Kirillov-Reshetikhin crystal. We show that this crystal isomorphism is in fact the shadow of a richer module-theoretic phenomenon in the representation theory of Khovanov-Lauda-Rouquier algebras of classical affine type. Our second result identifies the center End_{H'}(1) of Khovanov's Heisenberg category H' as the algebra of shifted symmetric functions Λ* of Okounkov and Olshanski, i.e., End_{H'}(1) ≅ Λ*. This isomorphism provides us with a graphical calculus for Λ*. It also allows us to describe End_{H'}(1) in terms of the transition and co-transition measures of Kerov and the noncommutative probability spaces of Biane.

  17. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941

  18. Low-energy isovector and isoscalar dipole response in neutron-rich nuclei

    NASA Astrophysics Data System (ADS)

    Vretenar, D.; Niu, Y. F.; Paar, N.; Meng, J.

    2012-04-01

    The self-consistent random-phase approximation, based on the framework of relativistic energy density functionals, is employed in the study of isovector and isoscalar dipole response in 68Ni,132Sn, and 208Pb. The evolution of pygmy dipole states (PDSs) in the region of low excitation energies is analyzed as a function of the density dependence of the symmetry energy for a set of relativistic effective interactions. The occurrence of PDSs is predicted in the response to both the isovector and the isoscalar dipole operators, and its strength is enhanced with the increase in the symmetry energy at saturation and the slope of the symmetry energy. In both channels, the PDS exhausts a relatively small fraction of the energy-weighted sum rule but a much larger percentage of the inverse energy-weighted sum rule. For the isovector dipole operator, the reduced transition probability B(E1) of the PDSs is generally small because of pronounced cancellation of neutron and proton partial contributions. The isoscalar-reduced transition amplitude is predominantly determined by neutron particle-hole configurations, most of which add coherently, and this results in a collective response of the PDSs to the isoscalar dipole operator.

  19. Excursion Processes Associated with Elliptic Combinatorics

    NASA Astrophysics Data System (ADS)

    Baba, Hiroya; Katori, Makoto

    2018-06-01

    Researching elliptic analogues for equalities and formulas is a new trend in enumerative combinatorics which has followed the previous trend of studying q-analogues. Recently Schlosser proposed a lattice path model in the square lattice with a family of totally elliptic weight-functions including several complex parameters and discussed an elliptic extension of the binomial theorem. In the present paper, we introduce a family of discrete-time excursion processes on Z starting from the origin and returning to the origin in a given time duration 2 T associated with Schlosser's elliptic combinatorics. The processes are inhomogeneous both in space and time and hence expected to provide new models in non-equilibrium statistical mechanics. By numerical calculation we show that the maximum likelihood trajectories on the spatio-temporal plane of the elliptic excursion processes and of their reduced trigonometric versions are not straight lines in general but are nontrivially curved depending on parameters. We analyze asymptotic probability laws in the long-term limit T → ∞ for a simplified trigonometric version of excursion process. Emergence of nontrivial curves of trajectories in a large scale of space and time from the elementary elliptic weight-functions exhibits a new aspect of elliptic combinatorics.

  20. Excursion Processes Associated with Elliptic Combinatorics

    NASA Astrophysics Data System (ADS)

    Baba, Hiroya; Katori, Makoto

    2018-04-01

    Researching elliptic analogues for equalities and formulas is a new trend in enumerative combinatorics which has followed the previous trend of studying q-analogues. Recently Schlosser proposed a lattice path model in the square lattice with a family of totally elliptic weight-functions including several complex parameters and discussed an elliptic extension of the binomial theorem. In the present paper, we introduce a family of discrete-time excursion processes on Z starting from the origin and returning to the origin in a given time duration 2T associated with Schlosser's elliptic combinatorics. The processes are inhomogeneous both in space and time and hence expected to provide new models in non-equilibrium statistical mechanics. By numerical calculation we show that the maximum likelihood trajectories on the spatio-temporal plane of the elliptic excursion processes and of their reduced trigonometric versions are not straight lines in general but are nontrivially curved depending on parameters. We analyze asymptotic probability laws in the long-term limit T → ∞ for a simplified trigonometric version of excursion process. Emergence of nontrivial curves of trajectories in a large scale of space and time from the elementary elliptic weight-functions exhibits a new aspect of elliptic combinatorics.

  1. Efficacy and safety assessment of isolated ultrafiltration compared to intravenous diuretics for acutely decompensated heart failure: a systematic review with meta-analysis.

    PubMed

    De Vecchis, R; Esposito, C; Ariano, C

    2014-04-01

    Intravenous diuretics at relatively high doses are currently used for treating acute decompensated heart failure (ADHF). However, the existence of harmful diuretic-related side effects, such as electrolyte abnormalities, symptomatic hypotension and marked neuro-hormonal activation, has led researchers to implement alternative therapeutic tools such as isolated ultrafiltration (IUF). Our study aimed to compare intravenous diuretics vs. IUF as regards their respective efficacy and safety in ADHF patients through a systematic review and meta-analysis of data derived from relevant randomized controlled trials. Six studies grouping a total of 477 patients were included in the systematic review. By contrast, data from only three studies were pooled for the meta-analysis, because of differences in the adopted outcomes or marked dissimilarities in the data presentation. Weight loss at 48 h was greater in the IUF group compared to the diuretics group [weighted mean difference (WMD) = 1.77 kg; 95% CI: 1.18-2.36 kg; P<0.001]. Similarly, greater fluid loss at 48 h was found in the IUF group in comparison with the diuretics group (WMD = 1.2 liters; 95% CI: 0.73-1.67 liters; P<0.001). In contrast, the probability of exhibiting worsening renal function (WRF), i.e., an increase in serum creatinine > 0.3 mg/dl at 48 hours, was similar in the IUF group to the one found in the diuretics group (OR = 1.33; 95% CI: 0.81-2.16; P = 0.26). On the basis of this meta-analysis, IUF induced greater weight loss and larger fluid removal compared to iv diuretics in ADHF patients, whereas the probability of developing WRF was not significantly different in the comparison between iv diuretics and IUF.

  2. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
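
    The per-voxel combination described above can be caricatured as a product of the two conditional densities over a grid of candidate values; the Gaussian shapes, the HU grid, and the posterior-mean read-out below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

hu_grid = np.linspace(-1000, 2000, 301)               # candidate HU / electron-density values

# hypothetical per-voxel conditionals (in practice learned from atlases and registration)
p_given_intensity = np.exp(-0.5 * ((hu_grid - 40.0) / 80.0) ** 2)    # from T1/T2 intensities
p_given_location = np.exp(-0.5 * ((hu_grid - 200.0) / 400.0) ** 2)   # from atlas location

posterior = p_given_intensity * p_given_location      # combined posterior (up to a constant)
posterior /= posterior.sum()

hu_estimate = np.sum(hu_grid * posterior)             # posterior-mean estimate for this voxel
```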

  3. Design and operation of the national home health aide survey: 2007-2008.

    PubMed

    Bercovitz, Anita; Moss, Abigail J; Sengupta, Manisha; Harris-Kojetin, Lauren D; Squillace, Marie R; Emily, Rosenoff; Branden, Laura

    2010-03-01

    This report provides an overview of the National Home Health Aide Survey (NHHAS), the first national probability survey of home health aides. NHHAS was designed to provide national estimates of home health aides who provided assistance in activities of daily living (ADLs) and were directly employed by agencies that provide home health and/or hospice care. This report discusses the need for and objectives of the survey, the design process, the survey methods, and data availability. METHODS NHHAS, a multistage probability sample survey, was conducted as a supplement to the 2007 National Home and Hospice Care Survey (NHHCS). Agencies providing home health and/or hospice care were sampled, and then aides employed by these agencies were sampled and interviewed by telephone. Survey topics included recruitment, training, job history, family life, client relations, work-related injuries, and demographics. NHHAS was virtually identical to the 2004 National Nursing Assistant Survey of certified nursing assistants employed in sampled nursing homes with minor changes to account for differences in workplace environment and responsibilities. RESULTS From September 2007 to April 2008, interviews were completed with 3,416 aides. A public-use data file that contains the interview responses, sampling weights, and design variables is available. The NHHAS overall response rate weighted by the inverse of the probability of selection was 41 percent. This rate is the product of the weighted first-stage agency response rate of 57 percent (i.e., weighted response rate of 59 percent for agency participation in NHHCS times the weighted response rate of 97 percent for agencies participating in NHHCS that also participated in NHHAS) and the weighted second-stage aide response rate of 72 percent to NHHAS.

  4. Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials

    PubMed Central

    Corteel, Sylvie; Williams, Lauren K.

    2010-01-01

    We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities α and γ, and they may exit and enter at the right with probabilities β and δ. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials. PMID:20348417

  5. Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials.

    PubMed

    Corteel, Sylvie; Williams, Lauren K

    2010-04-13

    We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities alpha and gamma, and they may exit and enter at the right with probabilities beta and delta. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials.

  6. Average weighted receiving time on the non-homogeneous double-weighted fractal networks

    NASA Astrophysics Data System (ADS)

    Ye, Dandan; Dai, Meifeng; Sun, Yu; Su, Weiyi

    2017-05-01

    In this paper, based on actual road networks, a model of non-homogeneous double-weighted fractal networks is introduced, depending on the number of copies s and two kinds of weight factors w_i, r_i (i = 1, 2, …, s). The double weights represent the capacity-flowing weights and the cost-traveling weights, respectively. Denote by w_{ij}^F the capacity-flowing weight connecting the nodes i and j, and denote by w_{ij}^C the cost-traveling weight connecting the nodes i and j. Let w_{ij}^F be related to the weight factors w_1, w_2, …, w_s, and let w_{ij}^C be related to the weight factors r_1, r_2, …, r_s. We assume that the walker, at each step, starting from its current node, moves to any of its neighbors with probability proportional to the capacity-flowing weight of the edge linking them. The weighted time for two adjacent nodes is the cost-traveling weight connecting the two nodes. The average weighted receiving time (AWRT) is defined on the non-homogeneous double-weighted fractal networks. The AWRT depends on the relationship between the number of copies s and the two kinds of weight factors w_i, r_i (i = 1, 2, …, s). The obtained results show that in a large network the AWRT grows as a power-law function of the network size N_g with exponent θ = log_s(w_1 r_1 + w_2 r_2 + ⋯ + w_s r_s) < 1 when w_1 r_1 + w_2 r_2 + ⋯ + w_s r_s ≠ 1, which means that the smaller the value of w_1 r_1 + w_2 r_2 + ⋯ + w_s r_s is, the more efficient the process of receiving information is. Especially, when w_1 r_1 + w_2 r_2 + ⋯ + w_s r_s = 1, the AWRT grows with increasing order N_g as log N_g or (log N_g)^2. In the classic fractal networks, the average receiving time (ART) grows linearly with the network size N_g. Thus, the non-homogeneous double-weighted fractal networks are more efficient than classic fractal networks in terms of receiving information.

  7. Probabilistic combination of static and dynamic gait features for verification

    NASA Astrophysics Data System (ADS)

    Bazin, Alex I.; Nixon, Mark S.

    2005-03-01

    This paper describes a novel probabilistic framework for biometric identification and data fusion. Based on intra and inter-class variation extracted from training data, posterior probabilities describing the similarity between two feature vectors may be directly calculated from the data using the logistic function and Bayes rule. Using a large publicly available database we show the two imbalanced gait modalities may be fused using this framework. All fusion methods tested provide an improvement over the best modality, with the weighted sum rule giving the best performance, hence showing that highly imbalanced classifiers may be fused in a probabilistic setting; improving not only the performance, but also generalized application capability.
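
    A toy version of the fusion step described above: each modality's match score is first mapped to a posterior with a logistic function, and the two posteriors are then combined with a weighted sum. The logistic coefficients and the fusion weight are illustrative assumptions.

```python
import numpy as np

def logistic_posterior(score, a, b):
    """Map a raw similarity score to a posterior probability of a true match."""
    return 1.0 / (1.0 + np.exp(-(a * score + b)))

def weighted_sum_fusion(p_static, p_dynamic, w=0.6):
    """Weighted-sum rule over the two per-modality posteriors."""
    return w * p_static + (1.0 - w) * p_dynamic

# hypothetical scores for three probe/gallery comparisons
p_static = logistic_posterior(np.array([2.1, -0.3, -1.8]), a=1.5, b=0.0)
p_dynamic = logistic_posterior(np.array([1.2, 0.4, -0.9]), a=1.0, b=-0.2)
print(weighted_sum_fusion(p_static, p_dynamic))
```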

  8. Space-based sensor management and geostationary satellites tracking

    NASA Astrophysics Data System (ADS)

    El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.

    2007-04-01

    Sensor management for space situational awareness presents a daunting theoretical and practical challenge as it requires the use of multiple types of sensors on a variety of platforms to ensure that the space environment is continuously monitored. We demonstrate a new approach utilizing the Posterior Expected Number of Targets (PENT) as the sensor management objective function, an observation model for a space-based EO/IR sensor platform, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation and results using actual Geostationary Satellites are presented. We also demonstrate enhanced performance by applying the Progressive Weighting Correction (PWC) method for regularization in the implementation of the PHD-PF tracker.

  9. Neural coding using telegraphic switching of magnetic tunnel junction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suh, Dong Ik; Bae, Gi Yoon; Oh, Heong Sik

    2015-05-07

    In this work, we present a synaptic transmission representing neural coding with spike trains by using a magnetic tunnel junction (MTJ). Telegraphic switching generates an artificial neural signal with both the applied magnetic field and the spin-transfer torque that act as conflicting inputs for modulating the number of spikes in spike trains. The spiking probability is observed to be weighted with modulation between 27.6% and 99.8% by varying the amplitude of the voltage input or the external magnetic field. With a combination of the reverse coding scheme and the synaptic characteristic of the MTJ, an artificial function for the synaptic transmission is achieved.

  10. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design, i.e., how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sampling weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
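
    As a rough illustration of how the sampling weights enter an estimate, the sketch below computes a design-weighted proportion of occupied sample units while ignoring imperfect detection; the occupancy models in the abstract additionally model detection probability, so this is only a simplified stand-in with made-up numbers.

```python
import numpy as np

def weighted_occupancy_proportion(z, w):
    """Design-weighted proportion of occupied units: z[i] = 1 if the species was
    detected at unit i, w[i] = number of frame units that sampled unit i represents."""
    z = np.asarray(z, dtype=float)
    w = np.asarray(w, dtype=float)
    return np.sum(w * z) / np.sum(w)

# hypothetical survey: 6 units selected under an unequal-probability design
print(weighted_occupancy_proportion(z=[1, 0, 1, 1, 0, 0], w=[10, 10, 2, 2, 40, 40]))
```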

  11. The Impact of Breastfeeding on Early Childhood Obesity: Evidence From the National Survey of Children's Health.

    PubMed

    Hansstein, Francesca V

    2016-03-01

    To investigate how breastfeeding initiation and duration affect the likelihood of being overweight and obese in children aged 2 to 5. Cross-sectional data from the 2003 National Survey of Children's Health. Rural and urban areas of the United States. Households where at least one member was between the ages of 2 and 5 (sample size 8207). Parent-reported body mass index, breastfeeding initiation and duration, covariates (gender, family income and education, ethnicity, child care attendance, maternal health and physical activity, residential area). Partial proportional odds models. In early childhood, breastfed children had 5.3% higher probability of being normal weight (p = .002) and 8.9% (p < .001) lower probability of being obese compared to children who had never been breastfed. Children who had been breastfed for less than 3 months had 3.1% lower probability of being normal weight (p = .013) and 4.7% higher probability of being obese (p = .013) with respect to children who had been breastfed for 3 months and above. Study findings suggest that length of breastfeeding, whether exclusive or not, may be associated with lower risk of obesity in early childhood. However, caution is needed in generalizing results because of the limitations of the analysis. Based on findings from this study and others, breastfeeding promotion policies can cite the potential protective effect that breastfeeding has on weight in early childhood. © The Author(s) 2016.

  12. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion

    PubMed Central

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-01-01

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278

  13. An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.

    PubMed

    Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng

    2015-08-31

    The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, nowadays the accuracy of WiFi indoor positioning cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: the offline acquisition and the online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation and the improved joint probability introduces the logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm, the Euclidean distance based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy.
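
    A schematic version of the two ideas highlighted in the abstract, under assumed parameter values: a Euclidean distance normalized by the per-access-point standard deviation of the received signal strength, and a weighted fusion of the two intermediate position fixes.

```python
import numpy as np

def normalized_euclidean_distance(rss_observed, rss_fingerprint, rss_std):
    """Euclidean distance with each access point scaled by its RSS standard
    deviation, to damp the effect of signal fluctuation."""
    d = (np.asarray(rss_observed) - np.asarray(rss_fingerprint)) / np.asarray(rss_std)
    return np.sqrt(np.sum(d ** 2))

def fuse_positions(pos_euclidean, pos_joint_prob, w=0.55):
    """Weighted fusion of the Euclidean-distance fix and the joint-probability fix."""
    return w * np.asarray(pos_euclidean) + (1.0 - w) * np.asarray(pos_joint_prob)

# hypothetical fixes (metres) produced by the two intermediate estimators
print(fuse_positions([3.2, 7.5], [3.6, 7.1]))
```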

  14. The effect of Hymenolepis diminuta (Cestoda) cysticercoids on the weight change, frass production, and food intake of the intermediate host, Tenebrio molitor (Coleoptera).

    PubMed

    Shea, John F

    2005-12-01

    Parasitism results in nutritionally related changes in hosts, often leading to altered feeding behavior. Infected hosts that increase their feeding also increase their probability of reinfection. To study this, I used a beetle (Tenebrio molitor)-tapeworm (Hymenolepis diminuta) system. Infected and uninfected male and female beetles were individually housed in vials with food. Each beetle's weight change, food intake, and frass production were measured over 24-h periods at 3, 7, 12, and 16 days postinfection. Treatment (infection) had no effect on weight change, but males lost more weight and produced more frass than females. Additionally, treatment had no effect on food consumption, but males had a higher food intake than females. These results suggest that infection status will not alter the probability of reinfection, but males will be more susceptible to infection than females. However, despite the male's greater food intake during the experimental infection period, parasite loads did not differ between males and females.

  15. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    NASA Astrophysics Data System (ADS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  16. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao; Huang, Xuhui, E-mail: xuhuihuang@ust.hk

    2016-04-21

    Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
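
    For context, the sketch below evaluates the standard Metropolis acceptance probability for a simulated-tempering temperature move at a fixed configuration, with per-temperature weights g that could be either free-energy weights or EPSW-optimized weights; the numerical values are illustrative assumptions.

```python
import numpy as np

def st_acceptance(U, beta_current, beta_proposed, g_current, g_proposed):
    """Acceptance probability for switching inverse temperature at fixed
    configuration with potential energy U, given per-temperature weights g."""
    log_alpha = -(beta_proposed - beta_current) * U + (g_proposed - g_current)
    return min(1.0, float(np.exp(log_alpha)))

# hypothetical move from beta = 1.00 to beta = 0.95 at U = -120 (arbitrary units)
print(st_acceptance(U=-120.0, beta_current=1.00, beta_proposed=0.95,
                    g_current=0.0, g_proposed=-5.8))
```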

  17. Study on Failure of Third-Party Damage for Urban Gas Pipeline Based on Fuzzy Comprehensive Evaluation

    PubMed Central

    Li, Jun; Zhang, Hong; Han, Yinshan; Wang, Baodong

    2016-01-01

    Focusing on the diversity, complexity and uncertainty of third-party damage accidents, the failure probability of third-party damage to urban gas pipelines was evaluated using the theory of the analytic hierarchy process and fuzzy mathematics. The fault tree of third-party damage containing 56 basic events was built by hazard identification of third-party damage. The fuzzy evaluation of basic event probabilities was conducted by the expert judgment method and using membership functions of fuzzy sets. The determination of the weight of each expert and the modification of the evaluation opinions were accomplished using the improved analytic hierarchy process, and the failure probability of third-party damage to the urban gas pipeline was calculated. Taking the gas pipelines of a certain large provincial capital city as an example, the risk assessment structure of the method was proved to conform to the actual situation, which provides a basis for safety risk prevention. PMID:27875545

  18. Finding exact constants in a Markov model of Zipf's law generation

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu.; Nikiforov, A. A.; Pismenskiy, A. A.

    2017-12-01

    According to the classical Zipf's law, the word frequency is a power function of the word rank with an exponent of -1. The objective of this work is to find the multiplicative constant in a Markov model of word generation. Previously, the case of independent letters was mathematically rigorously investigated in [Bochkarev V V and Lerner E Yu 2017 International Journal of Mathematics and Mathematical Sciences Article ID 914374]. Unfortunately, the methods used in that paper cannot be generalized to the case of Markov chains. The search for the correct formulation of the Markov generalization of these results was performed using experiments with different ergodic matrices of transition probability P. A combinatorial technique allowed taking into account all the words with probability greater than e^{-300} in the case of 2 by 2 matrices. It was experimentally proved that the required constant in the limit is equal to the value reciprocal to the conditional entropy of the rows of matrix P with weights given by the elements of the vector π of the stationary distribution of the Markov chain.
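
    The constant described above can be computed directly for a small example: find the stationary distribution π of P and take the reciprocal of the row entropy weighted by π. The 2-by-2 matrix below is an arbitrary ergodic example, and a natural-log entropy is assumed.

```python
import numpy as np

def zipf_markov_constant(P):
    """Reciprocal of the conditional entropy of the transition matrix P with
    row weights given by the stationary distribution pi."""
    P = np.asarray(P, dtype=float)
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()                                        # stationary distribution
    logP = np.log(P, where=P > 0, out=np.zeros_like(P))       # treat 0 * log 0 as 0
    H = -np.sum(pi[:, None] * P * logP)                       # conditional entropy (nats)
    return 1.0 / H

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])                                    # example ergodic 2x2 chain
print(zipf_markov_constant(P))
```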

  19. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  20. IGF-I and relation to growth in infancy and early childhood in very-low-birth-weight infants and term born infants.

    PubMed

    de Jong, Miranda; Cranendonk, Anneke; Twisk, Jos W R; van Weissenbruch, Mirjam M

    2017-01-01

    In very-low-birth-weight infants IGF-I plays an important role in postnatal growth restriction and is probably also involved in growth restriction in childhood. We compared IGF-I and its relation to growth in early childhood in very-low-birth-weight infants and term appropriate for gestational age born infants. We included 41 very-low-birth-weight and 64 term infants. Anthropometry was performed at all visits to the outpatient clinic. IGF-I and insulin were measured in blood samples taken at 6 months and 2 years corrected age (very-low-birth-weight children) and at 3 months, 1 and 2 years (term children). Over the first 2 years of life growth parameters are lower in very-low-birth-weight children compared to term children, but the difference in length decreases significantly. During the first 2 years of life IGF-I is higher in very-low-birth-weight children compared to term children. In both groups there is a significant relationship between IGF-I and (change in) length and weight over the first 2 years of life and between insulin and change in total body fat. Considering the relation of IGF-I to growth and the decrease in difference in length, higher IGF-I levels in very-low-birth-weight infants in early childhood probably have an important role in catch-up growth in length.

  1. [Natural evolution of excess body weight (overweight and obesity) in children].

    PubMed

    Durá Travé, T; Gallinas Victoriano, F

    2013-11-01

    To analyze the chronological evolution of excess body weight (overweight and obesity) in order to raise public awareness within the different areas of intervention (family, school, business environment, health services) and to take effective actions. Weight, height and body mass index (BMI) of 604 healthy subjects (307 males and 297 females) were recorded at birth and at the ages of 1, 2, 3, 4, 6, 8, 10, 12 and 14 years. Excess body weight was calculated according to the national references from Ferrández et al. The prevalence of excess body weight at age 14 years was significantly higher (P<.05) in males (29%) than in females (12.8%). BMI (kg/m2) was significantly higher (P<.05) for both sexes in every age period, except at birth and age 1 year, in those patients with excess body weight at age 14, with respect to patients with normal nutritional status of the same age. Those groups with excess body weight at age 14 showed a BMI (Z-score) reaching overweight or obesity levels at age 4, which then progressively increased. Excess body weight probably starts at early stages in life, when the dietary habits of the child depend almost exclusively on family habits, and may be aggravated during school attendance. Finally, a disproportionate weight increase occurs in adolescence, probably related to unhealthy dietary habits and lifestyle. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier España. All rights reserved.

  2. IGF-I and relation to growth in infancy and early childhood in very-low-birth-weight infants and term born infants

    PubMed Central

    de Jong, Miranda; Cranendonk, Anneke; Twisk, Jos W. R.; van Weissenbruch, Mirjam M.

    2017-01-01

    Background In very-low-birth-weight infants IGF-I plays an important role in postnatal growth restriction and is probably also involved in growth restriction in childhood. We compared IGF-I and its relation to growth in early childhood in very-low-birth-weight infants and term appropriate for gestational age born infants. Methods We included 41 very-low-birth-weight and 64 term infants. Anthropometry was performed at all visits to the outpatient clinic. IGF-I and insulin were measured in blood samples taken at 6 months and 2 years corrected age (very-low-birth-weight children) and at 3 months, 1 and 2 years (term children). Results Over the first 2 years of life growth parameters are lower in very-low-birth-weight children compared to term children, but the difference in length decreases significantly. During the first 2 years of life IGF-I is higher in very-low-birth-weight children compared to term children. In both groups there is a significant relationship between IGF-I and (change in) length and weight over the first 2 years of life and between insulin and change in total body fat. Conclusions Considering the relation of IGF-I to growth and the decrease in difference in length, higher IGF-I levels in very-low-birth-weight infants in early childhood probably have an important role in catch-up growth in length. PMID:28182752

  3. Resting bold fMRI differentiates dementia with Lewy bodies vs Alzheimer disease

    PubMed Central

    Price, J.L.; Yan, Z.; Morris, J.C.; Sheline, Y.I.

    2011-01-01

    Objective: Clinicopathologic phenotypes of dementia with Lewy bodies (DLB) and Alzheimer disease (AD) often overlap, making discrimination difficult. We performed resting state blood oxygen level–dependent (BOLD) functional connectivity MRI (fcMRI) to determine whether there were differences between AD and DLB. Methods: Participants (n = 88) enrolled in a longitudinal study of memory and aging underwent 3-T fcMRI. Clinical diagnoses of probable DLB (n = 15) were made according to published criteria. Cognitively normal control participants (n = 38) were selected for the absence of cerebral amyloid burden as imaged with Pittsburgh compound B (PiB). Probable AD cases (n = 35) met published criteria and had appreciable amyloid deposits with PiB imaging. Functional images were collected using a gradient spin-echo sequence sensitive to BOLD contrast (T2* weighting). Correlation maps selected a seed region in the combined bilateral precuneus. Results: Participants with DLB had a functional connectivity pattern for the precuneus seed region that was distinct from AD; both the DLB and AD groups had functional connectivity patterns that differed from the cognitively normal group. In the DLB group, we found increased connectivity between the precuneus and regions in the dorsal attention network and the putamen. In contrast, we found decreased connectivity between the precuneus and other task-negative default regions and visual cortices. There was also a reversal of connectivity in the right hippocampus. Conclusions: Changes in functional connectivity in DLB indicate patterns of activation that are distinct from those seen in AD and may improve discrimination of DLB from AD and cognitively normal individuals. Since patterns of connectivity differ between AD and DLB groups, measurements of BOLD functional connectivity can shed further light on neuroanatomic connections that distinguish DLB from AD. PMID:21525427

  4. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  5. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets

    PubMed Central

    Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.

    2014-01-01

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152
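
    The logistic-regression baseline that the ensemble learners are compared against can be sketched as follows for a point treatment: fit P(A = 1 | X), then form stabilized weights with the marginal treatment probability in the numerator. The simulated data and model settings are illustrative assumptions, and the time-varying setting used for the cohort analyses would repeat this at every time point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_ip_weights(A, X):
    """Stabilized inverse-probability-of-treatment weights with a logistic
    regression denominator model and a marginal numerator."""
    A = np.asarray(A)
    p_denom = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    p_num = A.mean()                                    # marginal P(A = 1)
    return np.where(A == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                           # hypothetical baseline covariates
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))     # treatment depends on X[:, 0]
sw = stabilized_ip_weights(A, X)                        # use as weights when fitting the MSM
```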

  7. Quasar probabilities and redshifts from WISE mid-IR through GALEX UV photometry

    NASA Astrophysics Data System (ADS)

    DiPompeo, M. A.; Bovy, J.; Myers, A. D.; Lang, D.

    2015-09-01

    Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colours are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced photometry on public WISE data at the locations of Sloan Digital Sky Survey (SDSS) point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at z > 2. The use of SDSS+WISE photometry is comparable to the use of SDSS+ultraviolet+near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability PQSO > 0.2. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and efficient than simple WISE colour-cuts, especially at high redshift. Our fits verify that above z ˜ 3 WISE colours become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find highly obscured quasars that WISE colour-cuts identify in significant numbers.

  8. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

    Deriving a unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In most cases, the hourly streamflow measurements needed to derive a unit hydrograph are not available. Hence, one needs to develop methods for deriving unit hydrographs for ungauged watersheds. Methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics. These are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined by using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
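    The following sketch illustrates the basic building block described above: a gamma probability density function, translated and scaled, used as the shape of a synthetic unit hydrograph. The parameter values are invented for illustration; in the paper their analogues are tuned by Particle Swarm Optimization against observed hydrographs.

```python
# Illustrative sketch only: a gamma pdf as a synthetic unit hydrograph, adjusted
# by translation (lag) and scaling (amplitude). All parameter values are made up.
import numpy as np
from scipy.stats import gamma

def synthetic_unit_hydrograph(t_hours, shape, scale, lag, amplitude):
    """Ordinates of a gamma-shaped unit hydrograph; zero before the lag time."""
    return amplitude * gamma.pdf(t_hours - lag, a=shape, scale=scale)

t = np.arange(0.0, 48.0, 1.0)   # hours after the effective rainfall pulse
uh = synthetic_unit_hydrograph(t, shape=3.5, scale=2.0, lag=1.0, amplitude=120.0)
```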

  9. Brain networks for confidence weighting and hierarchical inference during probabilistic learning.

    PubMed

    Meyniel, Florent; Dehaene, Stanislas

    2017-05-09

    Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This "confidence weighting" implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain's learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences.
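    As a toy illustration of the confidence-weighting principle stated above (not the authors' hierarchical model), the sketch below combines a prior estimate and a new observation in proportion to their precisions, i.e., their reliabilities:

```python
# Toy precision-weighted update: prior knowledge and incoming evidence are
# combined according to their respective reliabilities (precision = 1/variance).
def confidence_weighted_update(mu_prior, prec_prior, observation, prec_obs):
    prec_post = prec_prior + prec_obs
    mu_post = (prec_prior * mu_prior + prec_obs * observation) / prec_post
    return mu_post, prec_post

# A confident prior (precision 10) is barely moved by a single noisy observation.
print(confidence_weighted_update(0.7, 10.0, 0.2, 1.0))   # -> (~0.6545, 11.0)
```

    A delta rule with a constant learning rate corresponds to ignoring these precisions and always giving the new observation the same fixed weight.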

  10. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on survival or time-to-event outcomes.

    PubMed

    Austin, Peter C

    2018-01-01

    Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g. pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Amongst the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
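    A minimal sketch of one of the GPS-based approaches evaluated above, weighting by the inverse of the generalized propensity score, is given below. It assumes a normal model for the exposure given covariates and uses stabilized weights; the variable names and modeling choices are illustrative, not the author's exact specification.

```python
# Hedged sketch: stabilized weights from a generalized propensity score (GPS)
# for a continuous exposure, assuming a normal exposure-given-covariates model.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

def gps_stabilized_weights(exposure, covariates):
    model = LinearRegression().fit(covariates, exposure)
    mu = model.predict(covariates)
    sigma = (exposure - mu).std(ddof=1)
    dens_conditional = norm.pdf(exposure, loc=mu, scale=sigma)       # the GPS
    dens_marginal = norm.pdf(exposure, loc=exposure.mean(),
                             scale=exposure.std(ddof=1))             # stabilizer
    return dens_marginal / dens_conditional

# The resulting weights would then be passed to a weighted survival model
# (e.g. a weighted Cox regression) to trace out the dose-response function.
```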

  11. A Note on the Validity and Reliability of Multi-Criteria Decision Analysis for the Benefit-Risk Assessment of Medicines.

    PubMed

    Garcia-Hernandez, Alberto

    2015-11-01

    The comparative evaluation of benefits and risks is one of the most important tasks during the development, market authorization and post-approval pharmacovigilance of medicinal products. Multi-criteria decision analysis (MCDA) has been recommended to support decision making in the benefit-risk assessment (BRA) of medicines. This paper identifies challenges associated with bias or variability that practitioners may encounter in this field and presents solutions to overcome them. The inclusion of overlapping or preference-complementary criteria, which are frequent violations to the assumptions of this model, should be avoided. For each criterion, a value function translates the original outcomes into preference-related scores. Applying non-linear value functions to criteria defined as the risk of suffering a certain event during the study introduces specific risk behaviours in this prescriptive, rather than descriptive, model and is therefore a questionable practice. MCDA uses weights to compare the importance of the model criteria with each other; during their elicitation a frequent situation where (generally favourable) mild effects are directly traded off against low probabilities of suffering (generally unfavourable) severe effects during the study is known to lead to biased and variable weights and ought to be prevented. The way the outcomes are framed during the elicitation process, positively versus negatively for instance, may also lead to differences in the preference weights, warranting an appropriate justification during each implementation. Finally, extending the weighted-sum MCDA model into a fully inferential tool through a probabilistic sensitivity analysis is desirable. However, this task is troublesome and should not ignore that clinical trial endpoints generally are positively correlated.

  12. Prospect theory reflects selective allocation of attention.

    PubMed

    Pachur, Thorsten; Schulte-Mecklenbeck, Michael; Murphy, Ryan O; Hertwig, Ralph

    2018-02-01

    There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
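    For readers unfamiliar with the weighting function mentioned above, a commonly used one-parameter form in CPT work is Tversky and Kahneman's 1992 specification, shown here as a generic illustration rather than as this study's exact parametrization:

```python
# w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma); smaller gamma gives a more
# strongly curved inverse-S shape (overweighting small p, underweighting large p).
import numpy as np

def tk_weighting(p, gamma):
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

print(tk_weighting([0.01, 0.50, 0.99], gamma=0.61))   # ≈ [0.06, 0.42, 0.91]
```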

  13. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
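    The core of the weighted maximum likelihood idea described above is that each observation's log-likelihood contribution is multiplied by the inverse of its selection probability. The sketch below fits a lognormal distribution this way; the model choice, the data, and the selection probabilities are all illustrative placeholders, not the study's food-safety data.

```python
# Illustrative weighted (pseudo-)maximum likelihood fit under unequal-probability
# sampling: contributions are weighted by 1 / (selection probability).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def weighted_neg_loglik(params, x, selection_prob):
    mu, log_sigma = params
    w = 1.0 / selection_prob
    return -np.sum(w * lognorm.logpdf(x, s=np.exp(log_sigma), scale=np.exp(mu)))

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # fake concentration data
p_sel = np.clip(x / x.max(), 0.05, 1.0)            # higher values sampled more often
fit = minimize(weighted_neg_loglik, x0=[0.0, 0.0], args=(x, p_sel))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])     # ignoring the weights would bias these
```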

  14. Projected Hg dietary exposure of 3 bird species nesting on a contaminated floodplain (South River, Virginia, USA).

    PubMed

    Wang, Jincheng; Newman, Michael C

    2013-04-01

    Dietary Hg exposure was modeled for Carolina wren (Thryothorus ludovicianus), Eastern song sparrow (Melospiza melodia), and Eastern screech owl (Otus asio) nesting on the contaminated South River floodplain (Virginia, USA). Parameterization of Monte-Carlo models required formal expert elicitation to define bird body weight and feeding ecology characteristics because specific information was either unavailable in the published literature or too difficult to collect reliably by field survey. Mercury concentrations and weights for candidate food items were obtained directly by field survey. Simulations predicted the probability that an adult bird during breeding season would ingest specific amounts of Hg during daily foraging and the probability that the average Hg ingestion rate for the breeding season of an adult bird would exceed published rates reported to cause harm to other birds (>100 ng total Hg/g body weight per day). Despite the extensive floodplain contamination, the probabilities that these species' average ingestion rates exceeded the threshold value were all <0.01. Sensitivity analysis indicated that overall food ingestion rate was the most important factor determining projected Hg ingestion rates. Expert elicitation was useful in providing sufficiently reliable information for Monte-Carlo simulation. Copyright © 2013 SETAC.
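    A schematic version of the Monte Carlo exposure calculation described above is sketched below: daily Hg ingestion per gram of body weight is simulated from distributions of food intake, prey Hg concentration and body weight, and compared with the 100 ng/g body weight per day benchmark. All distribution choices and parameter values are invented placeholders, not the elicited or field-derived inputs of the study.

```python
# Schematic Monte Carlo exposure simulation; every distribution below is made up.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
body_weight_g = rng.normal(21.0, 2.0, n)                   # adult bird body weight (g)
intake_g_day = rng.lognormal(np.log(6.0), 0.3, n)          # daily food ingestion (g/day)
prey_hg_ng_g = rng.lognormal(np.log(50.0), 0.8, n)         # total Hg in prey items (ng/g)

hg_rate = intake_g_day * prey_hg_ng_g / body_weight_g      # ng Hg / g body weight / day
print("P(exceeding 100 ng/g bw/day) ≈", np.mean(hg_rate > 100.0))
```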

  15. Maximizing Information Diffusion in the Cyber-physical Integrated Network †

    PubMed Central

    Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan

    2015-01-01

    Nowadays, our living environment is embedded with smart objects, such as smart sensors, smart watches and smart phones. Their abundant sensing, communication and computation capabilities integrate cyberspace and physical space, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects is selected as forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed algorithm for maximizing the probability of information diffusion (DMPID) in the cyber-physical integrated network. Unlike previous approaches that only consider the size of the CDS selection, DMPID also considers the information spread probability, which depends on the weight of links. To weaken the effects of excessively weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable selection size with low overhead in different distributed networks. PMID:26569254

  16. Deficits of spatial and task-related attentional selection in mild cognitive impairment and Alzheimer's disease.

    PubMed

    Redel, P; Bublak, P; Sorg, C; Kurz, A; Förstl, H; Müller, H J; Schneider, W X; Perneczky, R; Finke, K

    2012-01-01

    Visual selective attention was assessed with a partial-report task in patients with probable Alzheimer's disease (AD), amnestic mild cognitive impairment (MCI), and healthy elderly controls. Based on Bundesen's "theory of visual attention" (TVA), two parameters were derived: top-down control of attentional selection, representing task-related attentional weighting for prioritizing relevant visual objects, and spatial distribution of attentional weights across the left and the right hemifield. Compared with controls, MCI patients showed significantly reduced top-down controlled selection, which deteriorated further in AD subjects. Moreover, attentional weighting was significantly unbalanced across hemifields in MCI and tended to be more lateralized in AD. Across MCI and AD patients, carriers of the apolipoprotein E ε4 allele (ApoE4) displayed a leftward spatial bias, which was more pronounced the younger the ApoE4-positive patients and the earlier the disease onset. These results indicate that impaired top-down control may be linked to early dysfunction of fronto-parietal networks. An early temporo-parietal interhemispheric asymmetry might cause a pathological spatial bias which is associated with the ApoE4 genotype and may therefore function as an early cognitive marker of upcoming AD. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Exaggerated Risk: Prospect Theory and Probability Weighting in Risky Choice

    ERIC Educational Resources Information Center

    Kusev, Petko; van Schaik, Paul; Ayton, Peter; Dent, John; Chater, Nick

    2009-01-01

    In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and…

  18. Calibration of micromechanical parameters for DEM simulations by using the particle filter

    NASA Astrophysics Data System (ADS)

    Cheng, Hongyang; Shuku, Takayuki; Thoeni, Klaus; Yamamoto, Haruyuki

    2017-06-01

    The calibration of DEM models is typically accomplished by trial and error. However, the procedure lacks objectivity and involves several uncertainties. To deal with these issues, the particle filter is employed as a novel approach to calibrate DEM models of granular soils. The posterior probability distribution of the micro-parameters that give numerical results in good agreement with the experimental response of a Toyoura sand specimen is approximated by independent model trajectories, referred to as 'particles', based on Monte Carlo sampling. The soil specimen is modeled by polydisperse packings with different numbers of spherical grains. Prepared in 'stress-free' states, the packings are subjected to triaxial quasistatic loading. Given the experimental data, the posterior probability distribution is incrementally updated until convergence is reached. The resulting 'particles' with higher weights are identified as the calibration results. The evolutions of the weighted averages and posterior probability distribution of the micro-parameters are plotted to show the advantage of using a particle filter, i.e., multiple solutions are identified for each parameter with known probabilities of reproducing the experimental response.
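    The weight-update step at the heart of the particle filter described above can be sketched as follows; this is a schematic Gaussian-likelihood update, not the authors' implementation, and the numbers are placeholders.

```python
# Schematic particle-filter step: each "particle" is a candidate micro-parameter
# set; its weight is multiplied by the likelihood of the measured response given
# its simulated response, then the weights are renormalized.
import numpy as np

def update_particle_weights(weights, simulated, observed, obs_std):
    likelihood = np.exp(-0.5 * ((simulated - observed) / obs_std) ** 2)
    new_w = weights * likelihood
    return new_w / new_w.sum()

w = np.full(5, 0.2)                                   # five equally weighted particles
sim = np.array([98.0, 101.5, 95.0, 103.0, 100.2])     # hypothetical simulated responses
w = update_particle_weights(w, sim, observed=100.0, obs_std=2.0)
# Repeating this over successive loading increments concentrates weight on the
# micro-parameter sets that best reproduce the triaxial test data.
```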

  19. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  20. Equivalent weight of humic acid from peat

    USGS Publications Warehouse

    Pommer, A.M.; Breger, I.A.

    1960-01-01

    By means of discontinuous titration, the equivalent weight of humic acid isolated from a peat was found to increase from 144 to 183 between the third and fifty-second day after the humic acid was dissolved. Infra-red studies showed that the material had probably condensed with loss of carbonyl groups. ?? 1960.

  1. Propensity Score Weighting with Error-Prone Covariates

    ERIC Educational Resources Information Center

    McCaffrey, Daniel F.; Lockwood, J. R.; Setodji, Claude M.

    2011-01-01

    Inverse probability weighting (IPW) estimates are widely used in applications where data are missing due to nonresponse or censoring or in observational studies of causal effects where the counterfactuals cannot be observed. This extensive literature has shown the estimators to be consistent and asymptotically normal under very general conditions,…

  2. Marginal Structural Cox Models for Estimating the Association Between β-Interferon Exposure and Disease Progression in a Multiple Sclerosis Cohort

    PubMed Central

    Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Zhao, Yinshan; Shirani, Afsaneh; Kingwell, Elaine; Evans, Charity; van der Kop, Mia; Oger, Joel; Tremlett, Helen

    2014-01-01

    Longitudinal observational data are required to assess the association between exposure to β-interferon medications and disease progression among relapsing-remitting multiple sclerosis (MS) patients in the “real-world” clinical practice setting. Marginal structural Cox models (MSCMs) can provide distinct advantages over traditional approaches by allowing adjustment for time-varying confounders such as MS relapses, as well as baseline characteristics, through the use of inverse probability weighting. We assessed the suitability of MSCMs to analyze data from a large cohort of 1,697 relapsing-remitting MS patients in British Columbia, Canada (1995–2008). In the context of this observational study, which spanned more than a decade and involved patients with a chronic yet fluctuating disease, the recently proposed “normalized stabilized” weights were found to be the most appropriate choice of weights. Using this model, no association between β-interferon exposure and the hazard of disability progression was found (hazard ratio = 1.36, 95% confidence interval: 0.95, 1.94). For sensitivity analyses, truncated normalized unstabilized weights were used in additional MSCMs and to construct inverse probability weight-adjusted survival curves; the findings did not change. Additionally, qualitatively similar conclusions from approximation approaches to the weighted Cox model (i.e., MSCM) extend confidence in the findings. PMID:24939980

  3. Exploring the Specifications of Spatial Adjacencies and Weights in Bayesian Spatial Modeling with Intrinsic Conditional Autoregressive Priors in a Small-area Study of Fall Injuries

    PubMed Central

    Law, Jane

    2016-01-01

    Intrinsic conditional autoregressive modeling in a Bayesian hierarchical framework has been increasingly applied in small-area ecological studies. This study explores the specifications of spatial structure in this Bayesian framework in two aspects: adjacency, i.e., the set of neighbor(s) for each area; and the (spatial) weight for each pair of neighbors. Our analysis was based on a small-area study of falling injuries among people aged 65 and older in Ontario, Canada, that aimed to estimate risks and identify risk factors of such falls. In the case study, we observed incorrect adjacency information caused by deficiencies in the digital map itself. Further, when equal weights were replaced by weights based on a variable of expected count, the range of estimated risks increased, the number of areas with a probability of estimated risk greater than one at different probability thresholds increased, and model fit improved. More importantly, the significance of a risk factor diminished. Further research is recommended to thoroughly investigate different methods of variable weights, quantify the influence of specifications of spatial weights, and develop strategies for better defining the spatial structure of a map in small-area analysis in Bayesian hierarchical spatial modeling. PMID:29546147

  4. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  5. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.

  6. Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes

    PubMed Central

    Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro

    2016-01-01

    In this work a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a Change Detection problem. For this purpose, two sets of SPOT5-PAN images were used, from which Change Detection Indices (CDIs) were calculated. To minimize radiometric differences, a methodology based on zonal “invariant features” is suggested. The choice of one CDI over another for a change detection process is a subjective task, as each CDI is probably more or less sensitive to certain types of changes. Likewise, this idea might be employed to create and improve a “change map”, which can be accomplished by means of the CDI’s informational content. For this purpose, information metrics such as the Shannon Entropy and “Specific Information” have been used to weight the change and no-change categories contained in a certain CDI and are thus introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdf’s) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances. PMID:27706048

  7. Urinary incontinence, depression, and economic outcomes in a cohort of women between the ages of 54 and 65 years.

    PubMed

    Hung, Kristin J; Awtrey, Christopher S; Tsai, Alexander C

    2014-04-01

    To estimate the association between urinary incontinence (UI) and probable depression, work disability, and workforce exit. The analytic sample consisted of 4,511 women enrolled in the population-based Health and Retirement Study cohort. The analysis baseline was 1996, the year that questions about UI were added to the survey instrument, and at which time study participants were 54-65 years of age. Women were followed-up with biennial interviews until 2010-2011. Outcomes of interest were onset of probable depression, work disability, and workforce exit. Urinary incontinence was specified in different ways based on questions about experience and frequency of urine loss. We fit Cox proportional hazards regression models to the data, adjusting the estimates for baseline sociodemographic and health status variables previously found to confound the association between UI and the outcomes of interest. At baseline, 727 participants (survey-weighted prevalence, 16.6%; 95% confidence interval [CI] 15.4-18.0) reported any UI, of which 212 (survey-weighted prevalence, 29.2%; 95% CI 25.4-33.3) reported urine loss on more than 15 days in the past month; and 1,052 participants were categorized as having probable depression (survey-weighted prevalence, 21.6%; 95% CI 19.8-23.6). Urinary incontinence was associated with increased risks for probable depression (adjusted hazard ratio, 1.43; 95% CI 1.27-1.62) and work disability (adjusted hazard ratio, 1.21; 95% CI 1.01-1.45), but not workforce exit (adjusted hazard ratio, 1.06; 95% CI 0.93-1.21). In a population-based cohort of women between ages 54 and 65 years, UI was associated with increased risks for probable depression and work disability. Improved diagnosis and management of UI may yield significant economic and psychosocial benefits.

  8. Effects of pathogen exposure on life history variation in the gypsy moth (Lymantria dispar)

    PubMed Central

    Páez, David J.; Fleming-Davies, Arietta E.; Dwyer, Greg

    2015-01-01

    Investment in host defenses against pathogens may lead to tradeoffs with host fecundity. When such tradeoffs arise from genetic correlations, rates of phenotypic change by natural selection may be affected. However, genetic correlations between host survival and fecundity are rarely quantified. To understand tradeoffs between immune responses to baculovirus exposure and fecundity in the gypsy moth (Lymantria dispar), we estimated genetic correlations between survival probability and traits related to fecundity, such as pupal weight. In addition, we tested whether different virus isolates have different effects on male and female pupal weight. To estimate genetic correlations, we exposed individuals of known relatedness to a single baculovirus isolate. To then evaluate the effect of virus isolate on pupal weight, we exposed a single gypsy moth strain to 16 baculovirus isolates. We found a negative genetic correlation between survival and pupal weight. In addition, virus exposure caused late-pupating females to be identical in weight to males, whereas unexposed females were 2–3 times as large as unexposed males. Finally, we found that female pupal weight is a quadratic function of host mortality across virus isolates, which is likely due to tradeoffs and compensatory growth processes acting at high and low mortality levels, respectively. Overall, our results suggest that fecundity costs may strongly affect the response to selection for disease resistance. In nature, baculoviruses contribute to the regulation of gypsy moth outbreaks, as pathogens often do in forest-defoliating insects. We therefore argue that tradeoffs between host life-history traits may help explain outbreak dynamics. PMID:26201381

  9. A counterfactual p-value approach for benefit-risk assessment in clinical trials.

    PubMed

    Zeng, Donglin; Chen, Ming-Hui; Ibrahim, Joseph G; Wei, Rachel; Ding, Beiying; Ke, Chunlei; Jiang, Qi

    2015-01-01

    Clinical trials generally allow various efficacy and safety outcomes to be collected for health interventions. Benefit-risk assessment is an important issue when evaluating a new drug. Currently, there is a lack of standardized and validated benefit-risk assessment approaches in drug development due to various challenges. To quantify benefits and risks, we propose a counterfactual p-value (CP) approach. Our approach considers a spectrum of weights for weighting benefit-risk values and computes the extreme probabilities of observing the weighted benefit-risk value in one treatment group as if patients were treated in the other treatment group. The proposed approach is applicable to a single benefit and a single risk outcome as well as to the assessment of multiple benefit and risk outcomes. In addition, prior information about the relative importance of outcomes can be incorporated into the weight schemes. The proposed CP plot is intuitive, with a visualized weight pattern. The average area under the CP curve and the preferred probability over time are used for overall treatment comparison, and a bootstrap approach is applied for statistical inference. We assess the proposed approach using simulated data with multiple efficacy and safety endpoints and compare its performance with a stochastic multi-criteria acceptability analysis approach.

  10. [Prevalence of depression and anxiety in a cohort of 761 obese patients: impact in adherence to therapy and its outcome].

    PubMed

    Violante, Rafael; Santoro, Silvina; González, Claudio

    2011-01-01

    To determine the prevalence of psychiatric disorders in 761 obese patients, prospectively assessing their impact on both adherence to therapy and its outcome. Overweight and obesity were defined by body mass index (BMI), and depression and anxiety according to the Hospital Anxiety and Depression Scale. Patients received a physical and biochemical evaluation, a hypocaloric diet and a training plan. Sibutramine was prescribed as the anti-obesity drug. The mean age was 31.28 (SD 11.26) years. 74.77% were women. The mean weight was 91.36 kg with a BMI of 34.49 kg/m² (SD 6.29). The prevalence of possible, probable and definite anxiety/depression was 56.3%/22.0%, 29.8%/6.2%, and 7.2%/0.8%, respectively. Both initial and final weight and BMI were higher in definite and probable depression, respectively, and the percentage of weight loss was likewise lower. The studied psychiatric disturbances were prevalent in our population. Initial and final weight and BMI were higher in the groups with more severe anxiety or depression. The percentage of weight loss and adherence to therapy were greater in the groups with milder psychiatric disorders.

  11. Disability Weight of Clonorchis sinensis Infection: Captured from Community Study and Model Simulation

    PubMed Central

    Qian, Men-Bao; Chen, Ying-Dan; Fang, Yue-Yi; Xu, Long-Qi; Zhu, Ting-Jun; Tan, Tan; Zhou, Chang-Hai; Wang, Guo-Fei; Jia, Tie-Wu; Yang, Guo-Jing; Zhou, Xiao-Nong

    2011-01-01

    Background: Clonorchiasis is among the most neglected tropical diseases. It is caused by ingesting raw or undercooked fish or shrimp containing the larvae of Clonorchis sinensis and is mainly endemic in Southeast Asia, including China, Korea and Vietnam. The global estimates of the population at risk and infected are 601 million and 35 million, respectively. However, it is still not listed in the Global Burden of Disease (GBD) study and no disability weight is available for it. Disability weight reflects the average degree of loss of life value due to a certain chronic disease condition and ranges between 0 (complete health) and 1 (death). It is a crucial parameter for calculating the morbidity part of any disease burden in terms of disability-adjusted life years (DALYs). Methodology/Principal Findings: From the probability and disability weight of each single sequela caused by C. sinensis infection, the overall disability weight could be captured through Monte Carlo simulation. The probability of each sequela was obtained from one community investigation, while the corresponding disability weights were retrieved from the literature in an evidence-based approach. The overall disability weights for males and females were 0.101 and 0.050, respectively. The overall disability weights for the age groups 5–14, 15–29, 30–44, 45–59 and 60+ were 0.022, 0.052, 0.072, 0.094 and 0.118, respectively. There was some evidence showing that the disability weight and the geometric mean of eggs per gram of feces (GMEPG) fitted a logarithmic equation. Conclusion/Significance: The overall disability weights of C. sinensis infection differ between sexes and age groups. The disability weight captured here may be used as a reference for estimating the disease burden of C. sinensis infection. PMID:22180791
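    The Monte Carlo step described above can be sketched roughly as follows. The sequela names, probabilities and single-sequela disability weights are invented, and the multiplicative rule used to combine co-occurring sequelae is our assumption for illustration, not necessarily the combination rule used in the paper.

```python
# Rough sketch: sample which sequelae occur, combine their disability weights
# multiplicatively (an assumed rule), and average over simulations.
import numpy as np

sequelae = {                    # name: (probability of sequela, disability weight)
    "cholangitis":      (0.10, 0.10),
    "gallstones":       (0.15, 0.05),
    "hepatic_fibrosis": (0.05, 0.20),
}
rng = np.random.default_rng(1)
n_sim = 100_000
combined = np.zeros(n_sim)
for p, dw in sequelae.values():
    present = rng.random(n_sim) < p
    combined = 1.0 - (1.0 - combined) * (1.0 - np.where(present, dw, 0.0))
overall_dw = combined.mean()    # analogue of the "overall disability weight"
```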

  12. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).

  13. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455

  14. Household Financial Distress and Initial Endowments: Evidence from the 2008 Financial Crisis.

    PubMed

    Olafsson, Arna

    2016-11-01

    This paper studies in utero exposure to the 2008 financial crisis. Exploiting the sudden and unexpected collapse of the Icelandic economy, I find that first-trimester exposure to the crisis led to a sizable and significant reduction in birth weight, increased the probability of a low birth weight ( < 2500 g), and decreased the probability of a high birth weight ( > 4000 g). I also find evidence that the collapse reduced the sex ratio, indicating selection in utero due to maternal prenatal stress exposure. My results imply large welfare losses from financial distress that have hitherto been ignored - because children with worse health at birth can expect substantially lower lifetime earnings - and suggest that economic hardships may in general exacerbate income inequalities in the long run as low-income households are typically more exposed to financial distress. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  16. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  17. Interaction between body mass index and central adiposity and risk of incident cognitive impairment and dementia: results from the Women's Health Initiative Memory Study.

    PubMed

    Kerwin, Diana R; Gaussoin, Sarah A; Chlebowski, Rowan T; Kuller, Lewis H; Vitolins, Mara; Coker, Laura H; Kotchen, Jane M; Nicklas, Barbara J; Wassertheil-Smoller, Sylvia; Hoffmann, Raymond G; Espeland, Mark A

    2011-01-01

    To assess the relationship between body mass index (BMI) and waist-hip ratio (WHR) and the clinical end points of cognitive impairment and probable dementia in a cohort of older women enrolled in the Women's Health Initiative Memory Study (WHIMS). Prospective, randomized clinical trial of hormone therapies with annual cognitive assessments and anthropometrics. Fourteen U.S. clinical sites of the WHIMS. Seven thousand one hundred sixty-three postmenopausal women aged 65 to 80 without dementia. Annual cognitive assessments, average follow-up of 4.4 years, including classification of incident cognitive impairment and probable dementia. Height, weight, waist, and hip measurements were assessed at baseline, and a waist-hip ratio (WHR) of 0.8 or greater was used as a marker of central adiposity. There were statistically significant interactions between BMI and WHR and incident cognitive impairment and probable dementia with and without adjustment for a panel of cognitive risk factors. Women with a WHR of 0.80 or greater and a BMI of 20.0 to 24.9 kg/m² had a greater risk of cognitive impairment and probable dementia than more-obese women or women with a WHR less than 0.80, although women with a WHR less than 0.80 and a BMI of 20.0 to 24.9 kg/m² had poorer scores on cognitive assessments. WHR affects the relationship between BMI and risk of cognitive impairment and probable dementia in older women. Underweight women (BMI < 20.0 kg/m²) with a WHR less than 0.80 had a greater risk than those with higher BMIs. In normal-weight to obese women (20.0-29.9 kg/m²), central adiposity (WHR ≥ 0.80) is associated with greater risk of cognitive impairment and probable dementia than in women with higher BMI. These data suggest that central adiposity is a risk factor for cognitive impairment and probable dementia in normal-weight women. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.

  18. Links between quantum physics and thought.

    PubMed

    Robson, Barry

    2009-01-01

    Quantum mechanics (QM) provides a variety of ideas that can assist in developing Artificial Intelligence for healthcare, and opens the possibility of developing a unified system of Best Practice for inference that will embrace both QM and classical inference. Of particular interest is inference in the hyperbolic-complex plane, the counterpart of the normal i-complex plane of basic QM. There are two reasons. First, QM appears to rotate from i-complex Hilbert space to hyperbolic-complex descriptions when observations are made on wave functions as particles, yielding classical results, and classical laws of probability manipulation (e.g. the law of composition of probabilities) then hold, whereas in the i-complex plane they do not. Second, i-complex Hilbert space is not the whole story in physics. Hyperbolic complex planes arise in extension from the Dirac-Clifford calculus to particle physics, in relativistic correction thereby, and in regard to spinors and twistors. Generalizations of these forms resemble grammatical constructions and promote the idea that probability-weighted algebraic elements can be used to hold dimensions of syntactic and semantic meaning. It is also starting to look as though when a solution is reached by an inference system in the hyperbolic-complex, the hyperbolic-imaginary values disappear, while conversely hyperbolic-imaginary values are associated with the un-queried state of a system and goal seeking behavior.

  19. Method for detecting and avoiding flight hazards

    NASA Astrophysics Data System (ADS)

    von Viebahn, Harro; Schiefele, Jens

    1997-06-01

    Today's aircraft are equipped with several independent warning and hazard-avoidance systems such as GPWS, TCAS, or weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The developed method for detecting and avoiding flight hazards combines all potential external threats for an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft, consisting of discrete volume elements. For each element of the volume the threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the aircraft itself is predicted using a probability distribution. This approach ensures that all potential positions of the aircraft within the near future are considered while weighting the most likely flight path. A conflict detection algorithm initiates an alarm in case the threat probability exceeds a threshold. An escape manoeuvre is generated taking into account all potential hazards in the vicinity, not only the one which caused the alarm. The pilot receives visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain, and obstacle hazard-avoidance functions. Its general formulation allows easy integration of, e.g., weather information or airspace restrictions.

  20. Statistical power in parallel group point exposure studies with time-to-event outcomes: an empirical comparison of the performance of randomized controlled trials and the inverse probability of treatment weighting (IPTW) approach.

    PubMed

    Austin, Peter C; Schuster, Tibor; Platt, Robert W

    2015-10-15

    Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.

  1. Systemic Sclerosis Classification Criteria: Developing methods for multi-criteria decision analysis with 1000Minds

    PubMed Central

    Johnson, Sindhu R.; Naden, Raymond P.; Fransen, Jaap; van den Hoogen, Frank; Pope, Janet E.; Baron, Murray; Tyndall, Alan; Matucci-Cerinic, Marco; Denton, Christopher P.; Distler, Oliver; Gabrielli, Armando; van Laar, Jacob M.; Mayes, Maureen; Steen, Virginia; Seibold, James R.; Clements, Phillip; Medsger, Thomas A.; Carreira, Patricia E.; Riemekasten, Gabriela; Chung, Lorinda; Fessler, Barri J.; Merkel, Peter A.; Silver, Richard; Varga, John; Allanore, Yannick; Mueller-Ladner, Ulf; Vonk, Madelon C.; Walker, Ulrich A.; Cappelli, Susanna; Khanna, Dinesh

    2014-01-01

    Objective: Classification criteria for systemic sclerosis (SSc) are being developed. The objectives were to: develop an instrument for collating case data and evaluate its sensibility; use forced-choice methods to reduce and weight criteria; and explore agreement between experts on the probability that cases were classified as SSc. Study Design and Setting: A standardized instrument was tested for sensibility. The instrument was applied to 20 cases covering a range of probabilities that each had SSc. Experts rank-ordered cases from highest to lowest probability, reduced and weighted the criteria using forced-choice methods, and re-ranked the cases. Consistency in rankings was evaluated using intraclass correlation coefficients (ICC). Results: Experts endorsed clarity (83%), comprehensibility (100%), and face and content validity (100%). Criteria were weighted (points): finger skin thickening (14–22), finger-tip lesions (9–21), friction rubs (21), finger flexion contractures (16), pulmonary fibrosis (14), SSc-related antibodies (15), Raynaud’s phenomenon (13), calcinosis (12), pulmonary hypertension (11), renal crisis (11), telangiectasia (10), abnormal nailfold capillaries (10), esophageal dilation (7) and puffy fingers (5). The ICC across experts was 0.73 (95%CI 0.58,0.86) and improved to 0.80 (95%CI 0.68,0.90). Conclusions: Using a sensible instrument and forced-choice methods, the number of criteria was reduced by 39% (from 23 to 14) and the criteria were weighted. Our methods reflect the rigors of measurement science and serve as a template for developing classification criteria. PMID:24721558

  2. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
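    The bootstrap variance estimator that performed well in this study can be sketched generically as follows; fit_iptw_hazard_ratio is a placeholder for whatever weighted Cox fit is used, data is assumed to be a pandas DataFrame of subject-level records, and the weights should be re-estimated inside each resample so that their estimation uncertainty is propagated.

```python
# Generic bootstrap standard error for an IPTW estimator (schematic sketch only).
import numpy as np

def bootstrap_se(data, fit_iptw_hazard_ratio, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # resample subjects with replacement
        estimates.append(fit_iptw_hazard_ratio(data.iloc[idx]))
    return np.asarray(estimates).std(ddof=1)        # bootstrap standard error
```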

  3. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
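    The weighted fit at the core of multiple-probability fluctuation analysis can be sketched as below, using the textbook parabolic variance-mean relation var = Q·mean − mean²/N that holds under the simplest binomial release model; the data points and their variance-of-variance weights are invented placeholders.

```python
# Sketch of a weighted MPFA fit: variance vs. mean amplitude across release
# probabilities, each point weighted by the uncertainty of its variance estimate.
import numpy as np
from scipy.optimize import curve_fit

def mpfa_parabola(mean_amp, Q, N):
    return Q * mean_amp - mean_amp**2 / N

mean_amp   = np.array([20.0, 45.0, 80.0, 110.0])    # mean response amplitude (pA)
var_amp    = np.array([180.0, 330.0, 400.0, 330.0]) # variance at each probability
var_of_var = np.array([40.0, 60.0, 70.0, 60.0])     # e.g. from h-statistic estimators

(Q_hat, N_hat), _ = curve_fit(
    mpfa_parabola, mean_amp, var_amp,
    p0=[10.0, 30.0],
    sigma=np.sqrt(var_of_var),      # weights each point by its estimated uncertainty
    absolute_sigma=True,
)
```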

  4. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. Usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions in a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA then results from combining simulations considering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of this natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with specific Probability Density Functions, and meteorological and volcanological input values are chosen by using a stratified sampling method. This procedure allows for quantifying hazard without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and of its epistemic uncertainties. We have applied this scheme to analyze long-term tephra fall PHA from Vesuvius and Campi Flegrei, in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained show that PHA accounting for the whole natural variability is consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but is of a smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study, but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
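
    The within-size-class variability described above can be explored with a simple stratified sampling scheme over the input Probability Density Functions. The sketch below draws eruption column height and erupted mass from illustrative, assumed lognormal distributions; the distribution parameters and variable names are placeholders, not values from the study or the BET_VH implementation.

```python
# Sketch: stratified sampling of volcanological inputs for dispersal runs.
# The lognormal parameters are illustrative placeholders; each stratum of the
# uniform (0, 1) interval is sampled once so the full input PDF is covered.
import numpy as np
from scipy import stats

def stratified_uniform(n, rng):
    u = (np.arange(n) + rng.random(n)) / n     # one draw per stratum
    rng.shuffle(u)
    return u

rng = np.random.default_rng(42)
n_runs = 100
column_height_km = stats.lognorm(s=0.4, scale=15.0).ppf(stratified_uniform(n_runs, rng))
erupted_mass_kg = stats.lognorm(s=1.0, scale=1e11).ppf(stratified_uniform(n_runs, rng))
# Each (column_height_km[i], erupted_mass_kg[i]) pair would drive one dispersal
# simulation under one sampled meteorological condition.
```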

  5. Role of the parameters involved in the plan optimization based on the generalized equivalent uniform dose and radiobiological implications

    NASA Astrophysics Data System (ADS)

    Widesott, L.; Strigari, L.; Pressello, M. C.; Benassi, M.; Landoni, V.

    2008-03-01

    We investigated the role and the weight of the parameters involved in the intensity modulated radiation therapy (IMRT) optimization based on the generalized equivalent uniform dose (gEUD) method, for prostate and head-and-neck plans. We systematically varied the parameters (gEUDmax and weight) involved in the gEUD-based optimization of the rectal wall and parotid glands. We found that a proper value of the weight factor, while still guaranteeing planning treatment volume coverage, produced similar organ-at-risk dose-volume (DV) histograms for different gEUDmax values with fixed a = 1. Most importantly, we formulated a simple relation that links the reference gEUDmax and the associated weight factor. As a secondary objective, we evaluated plans obtained with the gEUD-based optimization against plans based on DV criteria, using normal tissue complication probability (NTCP) models. gEUD criteria seemed to improve sparing of the rectum and parotid glands with respect to DV-based optimization: the mean dose and the V40 and V50 values for the rectal wall were decreased by about 10%, and the mean dose to the parotids was decreased by about 20-30%. Beyond OAR sparing, we emphasize that the OAR optimization time was halved with the implementation of the gEUD-based cost function. Using NTCP models, we found differences between the two optimization criteria for the parotid glands, but not for the rectal wall.
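
    For reference, the generalized equivalent uniform dose used in this kind of optimization is commonly written as gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a), where vᵢ is the fractional volume receiving dose dᵢ; a minimal sketch (with an arbitrary example DVH, not data from the study) follows.

```python
# Sketch: generalized equivalent uniform dose from a differential DVH.
# doses: dose levels (Gy); volumes: fractional volumes; a: tissue parameter
# (a = 1 reduces the gEUD to the mean dose).
import numpy as np

def geud(doses, volumes, a):
    doses = np.asarray(doses, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    volumes = volumes / volumes.sum()
    return float(np.sum(volumes * doses ** a) ** (1.0 / a))

print(geud([10, 20, 30], [0.2, 0.5, 0.3], a=1.0))   # mean dose: 21.0 Gy
```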

  6. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    NASA Astrophysics Data System (ADS)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of the three states 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a binomial distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
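
    A minimal, discrete-time approximation of the weighted SIR dynamics described above can be simulated as follows; the parameter values and the exponential weight distribution are arbitrary illustrations, and this is not the continuous-time construction analyzed in the paper.

```python
# Sketch: discrete-time approximation of SIR dynamics with i.i.d. vertex
# weights rho on an Erdos-Renyi graph. A susceptible neighbour j of an
# infective i is infected with probability ~ beta * rho_i * rho_j * dt per
# step; removal occurs with probability gamma * dt. Illustration only.
import numpy as np
import networkx as nx

def simulate(n=1000, p=0.01, theta=0.02, beta=0.5, gamma=1.0, dt=0.01, t_max=3.0, seed=1):
    rng = np.random.default_rng(seed)
    g = nx.erdos_renyi_graph(n, p, seed=seed)
    rho = rng.exponential(1.0, n)                     # positive vertex weights
    state = np.where(rng.random(n) < theta, 1, 0)     # 0 = S, 1 = I, 2 = R
    for _ in range(int(t_max / dt)):
        nxt = state.copy()
        for i in np.flatnonzero(state == 1):
            for j in g.neighbors(i):
                if state[j] == 0 and rng.random() < beta * rho[i] * rho[j] * dt:
                    nxt[j] = 1
            if rng.random() < gamma * dt:
                nxt[i] = 2
        state = nxt
    return float(np.mean(state == 0))                 # proportion still susceptible

print(simulate())
```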

  7. Effects of alpha-2 agonists on renal function in hypertensive humans.

    PubMed

    Goldberg, M; Gehr, M

    1985-01-01

    Centrally acting adrenergic agonists, by decreasing peripheral adrenergic activity, are effective antihypertensive agents. The older agents, however, especially methyldopa, have been associated with weight gain, clinical edema, and antihypertensive tolerance when used as monotherapy. While acute studies in humans have demonstrated weight gain and sodium retention with clonidine and guanabenz, chronic administration results in a decrease in weight and plasma volume. The absence of chronic weight gain and of sodium retention could be the result of a counterbalance between hypotension-related antinatriuresis, secondary to a decrease in glomerular filtration rate and renal blood flow, and natriuretic activity, as a result of a decrease in renal sympathetic tone. Whereas natriuresis and water diuresis have been demonstrated in animals with acute clonidine or guanabenz administration, this has not been demonstrated in humans. Recent studies in which saline administration was used to precondition humans to a subsequent natriuretic stimulus (i.e., guanabenz-induced decreased renal adrenergic activity) resulted in stabilization of renal blood flow and natriuresis. Selective reduction of renal sympathetic activity affecting salt and water transport may explain why guanabenz and probably also clonidine seem to be devoid of the sodium/fluid-retaining properties that are common with other antihypertensive agents. Because agents of this class have effects other than pure central alpha-2 agonism (such as alpha-1 activity), they might have confounding and counterbalancing side effects leading to sodium and water retention.

  8. Arsenite in drinking water produces glucose intolerance in pregnant rats and their female offspring.

    PubMed

    Bonaventura, María Marta; Bourguignon, Nadia Soledad; Bizzozzero, Marianne; Rodriguez, Diego; Ventura, Clara; Cocca, Claudia; Libertun, Carlos; Lux-Lantos, Victoria Adela

    2017-02-01

    Drinking water is the main source of arsenic exposure. Chronic exposure has been associated with metabolic disorders. Here we studied the effects of arsenic on glucose metabolism in pregnant and post-partum dams and their offspring. We administered 5 (A5) or 50 (A50) mg/L of sodium arsenite in drinking water to rats from gestational day 1 (GD1) until two months postpartum (2MPP), and to their offspring from weaning until 8 weeks old. Liver arsenic dose-dependently increased in arsenite-treated rats to levels similar to those of exposed populations. Pregnant A50 rats gained less weight than controls and recovered normal weight at 2MPP. Arsenite-treated pregnant animals showed glucose intolerance on GD16-17, with impaired insulin secretion but normal insulin sensitivity; they showed dose-dependent increased pancreas insulin on GD18. All alterations reverted at 2MPP. Offspring from A50-treated mothers showed lower body weight at birth and at 4 and 8 weeks of age, and glucose intolerance in adult females, probably due to insulin secretion and sensitivity alterations. Arsenic alters glucose homeostasis during pregnancy by altering beta-cell function, increasing the risk of developing gestational diabetes. In pups, it induces low body weight from birth to 8 weeks of age, and glucose intolerance in females, demonstrating a sex-specific response. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Height and Weight of Children: United States.

    ERIC Educational Resources Information Center

    Hamill, Peter V. V.; And Others

    This report contains national estimates based on findings from the Health Examination Survey in 1963-65 on height and weight measurements of children 6- to 11-years-old. A nationwide probability sample of 7,119 children was selected to represent the noninstitutionalized children (about 24 million) in this age group. Height was obtained in stocking…

  10. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  11. Statistical comparison of pooled nitrogen washout data of various altitude decompression response groups

    NASA Technical Reports Server (NTRS)

    Edwards, B. F.; Waligora, J. M.; Horrigan, D. J., Jr.

    1985-01-01

    This analysis was done to determine whether various decompression response groups could be characterized by the pooled nitrogen (N2) washout profiles of the group members; pooling individual washout profiles provided a smooth, time-dependent function of means representative of each decompression response group. No statistically significant differences were detected. The statistical comparisons of the profiles were performed by means of univariate weighted t-tests at each 5-minute profile point, with significance levels of 5 and 10 percent. The estimated powers of the tests (i.e., probabilities) to detect the observed differences in the pooled profiles were of the order of 8 to 30 percent.

  12. A Study of Term Proximity and Document Weighting Normalization in Pseudo Relevance Feedback - UIUC at TREC 2009 Million Query Track

    DTIC Science & Technology

    2009-11-01

    … is estimated using the Gaussian kernel function: c′(w, i) = Σ_{j=1}^{N} c(w, j) exp[−(i − j)² / (2σ²)]  (2), where i and j are absolute positions of the corresponding terms in the document, N is the length of the document, and c(w, j) is the actual count of term w at position j. The PLM P(·|D, i) needs to … probability of relevance well. The distribution of relevance can be approximated as follows: p(i|θ_rel) = Σ_j δ(Q_j, i) / (Σ_i Σ_j δ(Q_j, i))  (10)
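
    To make equation (2) concrete, the sketch below computes the propagated count c′(w, i) for a toy tokenized document; it only illustrates the formula, is not code from the track submission, and the value of σ is an arbitrary choice.

```python
# Sketch: propagated count c'(w, i) from equation (2) for a toy document,
# c'(w, i) = sum_j c(w, j) * exp(-(i - j)^2 / (2 * sigma^2)).
import math

def propagated_count(tokens, word, i, sigma=25.0):
    return sum(math.exp(-((i - j) ** 2) / (2.0 * sigma ** 2))
               for j, t in enumerate(tokens) if t == word)

doc = "the quick brown fox jumps over the lazy dog".split()
print(propagated_count(doc, "the", i=4))
```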

  13. Egg production of turbot, Scophthalmus maximus, in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Nissling, Anders; Florin, Ann-Britt; Thorsen, Anders; Bergström, Ulf

    2013-11-01

    In the brackish water Baltic Sea turbot spawn at ~ 6-9 psu along the coast and on offshore banks in ICES SD 24-29, with salinity influencing the reproductive success. The potential fecundity (the stock of vitellogenic oocytes in the pre-spawning ovary), egg size (diameter and dry weight of artificially fertilized 1-day-old eggs) and gonad dry weight were assessed for fish sampled in SD 25 and SD 28. Multiple regression analysis identified somatic weight, or total length in combination with Fulton's condition factor, as main predictors of fecundity and gonad dry weight with stage of maturity (oocyte packing density or leading cohort) as an additional predictor. For egg size, somatic weight was identified as main predictor while otolith weight (proxy for age) was an additional predictor. Univariate analysis using GLM revealed significantly higher fecundity and gonad dry weight for turbot from SD 28 (3378-3474 oocytes/g somatic weight) compared to those from SD 25 (2343 oocytes/g somatic weight), with no difference in egg size (1.05 ± 0.03 mm diameter and 46.8 ± 6.5 μg dry weight; mean ± sd). The difference in egg production matched egg survival probabilities in relation to salinity conditions suggesting selection for higher fecundity as a consequence of poorer reproductive success at lower salinities. This supports the hypothesis of higher size-specific fecundity towards the limit of the distribution of a species as an adaptation to harsher environmental conditions and lower offspring survival probabilities. Within SD 28 comparisons were made between two major fishing areas targeting spawning aggregations and a marine protected area without fishing. The outcome was inconclusive and is discussed with respect to potential fishery induced effects, effects of the salinity gradient, effects of specific year-classes, and effects of maturation status of sampled fish.

  14. On-line node fault injection training algorithm for MLP networks: objective function and convergence analysis.

    PubMed

    Sum, John Pui-Fai; Leung, Chi-Sing; Ho, Kevin I-J

    2012-02-01

    Improving the fault tolerance of a neural network has been studied for more than two decades, and various training algorithms have subsequently been proposed. The on-line node fault injection-based algorithm is one of these algorithms, in which hidden nodes randomly output zeros during training. While the idea is simple, theoretical analyses of this algorithm are far from complete. This paper presents its objective function and the convergence proof. We consider three cases for multilayer perceptrons (MLPs): (1) MLPs with a single linear output node; (2) MLPs with multiple linear output nodes; and (3) MLPs with a single sigmoid output node. For the convergence proof, we show that the algorithm converges with probability one. For the objective function, we show that the corresponding objective functions of cases (1) and (2) are of the same form: they both consist of a mean squared error term, a regularizer term, and a weight decay term. For case (3), the objective function is slightly different from that of cases (1) and (2). With the objective functions derived, we can compare the similarities and differences among various algorithms and various cases.
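
    The on-line node fault injection idea can be illustrated with a single-hidden-layer MLP and a single linear output node (case (1) above): on each presented sample, every hidden node is zeroed with some fault probability before the gradient step. The sketch below illustrates that idea only, with arbitrary hyperparameters, and is not the authors' algorithm or analysis.

```python
# Sketch: one on-line update with node fault injection for an MLP with one
# hidden layer and a single linear output node. Each hidden node outputs zero
# with probability p_fault on this presentation; weights then follow a
# gradient step on the squared error. Hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def online_step(W1, b1, w2, b2, x, y, lr=0.01, p_fault=0.1):
    h = np.tanh(W1 @ x + b1)
    mask = (rng.random(h.shape) >= p_fault).astype(float)   # injected faults
    h_f = h * mask
    err = (w2 @ h_f + b2) - y                                # scalar output error
    dh = err * w2 * mask * (1.0 - h ** 2)                    # gradient through faulty nodes
    w2 = w2 - lr * err * h_f
    b2 = b2 - lr * err
    W1 = W1 - lr * np.outer(dh, x)
    b1 = b1 - lr * dh
    return W1, b1, w2, b2
```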

  15. The evolutionary ecology of decorating behaviour

    PubMed Central

    Ruxton, Graeme D.; Stevens, Martin

    2015-01-01

    Many animals decorate themselves through the accumulation of environmental material on their exterior. Decoration has been studied across a range of different taxa, but there are substantial limits to current understanding. Decoration in non-humans appears to function predominantly in defence against predators and parasites, although an adaptive function is often assumed rather than comprehensively demonstrated. It seems predominantly an aquatic phenomenon—presumably because buoyancy helps reduce energetic costs associated with carrying the decorative material. In terrestrial examples, decorating is relatively common in the larval stages of insects. Insects are small and thus able to generate the power to carry a greater mass of material relative to their own body weight. In adult forms, the need to be lightweight for flight probably rules out decoration. We emphasize that both benefits and costs to decoration are rarely quantified, and that costs should include those associated with collecting as well as carrying the material. PMID:26041868

  16. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
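
    The delta-function quadrature idea can be illustrated with a toy two-node closure: the node weights are chosen to satisfy the integral constraints (weights sum to one and reproduce the cell-mean composition), and the mean source term is the weighted sum of the rate law at the nodes. The rate law and values below are placeholders, not FDS chemistry.

```python
# Sketch: closing a mean source term with a two-node delta-function PDF.
# Node weights satisfy sum(w) = 1 and sum(w * z) = z_mean for the cell;
# the rate law S(z) is a generic placeholder, not the FDS chemistry.
import numpy as np

def mean_source(z_mean, z_nodes, rate_law):
    z1, z2 = z_nodes
    w1 = (z_mean - z2) / (z1 - z2)           # solves w1*z1 + (1 - w1)*z2 = z_mean
    weights = np.array([w1, 1.0 - w1])
    return float(np.sum(weights * rate_law(np.asarray(z_nodes))))

S = lambda z: 10.0 * z * (1.0 - z)           # placeholder rate law
print(mean_source(0.3, (0.1, 0.6), S))       # weighted mean of S at the nodes
```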

  17. Divergence of perturbation theory in large scale structures

    NASA Astrophysics Data System (ADS)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  18. Simulations of Turbulent Momentum and Scalar Transport in Non-Reacting Confined Swirling Coaxial Jets

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey; Moder, Jeffrey P.

    2015-01-01

    This paper presents the numerical simulations of confined three-dimensional coaxial water jets. The objectives are to validate the newly proposed nonlinear turbulence models of momentum and scalar transport, and to evaluate the newly introduced scalar APDF and DWFDF equation along with its Eulerian implementation in the National Combustion Code (NCC). Simulations conducted include the steady RANS, the unsteady RANS (URANS), and the time-filtered Navier-Stokes (TFNS); both without and with invoking the APDF or DWFDF equation. When the APDF (ensemble averaged probability density function) or DWFDF (density weighted filtered density function) equation is invoked, the simulations are of a hybrid nature, i.e., the transport equations of energy and species are replaced by the APDF or DWFDF equation. Results of simulations are compared with the available experimental data. Some positive impacts of the nonlinear turbulence models and the Eulerian scalar APDF and DWFDF approach are observed.

  19. Automated real-time structure health monitoring via signature pattern recognition

    NASA Astrophysics Data System (ADS)

    Sun, Fanping P.; Chaudhry, Zaffir A.; Rogers, Craig A.; Majmundar, M.; Liang, Chen

    1995-05-01

    Described in this paper are the details of an automated real-time structure health monitoring system. The system is based on structural signature pattern recognition. It uses an array of piezoceramic patches bonded to the structure as integrated sensor-actuators, an electric impedance analyzer for structural frequency response function acquisition, and a PC for control and graphic display. An assembled 3-bay truss structure is employed as a test bed. Two issues that are critical for the success of this technique, the localization of the sensing area and sensor temperature drift, are addressed, and a novel approach to providing temperature compensation using a probability correlation function is presented. Due to the negligible weight and size of the solid-state sensor array and its ability to sense incipient-type damage, the system can eventually be implemented on many types of structures, such as aircraft, spacecraft, large-span dome roofs and steel bridges, requiring multilocation and real-time health monitoring.

  20. Volume-weighted measure for eternal inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winitzki, Sergei

    2008-08-15

    I propose a new volume-weighted probability measure for cosmological 'multiverse' scenarios involving eternal inflation. The 'reheating-volume (RV) cutoff' calculates the distribution of observable quantities on a portion of the reheating hypersurface that is conditioned to be finite. The RV measure is gauge-invariant, does not suffer from the 'youngness paradox', and is independent of initial conditions at the beginning of inflation. In slow-roll inflationary models with a scalar inflaton, the RV-regulated probability distributions can be obtained by solving nonlinear diffusion equations. I discuss possible applications of the new measure to 'landscape' scenarios with bubble nucleation. As an illustration, I compute the predictions of the RV measure in a simple toy landscape.

  1. From instinct to intellect: the challenge of maintaining healthy weight in the modern world.

    PubMed

    Peters, J C; Wyatt, H R; Donahoo, W T; Hill, J O

    2002-05-01

    The global obesity epidemic is being driven in large part by a mismatch between our environment and our metabolism. Human physiology developed to function within an environment where high levels of physical activity were needed in daily life and food was inconsistently available. For most of mankind's history, physical activity has 'pulled' appetite so that the primary challenge to the physiological system for body weight control was to obtain sufficient energy intake to prevent negative energy balance and body energy loss. The current environment is characterized by a situation whereby minimal physical activity is required for daily life and food is abundant, inexpensive, high in energy density and widely available. Within this environment, food intake 'pushes' the system, and the challenge to the control system becomes to increase physical activity sufficiently to prevent positive energy balance. There does not appear to be a strong drive to increase physical activity in response to excess energy intake and there appears to be only a weak adaptive increase in resting energy expenditure in response to excess energy intake. In the modern world, the prevailing environment constitutes a constant background pressure that promotes weight gain. We propose that the modern environment has taken body weight control from an instinctual (unconscious) process to one that requires substantial cognitive effort. In the current environment, people who are not devoting substantial conscious effort to managing body weight are probably gaining weight. It is unlikely that we would be able to build the political will to undo our modern lifestyle, to change the environment back to one in which body weight control again becomes instinctual. In order to combat the growing epidemic we should focus our efforts on providing the knowledge, cognitive skills and incentives for controlling body weight and at the same time begin creating a supportive environment to allow better management of body weight.

  2. Systematic review: an evaluation of major commercial weight loss programs in the United States.

    PubMed

    Tsai, Adam Gilden; Wadden, Thomas A

    2005-01-04

    Each year millions of Americans enroll in commercial and self-help weight loss programs. Health care providers and their obese patients know little about these programs because of the absence of systematic reviews. To describe the components, costs, and efficacy of the major commercial and organized self-help weight loss programs in the United States that provide structured in-person or online counseling. Review of company Web sites, telephone discussion with company representatives, and search of the MEDLINE database. Randomized trials at least 12 weeks in duration that enrolled only adults and assessed interventions as they are usually provided to the public, or case series that met these criteria, stated the number of enrollees, and included a follow-up evaluation that lasted 1 year or longer. Data were extracted on study design, attrition, weight loss, duration of follow-up, and maintenance of weight loss. We found studies of eDiets.com, Health Management Resources, Take Off Pounds Sensibly, OPTIFAST, and Weight Watchers. Of 3 randomized, controlled trials of Weight Watchers, the largest reported a loss of 3.2% of initial weight at 2 years. One randomized trial and several case series of medically supervised very-low-calorie diet programs found that patients who completed treatment lost approximately 15% to 25% of initial weight. These programs were associated with high costs, high attrition rates, and a high probability of regaining 50% or more of lost weight in 1 to 2 years. Commercial interventions available over the Internet and organized self-help programs produced minimal weight loss. Because many studies did not control for high attrition rates, the reported results are probably a best-case scenario. With the exception of 1 trial of Weight Watchers, the evidence to support the use of the major commercial and self-help weight loss programs is suboptimal. Controlled trials are needed to assess the efficacy and cost-effectiveness of these interventions.

  3. Analysis of mean seismic ground motion and its uncertainty based on the UCERF3 geologic slip rate model with uncertainty for California

    USGS Publications Warehouse

    Zeng, Yuehua

    2018-01-01

    The Uniform California Earthquake Rupture Forecast v.3 (UCERF3) model (Field et al., 2014) considers epistemic uncertainty in fault‐slip rate via the inclusion of multiple rate models based on geologic and/or geodetic data. However, these slip rates are commonly clustered about their mean value and do not reflect the broader distribution of possible rates and associated probabilities. Here, we consider both a double‐truncated 2σ Gaussian and a boxcar distribution of slip rates and use a Monte Carlo simulation to sample the entire range of the distribution for California fault‐slip rates. We compute the seismic hazard following the methodology and logic‐tree branch weights applied to the 2014 national seismic hazard model (NSHM) for the western U.S. region (Petersen et al., 2014, 2015). By applying a new approach developed in this study to the probabilistic seismic hazard analysis (PSHA) using precomputed rates of exceedance from each fault as a Green’s function, we reduce the computer time by about 10^5‐fold and apply it to the mean PSHA estimates with 1000 Monte Carlo samples of fault‐slip rates to compare with results calculated using only the mean or preferred slip rates. The difference in the mean probabilistic peak ground motion corresponding to a 2% in 50‐yr probability of exceedance is less than 1% on average over all of California for both the Gaussian and boxcar probability distributions for slip‐rate uncertainty but reaches about 18% in areas near faults compared with that calculated using the mean or preferred slip rates. The average uncertainties in 1σ peak ground‐motion level are 5.5% and 7.3% of the mean with the relative maximum uncertainties of 53% and 63% for the Gaussian and boxcar probability density function (PDF), respectively.
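
    The two slip-rate uncertainty models described above can be sampled as in the sketch below, with a Gaussian truncated at ±2σ and a boxcar (uniform) distribution over the same interval; the mean rate and σ are placeholder values, not figures from the study.

```python
# Sketch: Monte Carlo slip-rate samples under the two uncertainty models
# discussed above: a Gaussian truncated at +/- 2 sigma and a boxcar (uniform)
# distribution over the same interval. Mean and sigma are placeholders.
import numpy as np

rng = np.random.default_rng(7)

def truncated_gaussian(mean, sigma, size):
    out = np.empty(0)
    while out.size < size:                              # simple rejection sampling
        draw = rng.normal(mean, sigma, size)
        out = np.concatenate([out, draw[np.abs(draw - mean) <= 2 * sigma]])
    return out[:size]

mean_rate, sigma = 5.0, 1.0                             # mm/yr, illustrative only
gauss_rates = truncated_gaussian(mean_rate, sigma, 1000)
boxcar_rates = rng.uniform(mean_rate - 2 * sigma, mean_rate + 2 * sigma, 1000)
# Each sampled rate would feed one hazard calculation; the spread of the
# resulting ground motions gives the uncertainty quoted in the abstract.
```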

  4. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  5. On the Log-Normality of Historical Magnetic-Storm Intensity Statistics: Implications for Extreme-Event Probabilities

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.

    2015-12-01

    An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both methods also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having a -Dst > 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
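
    A maximum-likelihood log-normal fit of storm-maximum intensities reduces to the sample mean and standard deviation of the logarithms; the sketch below fits such a model and converts the tail probability above a Carrington-class threshold into an expected events-per-century count. The data array and the storms-per-century rate are placeholders, not the study's catalogue.

```python
# Sketch: maximum-likelihood log-normal fit to storm-maximum -Dst values and
# an exceedance estimate for a Carrington-class threshold (-Dst > 850 nT).
# The data array and storms-per-century rate are placeholders.
import numpy as np
from scipy import stats

dst_maxima = np.array([120, 150, 210, 95, 330, 180, 250, 140, 400, 160.0])  # fake -Dst (nT)

log_x = np.log(dst_maxima)
mu, sigma = log_x.mean(), log_x.std(ddof=0)      # ML estimates for a log-normal
p_exceed = 1.0 - stats.norm.cdf((np.log(850.0) - mu) / sigma)

storms_per_century = 600.0                        # placeholder occurrence rate
print("Carrington-class storms per century:", storms_per_century * p_exceed)
```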

  6. Perceptual-center modeling is affected by including acoustic rate-of-change modulations.

    PubMed

    Harsin, C A

    1997-02-01

    This study investigated the acoustic correlates of perceptual centers (p-centers) in CV and VC syllables and developed an acoustic p-center model. In Part 1, listeners located syllables' p-centers by a method-of-adjustment procedure. The CV syllables contained the consonants /s/,/r/,/n/,/t/,/d/,/k/, and /g/; the VCs, the consonants /s/,/r/, and /n/. The vowel in all syllables was /a/. The results of this experiment replicated and extended previous findings regarding the effects of phonetic variation on p-centers. In Part 2, a digital signal processing procedure was used to acoustically model p-center perception. Each stimulus was passed through a six-band digital filter, and the outputs were processed to derive low-frequency modulation components. These components were weighted according to a perceived modulation magnitude function and recombined to create six psychoacoustic envelopes containing modulation energies from 3 to 47 Hz. In this analysis, p-centers were found to be highly correlated with the time-weighted function of the rate-of-change in the psychoacoustic envelopes, multiplied by the psychoacoustic envelope magnitude increment. The results were interpreted as suggesting (1) the probable role of low-frequency energy modulations in p-center perception, and (2) the presence of perceptual processes that integrate multiple articulatory events into a single syllabic event.

  7. Validation of the Dominant Sequence Paradigm and Role of Dynamic Contrast-enhanced Imaging in PI-RADS Version 2.

    PubMed

    Greer, Matthew D; Shih, Joanna H; Lay, Nathan; Barrett, Tristan; Kayat Bittencourt, Leonardo; Borofsky, Samuel; Kabakus, Ismail M; Law, Yan Mee; Marko, Jamie; Shebel, Haytham; Mertan, Francesca V; Merino, Maria J; Wood, Bradford J; Pinto, Peter A; Summers, Ronald M; Choyke, Peter L; Turkbey, Baris

    2017-12-01

    Purpose To validate the dominant pulse sequence paradigm and limited role of dynamic contrast material-enhanced magnetic resonance (MR) imaging in the Prostate Imaging Reporting and Data System (PI-RADS) version 2 for prostate multiparametric MR imaging by using data from a multireader study. Materials and Methods This HIPAA-compliant retrospective interpretation of prospectively acquired data was approved by the local ethics committee. Patients were treatment-naïve with endorectal coil 3-T multiparametric MR imaging. A total of 163 patients were evaluated, 110 with prostatectomy after multiparametric MR imaging and 53 with negative multiparametric MR imaging and systematic biopsy findings. Nine radiologists participated in this study and interpreted images in 58 patients, on average (range, 56-60 patients). Lesions were detected with PI-RADS version 2 and were compared with whole-mount prostatectomy findings. Probability of cancer detection for overall, T2-weighted, and diffusion-weighted (DW) imaging PI-RADS scores was calculated in the peripheral zone (PZ) and transition zone (TZ) by using generalized estimating equations. To determine dominant pulse sequence and benefit of dynamic contrast-enhanced (DCE) imaging, odds ratios (ORs) were calculated as the ratio of odds of cancer of two consecutive scores by logistic regression. Results A total of 654 lesions (420 in the PZ) were detected. The probability of cancer detection for PI-RADS category 2, 3, 4, and 5 lesions was 15.7%, 33.1%, 70.5%, and 90.7%, respectively. DW imaging outperformed T2-weighted imaging in the PZ (OR, 3.49 vs 2.45; P = .008). T2-weighted imaging performed better but did not clearly outperform DW imaging in the TZ (OR, 4.79 vs 3.77; P = .494). Lesions classified as PI-RADS category 3 at DW MR imaging and as positive at DCE imaging in the PZ showed a higher probability of cancer detection than did DCE-negative PI-RADS category 3 lesions (67.8% vs 40.0%, P = .02). The addition of DCE imaging to DW imaging in the PZ was beneficial (OR, 2.0; P = .027), with an increase in the probability of cancer detection of 15.7%, 16.0%, and 9.2% for PI-RADS category 2, 3, and 4 lesions, respectively. Conclusion DW imaging outperforms T2-weighted imaging in the PZ; T2-weighted imaging did not show a significant difference when compared with DW imaging in the TZ by PI-RADS version 2 criteria. The addition of DCE imaging to DW imaging scores in the PZ yields meaningful improvements in probability of cancer detection. © RSNA, 2017 An earlier incorrect version of this article appeared online. This article was corrected on July 27, 2017. Online supplemental material is available for this article.

  8. Reduction of tablet weight variability by optimizing paddle speed in the forced feeder of a high-speed rotary tablet press.

    PubMed

    Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul

    2015-04-01

    Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on the final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speeds and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured via a high-speed rotary tablet press using design of experiments (DoE). During each experiment also the volume of powder in the forced feeder was measured. Analysis of the DoE revealed that paddle speeds are of minor importance for tablet weight but significantly affect volume of powder inside the feeder in case of powders with excellent flowability (DCP). The opposite effect of paddle speed was observed for fairly flowing powders (MCC). Tableting speed played a role in weight and weight variability, whereas changing fill depth exclusively influenced tablet weight. The DoE approach allowed predicting the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessing the probability to exceed the acceptable response limits if factor settings were varied around their optimum. This multi-dimensional combination and interaction of input variables leading to response criteria with acceptable probability reflected the design space.

  9. Life cycle efficiency of beef production: II. Relationship of cow efficiency ratios to traits of the dam and progeny weaned.

    PubMed

    Davis, M E; Rutledge, J J; Cundiff, L V; Hauser, E R

    1983-10-01

    Several measures of life cycle cow efficiency were calculated using weights and individual feed consumptions recorded on 160 dams of beef, dairy and beef X dairy breeding and their progeny. Ratios of output to input were used to estimate efficiency, where outputs included weaning weights of progeny plus salvage value of the dam and inputs included creep feed consumed by progeny plus feed consumed by the dam over her entire lifetime. In one approach to estimating efficiency, inputs and outputs were weighted by probabilities that were a function of the cow herd age distribution and percentage calf crop in a theoretical herd. The second approach to estimating cow efficiency involved dividing the sum of the weights by the sum of the feed consumption values, with all pieces of information being given equal weighting. Relationships among efficiency estimates and various traits of dams and progeny were examined. Weights, heights, and weight:height ratios of dams at 240 d of age were not correlated significantly with subsequent efficiency of calf production, indicating that indirect selection for lifetime cow efficiency at an early age based on these traits would be ineffective. However, females exhibiting more efficient weight gains from 240 d to first calving tended to become more efficient dams. Correlations of efficiency with weight of dam at calving and at weaning were negative and generally highly significant. Height at withers was negatively related to efficiency. Ratio of weight to height indicated that fatter dams generally were less efficient. The effect of milk production on efficiency depended upon the breed combinations involved. Dams calving for the first time at an early age and continuing to calve at short intervals were superior in efficiency. Weaning rate was closely related to life cycle efficiency. Large negative correlations between efficiency and feed consumption of dams were observed, while correlations of efficiency with progeny weights and feed consumptions in individual parities tended to be positive though nonsignificant. However, correlations of efficiency with accumulative progeny weights and feed consumptions generally were significant.

  10. The application of geostationary propagation models to non-geostationary propagation measurements

    NASA Astrophysics Data System (ADS)

    Haddock, Paul Christopher

    Atmospheric attenuation becomes evident above 10 GHz due to the absorption of microwave energy from the molecular motion of the atmospheric constituents. Atmospheric effects on satellite communications systems operating at frequencies greater than 10 GHz become more pronounced. Most geostationary (GEO) climate models, which predict the fading statistics for earth-space telecommunications, have satellite elevation angle as one of the input parameters. There has been interest in the industry in applying the propagation models developed for GEO satellites to the non-geostationary (NGO) satellite case. With NGO satellites, the elevation angle to the satellite is time-variable, and as a result the earth-space propagation medium is time-varying. We can calculate the expected probability that a satellite, in a given orbit, will be found at a given elevation angle as a percentage of the year, based on the satellite orbital elements, the minimum elevation angle allowed in the constellation operation plan, and the constellation configuration. From this calculation, we can develop an empirical fit to a given probability density function (PDF) to account for the distribution of elevation angles. This PDF serves as a weighting function for the elevation input into the GEO climate model to produce the overall fading statistics for the NGO case. In this research, a Ka-band total power radiometer was developed to measure the down-dwelling incoherent radiant electromagnetic energy from the atmosphere. This whole-sky sampling radiometer collected 1 year of radiometric measurements. These observations occurred at varying elevation and azimuthal angles, in close proximity to a weak water vapor absorption line. By referencing the output power of the radiometer to known radiometric emissions and by performing frequent internal calibrations, the developed radiometer provided long-term, highly accurate and stable low-level derived attenuation measurements. By correlating the 1 year of atmospheric measurements with the modified GEO climate model, we test the hypothesis that, by application of the proper elevation weighting factors, the GEO model is applicable to the NGO case, where the time-varying elevation angle changes occur over short time periods. Finally, we look at the joint statistics of multiple link failures. Using the 1 year of observed attenuations for multiple sky sections, we show, for a given sky section, the probability that its attenuation level will be equaled or exceeded in each of the remaining sky sections.
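
    The elevation-angle PDF acting as a weighting function can be expressed as a simple mixture: the NGO exceedance statistic is the elevation-probability-weighted sum of the GEO model evaluated at each elevation bin. In the sketch below, geo_fade_exceedance is a hypothetical stand-in for any GEO climate model evaluated at a fixed elevation.

```python
# Sketch: elevation-weighted fade statistics for a non-geostationary link.
# geo_fade_exceedance(att_db, elev_deg) is a hypothetical stand-in for a GEO
# climate model at fixed elevation; p_elev holds the yearly probability of
# each elevation bin and is normalized to sum to 1.
import numpy as np

def ngso_fade_exceedance(att_db, elev_bins_deg, p_elev, geo_fade_exceedance):
    p_elev = np.asarray(p_elev, dtype=float)
    p_elev = p_elev / p_elev.sum()
    return float(sum(p * geo_fade_exceedance(att_db, e)
                     for e, p in zip(elev_bins_deg, p_elev)))
```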

  11. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.

  12. Lipid mobilising factors specifically associated with cancer cachexia.

    PubMed Central

    Beck, S. A.; Tisdale, M. J.

    1991-01-01

    Both urine and plasma from mice and humans with cancer cachexia have been shown to contain higher levels of lipid mobilising activity than normal controls, even after acute starvation. There was no significant increase in the urinary lipid mobilising activity of either mice or humans after acute starvation, suggesting that the material in the cachectic situation was probably not due to an elevation of hormones normally associated with the catabolic state in starvation. Further characterisation of the lipid mobilising activity in the urine of cachectic mice using Sephadex G50 exclusion chromatography showed four distinct peaks of activity of apparent molecular weights of greater than 20, 3, 1.5 and less than 0.7 kDa. No comparable peaks of activity were found in the urine of a non tumour-bearing mouse. The high molecular weight activity was probably formed by aggregation of low molecular weight material, since treatment with 0.5 M NaCl caused dissociation to material with a broad spectrum of molecular weights between 3 and 0.7 kDa. Lipolytic species of similar molecular weights were also found in the urine of cachectic cancer patients, but not in normal urine even after 24 h starvation. The lipid mobilising species may be responsible for catabolism of host adipose tissue in the cachectic state. PMID:2069843

  13. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on binary outcomes

    PubMed Central

    2018-01-01

    Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G‐computation. All methods resulted in essentially unbiased estimation of the population dose‐response function. However, GPS‐based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. PMID:29508424

  14. Assessing the performance of the generalized propensity score for estimating the effect of quantitative or continuous exposures on binary outcomes.

    PubMed

    Austin, Peter C

    2018-05-20

    Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (eg, dose or quantity of medication, income, or years of education). We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of continuous exposures on binary outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. We examined both the use of ordinary least squares to estimate the propensity function and the use of the covariate balancing propensity score algorithm. The use of methods based on the GPS was compared with the use of G-computation. All methods resulted in essentially unbiased estimation of the population dose-response function. However, GPS-based weighting tended to result in estimates that displayed greater variability and had higher mean squared error when the magnitude of confounding was strong. Of the methods based on the GPS, covariate adjustment using the GPS tended to result in estimates with lower variability and mean squared error when the magnitude of confounding was strong. We illustrate the application of these methods by estimating the effect of average neighborhood income on the probability of death within 1 year of hospitalization for an acute myocardial infarction. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
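
    One common way to construct GPS-based weights for a continuous exposure, assuming the exposure model is a normal density whose mean comes from an ordinary least squares fit of the exposure on covariates, is sketched below with stabilization by the marginal exposure density; this is a generic construction and is not claimed to be the exact estimator evaluated in the paper.

```python
# Sketch: stabilized weights from a generalized propensity score for a
# continuous exposure. The conditional density f(E | X) is modelled as normal
# with mean from an OLS fit; the marginal density f(E) stabilizes the weights.
# X and exposure are NumPy arrays. A generic construction, not necessarily the
# exact estimator used in the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy import stats

def gps_stabilized_weights(X, exposure):
    fit = LinearRegression().fit(X, exposure)
    resid = exposure - fit.predict(X)
    sigma = resid.std(ddof=X.shape[1] + 1)
    gps = stats.norm.pdf(exposure, loc=fit.predict(X), scale=sigma)          # f(E | X)
    marginal = stats.norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std(ddof=1))
    return marginal / gps
```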

  15. Back-Propagation Operation for Analog Neural Network Hardware with Synapse Components Having Hysteresis Characteristics

    PubMed Central

    Ueda, Michihito; Nishitani, Yu; Kaneko, Yukihiro; Omote, Atsushi

    2014-01-01

    To realize analog artificial neural network hardware, the circuit element for the synapse function is important because the number of synapse elements is much larger than that of neuron elements. One of the candidates for this synapse element is a ferroelectric memristor. This device functions as a voltage-controllable variable resistor, which can be used to store a synapse weight. However, its conductance shows hysteresis characteristics and dispersion with respect to the input voltage. Therefore, the conductance values vary according to the history of the height and the width of the applied pulse voltage. Due to the difficulty of controlling the conductance accurately, it is not easy to apply the back-propagation learning algorithm to neural network hardware having memristor synapses. To solve this problem, we proposed and simulated a learning operation procedure as follows. Employing a weight perturbation technique, we derived the error change. When the error decreased, the next pulse voltage was updated according to the back-propagation learning algorithm. If the error increased, the amplitude of the next voltage pulse was set in such a way as to produce a similar memristor conductance but in the opposite voltage scanning direction. By this operation, we could eliminate the hysteresis and confirmed that the simulation of the learning operation converged. We also incorporated conductance dispersion numerically in the simulation. We examined the probability that the error decreased to a designated value within a predetermined loop number. The ferroelectric has the characteristic that the magnitude of polarization does not decrease when voltages of the same polarity are applied. This characteristic greatly improved the probability, even when the learning rate was small, provided the magnitude of the dispersion was adequate. Because the dispersion of analog circuit elements is inevitable, this learning operation procedure is useful for analog neural network hardware. PMID:25393715
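
    The weight perturbation step referred to above can be illustrated in the abstract, independently of the memristor physics: perturb one synapse weight, re-measure the network error, and update that weight against the estimated error gradient. In the sketch below, network_error is a hypothetical stand-in for evaluating the hardware (or a model of it) with a given weight vector.

```python
# Sketch: a single weight-perturbation step. One synapse weight is perturbed,
# the network error is re-measured, and that weight is updated against the
# estimated error change. network_error(w) is a hypothetical stand-in for
# evaluating the hardware (or a simulation of it) with weight vector w.
import numpy as np

def weight_perturbation_step(w, network_error, lr=0.05, delta=1e-3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    i = rng.integers(w.size)                   # pick one synapse to perturb
    base = network_error(w)
    w_pert = w.copy()
    w_pert[i] += delta
    grad_est = (network_error(w_pert) - base) / delta
    w_new = w.copy()
    w_new[i] -= lr * grad_est                  # move against the error change
    return w_new
```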

  16. New S control chart using skewness correction method for monitoring process dispersion of skewed distributions

    NASA Astrophysics Data System (ADS)

    Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha

    2017-11-01

    The control chart is established as one of the most powerful tools in Statistical Process Control (SPC) and is widely used in industry. Conventional control charts rely on the normality assumption, which does not always hold for industrial data. This paper proposes a new S control chart for monitoring process dispersion using a skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with that of various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S); the skewness correction R chart (SC-R); the weighted variance R chart (WV-R); the weighted variance S chart (WV-S); and the standard S chart (STD-S). A comparison with the exact S control chart with regard to the probability of out-of-control detection is also performed. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control probabilities (Type I error) at almost all skewness levels and sample sizes, n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than that of all the existing control charts for monitoring process dispersion in terms of both Type I error and the probability of detecting a shift.

  17. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  18. Developing a weighting strategy to include mobile phone numbers into an ongoing population health survey using an overlapping dual-frame design with limited benchmark information

    PubMed Central

    2014-01-01

    Background In 2012 mobile phone numbers were included in the ongoing New South Wales Population Health Survey (NSWPHS) using an overlapping dual-frame design. Previously in the NSWPHS the sample was selected using random digit dialing (RDD) of landline phone numbers. The survey was undertaken using computer assisted telephone interviewing (CATI). The weighting strategy needed to be significantly expanded to manage the differing probabilities of selection by frame, including that of children of mobile-only phone users, and to adjust for the increased chance of selection of dual-phone users. This paper describes the development of the final weighting strategy to properly combine the data from two overlapping sample frames accounting for the fact that population benchmarks for the different sampling frames were not available at the state or regional level. Methods Estimates of the number of phone numbers for the landline and mobile phone frames used to calculate the differing probabilities of selection by frame, for New South Wales (NSW) and by stratum, were obtained by apportioning Australian estimates as none were available for NSW. The weighting strategy was then developed by calculating person selection probabilities, selection weights, applying a constant composite factor to the dual-phone users sample weights, and benchmarking to the latest NSW population by age group, sex and stratum. Results Data from the NSWPHS for the first quarter of 2012 were used to test the weighting strategy. This consisted of data on 3395 respondents with 2171 (64%) from the landline frame and 1224 (36%) from the mobile frame. However, in order to calculate the weights, data needed to be available for all core weighting variables and so 3378 respondents, 2933 adults and 445 children, had sufficient data to be included. Average person weights were 3.3 times higher for the mobile-only respondents, 1.3 times higher for the landline-only respondents and 1.7 times higher for dual-phone users in the mobile frame compared to the dual-phone users in the landline frame. The overall weight effect for the first quarter of 2012 was 1.93 and the coefficient of variation of the weights was 0.96. The weight effects for 2012 were similar to, and in many cases less than, the effects found in the corresponding quarter of the 2011 NSWPHS when only a landline based sample was used. Conclusions The inclusion of mobile phone numbers, through an overlapping dual-frame design, improved the coverage of the survey and an appropriate weighting procedure is feasible, although it added substantially to the complexity of the weighting strategy. Access to accurate Australian, State and Territory estimates of the number of landline and mobile phone numbers and type of phone use by at least age group and sex would greatly assist in the weighting of dual-frame surveys in Australia. PMID:25189826

  19. Developing a weighting strategy to include mobile phone numbers into an ongoing population health survey using an overlapping dual-frame design with limited benchmark information.

    PubMed

    Barr, Margo L; Ferguson, Raymond A; Hughes, Phil J; Steel, David G

    2014-09-04

    In 2012 mobile phone numbers were included into the ongoing New South Wales Population Health Survey (NSWPHS) using an overlapping dual-frame design. Previously in the NSWPHS the sample was selected using random digit dialing (RDD) of landline phone numbers. The survey was undertaken using computer assisted telephone interviewing (CATI). The weighting strategy needed to be significantly expanded to manage the differing probabilities of selection by frame, including that of children of mobile-only phone users, and to adjust for the increased chance of selection of dual-phone users. This paper describes the development of the final weighting strategy to properly combine the data from two overlapping sample frames accounting for the fact that population benchmarks for the different sampling frames were not available at the state or regional level. Estimates of the number of phone numbers for the landline and mobile phone frames used to calculate the differing probabilities of selection by frame, for New South Wales (NSW) and by stratum, were obtained by apportioning Australian estimates as none were available for NSW. The weighting strategy was then developed by calculating person selection probabilities, selection weights, applying a constant composite factor to the dual-phone users sample weights, and benchmarking to the latest NSW population by age group, sex and stratum. Data from the NSWPHS for the first quarter of 2012 was used to test the weighting strategy. This consisted of data on 3395 respondents with 2171 (64%) from the landline frame and 1224 (36%) from the mobile frame. However, in order to calculate the weights, data needed to be available for all core weighting variables and so 3378 respondents, 2933 adults and 445 children, had sufficient data to be included. Average person weights were 3.3 times higher for the mobile-only respondents, 1.3 times higher for the landline-only respondents and 1.7 times higher for dual-phone users in the mobile frame compared to the dual-phone users in the landline frame. The overall weight effect for the first quarter of 2012 was 1.93 and the coefficient of variation of the weights was 0.96. The weight effects for 2012 were similar to, and in many cases less than, the effects found in the corresponding quarter of the 2011 NSWPHS when only a landline based sample was used. The inclusion of mobile phone numbers, through an overlapping dual-frame design, improved the coverage of the survey and an appropriate weighting procedure is feasible, although it added substantially to the complexity of the weighting strategy. Access to accurate Australian, State and Territory estimates of the number of landline and mobile phone numbers and type of phone use by at least age group and sex would greatly assist in the weighting of dual-frame surveys in Australia.
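
    As a rough illustration of the kind of composite dual-frame weighting described above, the sketch below (Python) combines frame-specific selection probabilities into a design weight, applying a constant composite factor to dual-phone users. The frame sizes, sampling counts and the 0.5 factor are hypothetical placeholders, not the NSWPHS values.

```python
def selection_probability(n_sampled, frame_size, devices, eligible_people):
    """Approximate chance a given person is reached through one frame:
    the chance any one of their numbers is dialled, times the number of
    in-scope devices, divided by the people sharing them (1 for a personal mobile)."""
    return (n_sampled / frame_size) * devices / eligible_people


def dual_frame_weight(p_sampled_frame, sampled_frame, is_dual_user, composite_factor=0.5):
    """Design weight for a respondent observed in exactly one frame.

    Dual-phone users could have been reached via either frame, so their
    inverse-probability weight is scaled by a constant composite factor
    (lambda for the landline frame, 1 - lambda for the mobile frame) to
    avoid double counting across the overlapping frames.
    """
    base = 1.0 / p_sampled_frame
    if not is_dual_user:
        return base
    return composite_factor * base if sampled_frame == "landline" else (1.0 - composite_factor) * base


# Illustrative dual-phone user interviewed through the landline frame
p_landline = selection_probability(20000, 2.5e6, devices=1, eligible_people=2)
print(round(dual_frame_weight(p_landline, "landline", is_dual_user=True), 1))
```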

  20. Littelmann path model for geometric crystals, Whittaker functions on Lie groups and Brownian motion

    NASA Astrophysics Data System (ADS)

    Chhaibi, Reda

    2013-02-01

    Generally speaking, this thesis focuses on the interplay between the representations of Lie groups and probability theory. It subdivides into essentially three parts. In a first, rather algebraic part, we construct a path model for geometric crystals in the sense of Berenstein and Kazhdan, for complex semi-simple Lie groups. We will mainly describe the algebraic structure, its natural morphisms and parameterizations. The theory of total positivity will play a particularly important role. Then we anticipate the probabilistic part by exhibiting a canonical measure on geometric crystals. It uses as ingredients the superpotential for the flag manifold and a measure invariant under the crystal actions. The image measure under the weight map plays the role of the Duistermaat-Heckman measure. Its Laplace transform defines Whittaker functions, providing an interesting formula for all Lie groups. Then it appears clearly that Whittaker functions are to geometric crystals what characters are to combinatorial crystals. The Littlewood-Richardson rule is also presented. Finally we present the probabilistic approach that allows us to find the canonical measure. It is based on the fundamental idea that the Wiener measure will induce the adequate measure on the algebraic structures through the path model. In the last chapter, we show how our geometric model degenerates to the continuous classical Littelmann path model and thus recovers known results. For example, the canonical measure on a geometric crystal of highest weight degenerates into a uniform measure on a polytope, and recovers the parameterizations of continuous crystals.

  1. Total Bregman Divergence and its Applications to Shape Retrieval.

    PubMed

    Liu, Meizhu; Vemuri, Baba C; Amari, Shun-Ichi; Nielsen, Frank

    2010-01-01

    Shape database search is ubiquitous in the world of biometric systems, CAD systems, etc. Shape data in these domains is experiencing an explosive growth and usually requires search of whole shape databases to retrieve the best matches with accuracy and efficiency for a variety of tasks. In this paper, we present a novel divergence measure between any two given points in [Formula: see text] or two distribution functions. This divergence measures the orthogonal distance between the tangent to the convex function (used in the definition of the divergence) at one of its input arguments and its second argument. This is in contrast to the ordinate distance taken in the usual definition of the Bregman class of divergences [4]. We use this orthogonal distance to redefine the Bregman class of divergences and develop a new theory for estimating the center of a set of vectors as well as probability distribution functions. The new class of divergences is dubbed the total Bregman divergence (TBD). We present the l1-norm-based TBD center, dubbed the t-center, which is then used as a cluster center of a class of shapes. The t-center is a weighted mean, and this weight is small for noise and outliers. We present a shape retrieval scheme using TBD and the t-center for representing the classes of shapes from the MPEG-7 database and compare the results with other state-of-the-art methods in the literature.
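
    The sketch below illustrates, in Python, the distinction drawn above between the ordinate distance of an ordinary Bregman divergence and the orthogonal distance of the total Bregman divergence, using one common formulation in which the ordinate gap is divided by sqrt(1 + ||grad f(y)||^2); the exact normalization and the l1-based t-center of the paper are not reproduced here.

```python
import numpy as np

def bregman(x, y, f, grad_f):
    """Ordinary Bregman divergence: ordinate gap between f(x) and the
    tangent to f at y, evaluated at x."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

def total_bregman(x, y, f, grad_f):
    """Total Bregman divergence: the same gap measured orthogonally to the
    tangent, i.e. the ordinate gap divided by sqrt(1 + ||grad f(y)||^2)."""
    return bregman(x, y, f, grad_f) / np.sqrt(1.0 + np.dot(grad_f(y), grad_f(y)))

# Example with f(x) = ||x||^2, whose ordinary Bregman divergence is the
# squared Euclidean distance.
f = lambda v: np.dot(v, v)
grad_f = lambda v: 2.0 * v
x, y = np.array([1.0, 2.0]), np.array([0.5, 1.0])
print(bregman(x, y, f, grad_f), total_bregman(x, y, f, grad_f))
```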

  2. Association Analysis of Genomic Loci Important for Grain Weight Control in Elite Common Wheat Varieties Cultivated with Variable Water and Fertiliser Supply

    PubMed Central

    Zhang, Kunpu; Wang, Junjun; Zhang, Liyi; Rong, Chaowu; Zhao, Fengwu; Peng, Tao; Li, Huimin; Cheng, Dongmei; Liu, Xin; Qin, Huanju; Zhang, Aimin; Tong, Yiping; Wang, Daowen

    2013-01-01

    Grain weight, an essential yield component, is under strong genetic control and markedly influenced by the environment. Here, by genome-wide association analysis with a panel of 94 elite common wheat varieties, 37 loci were found significantly associated with thousand-grain weight (TGW) in one or more environments differing in water and fertiliser levels. Five loci were stably associated with TGW under all 12 environments examined. Their elite alleles had positive effects on TGW. Four, two, three, and two loci were consistently associated with TGW in the irrigated and fertilised (IF), rainfed (RF), reduced nitrogen (RN), and reduced phosphorus (RP) environments. The elite alleles of the IF-specific loci enhanced TGW under well-resourced conditions, whereas those of the RF-, RN-, or RP-specific loci conferred tolerance to the TGW decrease when irrigation, nitrogen, or phosphorus were reduced. Moreover, the elite alleles of the environment-independent and -specific loci often acted additively to enhance TGW. Four additional loci were found associated with TGW in specific locations, one of which was shown to contribute to the TGW difference between two experimental sites. Further analysis of 14 associated loci revealed that nine affected both grain length and width, whereas the remaining loci influenced either grain length or width, indicating that these loci control grain weight by regulating kernel size. Finally, the elite allele of Xpsp3152 frequently co-segregated with the larger grain haplotype of TaGW2-6A, suggesting probable genetic and functional linkages between Xpsp3152 and GW2 that are important for grain weight control in cereal plants. Our study provides new knowledge on TGW control in elite common wheat lines, which may aid the improvement of wheat grain weight trait in further research. PMID:23469248

  3. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  4. Effect of bow-type initial imperfection on reliability of minimum-weight, stiffened structural panels

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac

    1993-01-01

    Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.

  5. Brain networks for confidence weighting and hierarchical inference during probabilistic learning

    PubMed Central

    Meyniel, Florent; Dehaene, Stanislas

    2017-01-01

    Learning is difficult when the world fluctuates randomly and ceaselessly. Classical learning algorithms, such as the delta rule with constant learning rate, are not optimal. Mathematically, the optimal learning rule requires weighting prior knowledge and incoming evidence according to their respective reliabilities. This “confidence weighting” implies the maintenance of an accurate estimate of the reliability of what has been learned. Here, using fMRI and an ideal-observer analysis, we demonstrate that the brain’s learning algorithm relies on confidence weighting. While in the fMRI scanner, human adults attempted to learn the transition probabilities underlying an auditory or visual sequence, and reported their confidence in those estimates. They knew that these transition probabilities could change simultaneously at unpredicted moments, and therefore that the learning problem was inherently hierarchical. Subjective confidence reports tightly followed the predictions derived from the ideal observer. In particular, subjects managed to attach distinct levels of confidence to each learned transition probability, as required by Bayes-optimal inference. Distinct brain areas tracked the likelihood of new observations given current predictions, and the confidence in those predictions. Both signals were combined in the right inferior frontal gyrus, where they operated in agreement with the confidence-weighting model. This brain region also presented signatures of a hierarchical process that disentangles distinct sources of uncertainty. Together, our results provide evidence that the sense of confidence is an essential ingredient of probabilistic learning in the human brain, and that the right inferior frontal gyrus hosts a confidence-based statistical learning algorithm for auditory and visual sequences. PMID:28439014

  6. Direct-method SAD phasing with partial-structure iteration: towards automation.

    PubMed

    Wang, J W; Chen, J R; Gu, Y X; Zheng, C D; Fan, H F

    2004-11-01

    The probability formula of direct-method SAD (single-wavelength anomalous diffraction) phasing proposed by Fan & Gu (1985, Acta Cryst. A41, 280-284) contains partial-structure information in the form of a Sim-weighting term. Previously, only the substructure of anomalous scatterers has been included in this term. In the case that the subsequent density modification and model building yields only structure fragments, which do not straightforwardly lead to the complete solution, the partial structure can be fed back into the Sim-weighting term of the probability formula in order to strengthen its phasing power and to benefit the subsequent automatic model building. The procedure has been tested with experimental SAD data from two known proteins with copper and sulfur as the anomalous scatterers.

  7. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
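
    A minimal Python sketch of the parameter-estimation step is given below: raw m-tuple counts are turned into emission probabilities either without smoothing, with a standard pseudocount, or with a simple Gaussian kernel over tuple indices. The Gaussian branch is only an illustrative stand-in for the authors' smoothing procedure.

```python
import numpy as np

def smoothed_tuple_probs(counts, method="pseudocount", alpha=1.0, sigma=1.0):
    """Estimate m-tuple emission probabilities from raw counts.

    counts : 1-D array of observed frequencies for each possible m-tuple
             at one position of the weight array model
    method : "raw"         -> maximum-likelihood estimate (may contain zeros)
             "pseudocount" -> add a constant alpha to every count
             "gaussian"    -> illustrative kernel smoothing over neighbouring
                              tuple indices (not the authors' exact scheme)
    """
    counts = np.asarray(counts, dtype=float)
    if method == "raw":
        probs = counts
    elif method == "pseudocount":
        probs = counts + alpha
    elif method == "gaussian":
        idx = np.arange(len(counts))
        kernel = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
        probs = kernel @ counts
    else:
        raise ValueError(method)
    return probs / probs.sum()

print(smoothed_tuple_probs([5, 0, 3, 0], method="pseudocount"))
```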

  8. MR findings of Minamata disease--organic mercury poisoning.

    PubMed

    Korogi, Y; Takahashi, M; Okajima, T; Eto, K

    1998-01-01

    We describe MR findings in patients with Minamata disease who have been followed for a long time. All patients examined were affected after daily eating of a large quantity of methylmercury-contaminated seafood, from 1955 to 1958, and showed typical neurological findings. On MR images, the visual cortex, the cerebellar vermis and hemispheres, and the postcentral cortex are significantly atrophic in Minamata disease. The visual cortex is slightly hypointense on T1-weighted images and hyperintense on T2-weighted images, probably representing the pathologic changes of status spongiosus. MRI can demonstrate the lesions located in the calcarine area, cerebellum, and postcentral gyri, which are probably related to three of the characteristic manifestations of this disease: the constriction of the visual fields, ataxia, and sensory disturbance, respectively.

  9. Dietary macronutrients and food consumption as determinants of long-term weight change in adult populations: a systematic literature review

    PubMed Central

    Fogelholm, Mikael; Anderssen, Sigmund; Gunnarsdottir, Ingibjörg; Lahti-Koski, Marjaana

    2012-01-01

    This systematic literature review examined the role of dietary macronutrient composition, food consumption and dietary patterns in predicting weight or waist circumference (WC) change, with and without prior weight reduction. The literature search covered year 2000 and onwards. Prospective cohort studies, case–control studies and interventions were included. The studies had adult (18–70 y), mostly Caucasian participants. Out of a total of 1,517 abstracts, 119 full papers were identified as potentially relevant. After a careful scrutiny, 50 papers were quality graded as A (highest), B or C. Forty-three papers with grading A or B were included in evidence grading, which was done separately for all exposure-outcome combinations. The grade of evidence was classified as convincing, probable, suggestive or no conclusion. We found probable evidence for high intake of dietary fibre and nuts predicting less weight gain, and for high intake of meat in predicting more weight gain. Suggestive evidence was found for a protective role against increasing weight from whole grains, cereal fibre, high-fat dairy products and high scores in an index describing a prudent dietary pattern. Likewise, there was suggestive evidence for both fibre and fruit intake in protection against larger increases in WC. Also suggestive evidence was found for high intake of refined grains, and sweets and desserts in predicting more weight gain, and for refined (white) bread and high energy density in predicting larger increases in WC. The results suggested that the proportion of macronutrients in the diet was not important in predicting changes in weight or WC. In contrast, plenty of fibre-rich foods and dairy products, and less refined grains, meat and sugar-rich foods and drinks were associated with less weight gain in prospective cohort studies. The results on the role of dietary macronutrient composition in prevention of weight regain (after prior weight loss) were inconclusive. PMID:22893781

  10. Automated segmentation of neuroanatomical structures in multispectral MR microscopy of the mouse brain.

    PubMed

    Ali, Anjum A; Dale, Anders M; Badea, Alexandra; Johnson, G Allan

    2005-08-15

    We present the automated segmentation of magnetic resonance microscopy (MRM) images of the C57BL/6J mouse brain into 21 neuroanatomical structures, including the ventricular system, corpus callosum, hippocampus, caudate putamen, inferior colliculus, internal capsule, globus pallidus, and substantia nigra. The segmentation algorithm operates on multispectral, three-dimensional (3D) MR data acquired at 90-microm isotropic resolution. Probabilistic information used in the segmentation is extracted from training datasets of T2-weighted, proton density-weighted, and diffusion-weighted acquisitions. Spatial information is employed in the form of prior probabilities of occurrence of a structure at a location (location priors) and the pairwise probabilities between structures (contextual priors). Validation using standard morphometry indices shows good consistency between automatically segmented and manually traced data. Results achieved in the mouse brain are comparable with those achieved in human brain studies using similar techniques. The segmentation algorithm shows excellent potential for routine morphological phenotyping of mouse models.

  11. Understanding the universality of the immigrant health paradox: the Spanish perspective.

    PubMed

    Speciale, Anna Maria; Regidor, Enrique

    2011-06-01

    This study investigated the existence of an immigrant health paradox by evaluating the relationship between region of origin and the perinatal indicators of low birth weight and preterm birth in Spain. The data consist of individual records from the 2006 National Birth Registry of Spain. Mother's origin was divided into eleven groups based on geographic region. We calculated the frequency of low birth weight (LBW) and prematurity. Logistic regressions were conducted to evaluate the relationships between origin and LBW and between origin and prematurity. After adjusting for socio-demographic variables, mothers from Sub-Saharan Africa had a higher probability of having a LBW neonate than Spanish mothers, whereas mothers from the remaining regions had a lower probability. No differences were found in prematurity in babies born to foreign mothers when compared to babies born to Spanish mothers. While our findings largely support an immigrant paradox with regard to low birth weight, they also suggest that region of origin may play an important role.

  12. Estimating disease prevalence from two-phase surveys with non-response at the second phase

    PubMed Central

    Gao, Sujuan; Hui, Siu L.; Hall, Kathleen S.; Hendrie, Hugh C.

    2010-01-01

    In this paper we compare several methods for estimating population disease prevalence from data collected by two-phase sampling when there is non-response at the second phase. The traditional weighting type estimator requires the missing completely at random assumption and may yield biased estimates if the assumption does not hold. We review two approaches and propose one new approach to adjust for non-response assuming that the non-response depends on a set of covariates collected at the first phase: an adjusted weighting type estimator using estimated response probability from a response model; a modelling type estimator using predicted disease probability from a disease model; and a regression type estimator combining the adjusted weighting type estimator and the modelling type estimator. These estimators are illustrated using data from an Alzheimer’s disease study in two populations. Simulation results are presented to investigate the performances of the proposed estimators under various situations. PMID:10931514
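
    The adjusted weighting-type estimator can be sketched as follows (Python): the response probability is modelled from phase-1 covariates and responders are reweighted by its inverse before the prevalence is computed. Variable names and the logistic response model are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adjusted_weighted_prevalence(X1, responded, disease, phase2_weight):
    """Adjusted weighting-type estimator for a two-phase prevalence survey.

    X1            : phase-1 covariates for everyone selected into phase 2
    responded     : 1 if the subject actually completed phase 2, else 0
    disease       : disease indicator, used only for responders
    phase2_weight : design weight from the two-phase sampling itself

    Modelling the response probability from phase-1 covariates and
    reweighting responders by its inverse avoids relying on the
    missing-completely-at-random assumption.
    """
    p_resp = LogisticRegression().fit(X1, responded).predict_proba(X1)[:, 1]
    mask = responded == 1
    w = phase2_weight[mask] / p_resp[mask]
    return np.sum(w * disease[mask]) / np.sum(w)

# Toy illustration with simulated phase-1 covariates and covariate-dependent non-response
rng = np.random.default_rng(1)
X1 = rng.normal(size=(500, 2))
responded = (rng.uniform(size=500) < 1 / (1 + np.exp(-X1[:, 0]))).astype(int)
disease = (rng.uniform(size=500) < 0.2).astype(int)
print(adjusted_weighted_prevalence(X1, responded, disease, np.ones(500)))
```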

  13. Negative Social Evaluation Impairs Executive Functions in Adolescents With Excess Weight: Associations With Autonomic Responses.

    PubMed

    Padilla, María Moreno; Fernández-Serrano, María J; Verdejo García, Antonio; Reyes Del Paso, Gustavo A

    2018-06-22

    Adolescents with excess weight suffer social stress more frequently than their peers with normal weight. To examine the impact of social stress, specifically negative social evaluation, on executive functions in adolescents with excess weight. We also examined associations between subjective stress, autonomic reactivity, and executive functioning. Sixty adolescents (aged 13-18 years) classified into excess weight or normal weight groups participated. We assessed executive functioning (working memory, inhibition, and shifting) and subjective stress levels before and after the Trier Social Stress Task (TSST). The TSST was divided into two phases according to the feedback of the audience: positive and negative social evaluation. Heart rate and skin conductance were recorded. Adolescents with excess weight showed poorer executive functioning after exposure to TSST compared with adolescents with normal weight. Subjective stress and autonomic reactivity were also greater in adolescents with excess weight than adolescents with normal weight. Negative social evaluation was associated with worse executive functioning and increased autonomic reactivity in adolescents with excess weight. The findings suggest that adolescents with excess weight are more sensitive to social stress triggered by negative evaluations. Social stress elicited deterioration of executive functioning in adolescents with excess weight. Evoked increases in subjective stress and autonomic responses predicted decreased executive function. Deficits in executive skills could reduce cognitive control abilities and lead to overeating in adolescents with excess weight. Strategies to cope with social stress to prevent executive deficits could be useful to prevent future obesity in this population.

  14. Bone mineral density and nutritional status in children with quadriplegic cerebral palsy.

    PubMed

    Alvarez Zaragoza, Citlalli; Vasquez Garibay, Edgar Manuel; García Contreras, Andrea A; Larrosa Haro, Alfredo; Romero Velarde, Enrique; Rea Rosas, Alejandro; Cabrales de Anda, José Luis; Vega Olea, Israel

    2018-03-04

    This study demonstrated the relationship of low bone mineral density (BMD) with the degree of motor impairment, method of feeding, anthropometric indicators, and malnutrition in children with quadriplegic cerebral palsy (CP). The control of these factors could optimize adequate bone mineralization, avoid the risk of osteoporosis, and would improve the quality of life. The purpose of the study is to explore the relationship between low BMD and nutritional status in children with quadriplegic CP. A cross-sectional analytical study included 59 participants aged 6 to 18 years with quadriplegic CP. Weight and height were obtained with alternative measurements, and weight/age, height/age, and BMI/age indexes were estimated. The BMD measurement obtained from the lumbar spine was expressed in grams per square centimeter and Z score (Z). Unpaired Student's t tests, chi-square tests, odds ratios, Pearson's correlations, and linear regressions were performed. The mean of BMD Z score was lower in adolescents than in school-aged children (p = 0.002). Patients with low BMD were at the most affected levels of the Gross Motor Function Classification System (GMFCS). Participants at level V of the GMFCS were more likely to have low BMD than levels III and IV [odds ratio (OR) = 5.8 (confidence interval [CI] 95% 1.4, 24.8), p = 0.010]. There was a higher probability of low BMD in tube-feeding patients [OR = 8.6 (CI 95% 1.0, 73.4), p = 0.023]. The probability of low BMD was higher in malnourished children with weight/age and BMI indices [OR = 11.4 (1.3, 94), p = 0.009] and [OR = 9.4 (CI 95% 1.1, 79.7), p = 0.017], respectively. There was a significant relationship between low BMD, degree of motor impairment, method of feeding, and malnutrition. Optimizing these factors could reduce the risk of osteopenia and osteoporosis and attain a significant improvement of quality of life in children with quadriplegic CP.

  15. Approach range and velocity determination using laser sensors and retroreflector targets

    NASA Technical Reports Server (NTRS)

    Donovan, William J.

    1991-01-01

    A laser docking sensor study is currently in the third year of development. The design concept is considered to be validated. The concept is based on using standard radar techniques to provide range, velocity, and bearing information. Multiple targets are utilized to provide relative attitude data. The design requirements were to utilize existing space-qualifiable technology and require low system power, weight, and size yet, operate from 0.3 to 150 meters with a range accuracy greater than 3 millimeters and a range rate accuracy greater than 3 mm per second. The field of regard for the system is +/- 20 deg. The transmitter and receiver design features a diode laser, microlens beam steering, and power control as a function of range. The target design consists of five target sets, each having seven 3-inch retroreflectors, arranged around the docking port. The target map is stored in the sensor memory. Phase detection is used for ranging, with the frequency range-optimized. Coarse bearing measurement is provided by the scanning system (one set of binary optics) angle. Fine bearing measurement is provided by a quad detector. A MIL-STD-1750 A/B computer is used for processing. Initial test results indicate a probability of detection greater than 99 percent and a probability of false alarm less than 0.0001. The functional system is currently at the MIT/Lincoln Lab for demonstration.

  16. Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid

    2016-08-01

    This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) during a training period. CRPS is a scoring rule for distributional forecasts. In Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores including the Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and show the same evaluation scores, but PSO can do this with a feasible random first guess and much less computational complexity.
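
    A compact sketch of NGR fitting by CRPS minimization is shown below in Python, using the closed-form CRPS of a Gaussian predictive distribution; a generic optimizer stands in for the BFGS and PSO searches compared in the paper, and the training data are synthetic.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive distribution at observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_ngr(ens_mean, ens_var, obs):
    """Fit NGR coefficients (a, b, c, d) by minimizing mean CRPS over a
    training period, with mu = a + b * ens_mean and sigma^2 = c + d * ens_var.
    A simplex optimizer stands in here for the BFGS/PSO searches."""
    def mean_crps(params):
        a, b, c, d = params
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))  # keep variance positive
        return crps_gaussian(obs, a + b * ens_mean, sigma).mean()
    return minimize(mean_crps, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

# Synthetic training data: ensemble mean/variance and verifying observations
rng = np.random.default_rng(0)
ens_mean = rng.normal(15, 5, 200)
ens_var = rng.uniform(0.5, 2.0, 200)
obs = ens_mean + rng.normal(0.5, 1.5, 200)
print(fit_ngr(ens_mean, ens_var, obs))
```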

  17. CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES.

    PubMed

    Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J

    2017-04-01

    Case-control studies are designed towards studying associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.

  18. Spatial vent opening probability map of El Hierro Island (Canary Islands, Spain)

    NASA Astrophysics Data System (ADS)

    Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro

    2013-04-01

    The assessment of the probable spatial distribution of new eruptions is useful for managing and reducing volcanic risk. It can be achieved in different ways, but it becomes especially hard when dealing with less-studied, poorly monitored volcanic areas characterized by infrequent activity, such as El Hierro. Even though it is the youngest of the Canary Islands, before the 2011 eruption in the "Las Calmas Sea", El Hierro had been the least studied volcanic island of the Canaries; historically, more attention has been devoted to La Palma, Tenerife and Lanzarote. We propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the mathematical analysis of the volcano-structural data collected mostly on the Island and, secondly, on the submerged part of the volcano, up to a distance of ~10-20 km from the coast. The volcano-structural data were collected through new fieldwork measurements, bathymetric information, and analysis of geological maps, orthophotos and aerial photographs. They have been divided into different datasets and converted into separate and weighted probability density functions, which were then included in a non-homogeneous Poisson process to produce the volcanic susceptibility map. Future eruptive activity on El Hierro is expected to concentrate mainly on the rift zones, extending also beyond the shoreline. The highest probabilities of hosting new eruptions are located in the distal parts of the South and West rifts, with the highest probability reached in the south-western area of the West rift. High probabilities are also observed in the Northeast and South rifts, and the submarine parts of the rifts. This map represents the first effort to deal with the volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency plans and civil defence actions.
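
    One way to realize the "separate and weighted probability density functions" step is sketched below in Python: each structural dataset contributes a kernel density estimate, the estimates are combined with reliability weights, and the result is normalized into a spatial probability surface. Bandwidth choices and the non-homogeneous Poisson step of the actual method are omitted, and the datasets and weights in the example are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def susceptibility_map(datasets, weights, grid_x, grid_y):
    """Combine weighted probability density functions of vent-opening
    indicators (e.g. vents, dykes, faults) into one susceptibility surface.

    datasets : list of (2, n_i) arrays of easting/northing coordinates
    weights  : relative reliability assigned to each dataset
    """
    xx, yy = np.meshgrid(grid_x, grid_y)
    positions = np.vstack([xx.ravel(), yy.ravel()])
    surface = np.zeros(positions.shape[1])
    for data, w in zip(datasets, weights):
        surface += w * gaussian_kde(data)(positions)
    surface /= surface.sum()              # normalize to a spatial probability map
    return surface.reshape(xx.shape)

# Toy usage with two hypothetical structural datasets
rng = np.random.default_rng(2)
vents = rng.normal([2.0, 1.0], 0.3, size=(50, 2)).T
dykes = rng.normal([1.5, 1.5], 0.5, size=(80, 2)).T
grid = np.linspace(0, 3, 60)
surface = susceptibility_map([vents, dykes], [0.6, 0.4], grid, grid)
print(surface.shape, round(surface.sum(), 6))
```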

  19. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  20. Transition Probabilities for Hydrogen-Like Atoms

    NASA Astrophysics Data System (ADS)

    Jitrik, Oliverio; Bunge, Carlos F.

    2004-12-01

    E1, M1, E2, M2, E3, and M3 transition probabilities for hydrogen-like atoms are calculated with point-nucleus Dirac eigenfunctions for Z=1-118 and up to large quantum numbers l=25 and n=26, increasing existing data more than a thousandfold. A critical evaluation of the accuracy shows a higher reliability with respect to previous works. Tables for hydrogen containing a subset of the results are given explicitly, listing the states involved in each transition, wavelength, term energies, statistical weights, transition probabilities, oscillator strengths, and line strengths. The complete results, including 1 863 574 distinct transition probabilities, lifetimes, and branching fractions are available at http://www.fisica.unam.mx/research/tables/spectra/1el

  1. Parental Predictions and Perceptions Regarding Long-Term Childhood Obesity-Related Health Risks

    PubMed Central

    Wright, Davene R.; Lozano, Paula; Dawson-Hahn, Elizabeth; Christakis, Dimitri A.; Haaland, Wren; Basu, Anirban

    2016-01-01

    Objectives To assess how parents perceive long-term risks for developing obesity-related chronic health conditions. Methods A web-based nationally representative survey was administered to 502 U.S. parents with a 5–12-year-old child. Parents reported whether their child was most likely to be at a healthy weight or overweight, and the probability that their child would develop hypertension, heart disease, depression, or type 2 diabetes in adulthood. Responses of parents of children with overweight and obesity were compared to those of parents of healthy weight children using multivariate models. Results The survey had an overall response rate of 39.2%. The mean (SD) unadjusted parent predicted health risks were 15.4% (17.7%), 11.2% (14.7%), 12.5% (16.2%), and 12.1% (16.1%) for hypertension, heart disease, depression, and diabetes, respectively. Despite under-perceiving their child’s current BMI class, parents of children with obesity estimate their children to be at greater risk for obesity-related health conditions than parents of healthy weight children by 5–6 percentage points. Having a family history of a chronic disease, higher quality of care, and older parent age were also significant predictors of estimating higher risk probabilities. Conclusions Despite evidence that parents of overweight children may not perceive these children as being overweight, parents unexpectedly estimate greater future risk of weight-related health conditions for these children. Focusing communication about weight on screening for and reducing the risk of weight-related diseases may prove useful in engaging parents and children in weight management. PMID:26875508

  2. Estimating the effect of gang membership on nonviolent and violent delinquency: a counterfactual analysis.

    PubMed

    Barnes, J C; Beaver, Kevin M; Miller, J Mitchell

    2010-01-01

    This study reconsiders the well-known link between gang membership and criminal involvement. Recently developed analytical techniques enabled the approximation of an experimental design to determine whether gang members, after being matched with similarly situated nongang members, exhibited greater involvement in nonviolent and violent delinquency. Findings indicated that while gang membership is a function of self-selection, selection effects alone do not account for the greater involvement in delinquency exhibited by gang members. After propensity score matching was employed, gang members maintained a greater involvement in both nonviolent and violent delinquency when measured cross-sectionally, but only violent delinquency when measured longitudinally. Additional analyses using inverse probability of treatment weights reaffirmed these conclusions. © 2010 Wiley-Liss, Inc.
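
    The inverse probability of treatment weighting mentioned above can be sketched as below (Python): a propensity model predicts membership from pre-treatment covariates and each subject is weighted by the inverse probability of the treatment actually received. The propensity model and variable names are illustrative, not those of the original analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_effect(X, treatment, outcome, stabilized=True):
    """Inverse-probability-of-treatment-weighted difference in outcome means.

    X         : pre-treatment covariates (e.g. self-selection factors)
    treatment : 1 = member of the group of interest, 0 = comparison group
    outcome   : e.g. a delinquency score
    """
    ps = LogisticRegression().fit(X, treatment).predict_proba(X)[:, 1]
    w = np.where(treatment == 1, 1 / ps, 1 / (1 - ps))
    if stabilized:
        # stabilized weights reduce variance without changing the estimand
        w *= np.where(treatment == 1, treatment.mean(), 1 - treatment.mean())
    treated = treatment == 1
    return (np.average(outcome[treated], weights=w[treated])
            - np.average(outcome[~treated], weights=w[~treated]))
```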

  3. The impact of the minimum wage on health.

    PubMed

    Andreyeva, Elena; Ukert, Benjamin

    2018-03-07

    This study evaluates the effect of minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System, and employ a difference-in-differences strategy that utilizes time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases days with functional limitations while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and decrease in fruit and vegetable intake are driven by the older population, married, and whites. The improvement in self-reported health is especially strong among non-whites, females, and married.

  4. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
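
    The specific correspondence claimed above can be checked numerically: if the force-magnitude density is exponential and the 2-D distribution is isotropic, integrating out one Cartesian component leaves a density proportional to the modified Bessel function of the second kind, K0. The short Python check below assumes p(F) proportional to exp(-F) with unit scale.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

# For an isotropic 2-D force distribution with exponential magnitude density
# p(F) = exp(-F), the joint density is exp(-F) / (2*pi*F) with
# F = sqrt(fx^2 + fy^2). Integrating out fy should give a single-component
# density proportional to K0(|fx|).
def cartesian_density(fx):
    integrand = lambda fy: np.exp(-np.hypot(fx, fy)) / (2 * np.pi * np.hypot(fx, fy))
    return 2 * quad(integrand, 0.0, np.inf)[0]   # factor 2: symmetric in fy

for fx in (0.5, 1.0, 2.0):
    # the two columns should agree: numeric marginal vs K0(fx) / pi
    print(fx, cartesian_density(fx), k0(fx) / np.pi)
```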

  5. Accounting for length-bias and selection effects in estimating the distribution of menstrual cycle length

    PubMed Central

    Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Thomas A.

    2015-01-01

    Prospective pregnancy studies are a valuable source of longitudinal data on menstrual cycle length. However, care is needed when making inferences of such renewal processes. For example, accounting for the sampling plan is necessary for unbiased estimation of the menstrual cycle length distribution for the study population. If couples can enroll when they learn of the study as opposed to waiting for the start of a new menstrual cycle, then due to length-bias, the enrollment cycle will be stochastically larger than the general run of cycles, a typical property of prevalent cohort studies. Furthermore, the probability of enrollment can depend on the length of time since a woman’s last menstrual period (a backward recurrence time), resulting in selection effects. We focus on accounting for length-bias and selection effects in the likelihood for enrollment menstrual cycle length, using a recursive two-stage approach wherein we first estimate the probability of enrollment as a function of the backward recurrence time and then use it in a likelihood with sampling weights that account for length-bias and selection effects. To broaden the applicability of our methods, we augment our model to incorporate a couple-specific random effect and time-independent covariate. A simulation study quantifies performance for two scenarios of enrollment probability when proper account is taken of sampling plan features. In addition, we estimate the probability of enrollment and the distribution of menstrual cycle length for the study population of the Longitudinal Investigation of Fertility and the Environment Study. PMID:25027273

  6. Auditory Weighting Functions and TTS/PTS Exposure Functions for Marine Mammals Exposed to Underwater Noise

    DTIC Science & Technology

    2016-12-01

    [Report excerpt fragments] ...weighting functions utilized the "M-weighting" functions at lower frequencies, where no TTS existed at that time. Since derivation of the Phase 2... resulting shapes of the weighting functions (left) and exposure functions (right). The arrows indicate the direction of change when the designated parameter... thresholds are in dB re 1 μPa. Species group designations for Navy Phase 3 auditory weighting functions.

  7. Lateralization of temporal lobe epilepsy by multimodal multinomial hippocampal response-driven models.

    PubMed

    Nazem-Zadeh, Mohammad-Reza; Elisevich, Kost V; Schwalb, Jason M; Bagher-Ebadian, Hassan; Mahmoudi, Fariborz; Soltanian-Zadeh, Hamid

    2014-12-15

    Multiple modalities are used in determining laterality in mesial temporal lobe epilepsy (mTLE). It is unclear how much different imaging modalities should be weighted in decision-making. The purpose of this study is to develop response-driven multimodal multinomial models for lateralization of epileptogenicity in mTLE patients based upon imaging features in order to maximize the accuracy of noninvasive studies. The volumes, means and standard deviations of FLAIR intensity and means of normalized ictal-interictal SPECT intensity of the left and right hippocampi were extracted from preoperative images of a retrospective cohort of 45 mTLE patients with Engel class I surgical outcomes, as well as images of a cohort of 20 control, nonepileptic subjects. Using multinomial logistic function regression, the parameters of various univariate and multivariate models were estimated. Based on the Bayesian model averaging (BMA) theorem, response models were developed as compositions of independent univariate models. A BMA model composed of posterior probabilities of univariate response models of hippocampal volumes, means and standard deviations of FLAIR intensity, and means of SPECT intensity with the estimated weighting coefficients of 0.28, 0.32, 0.09, and 0.31, respectively, as well as a multivariate response model incorporating all mentioned attributes, demonstrated complete reliability by achieving a probability of detection of one with no false alarms to establish proper laterality in all mTLE patients. The proposed multinomial multivariate response-driven model provides a reliable lateralization of mesial temporal epileptogenicity including those patients who require phase II assessment. Copyright © 2014 Elsevier B.V. All rights reserved.
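
    The reported combination rule amounts to a weighted average of the univariate model posteriors, which the short Python sketch below reproduces; the patient posteriors in the example are hypothetical, and only the weighting coefficients are taken from the abstract.

```python
import numpy as np

def bma_left_probability(posteriors, weights):
    """Combine univariate response models by Bayesian model averaging.

    posteriors : per-model posterior probabilities that epileptogenicity is
                 left-lateralized (e.g. from hippocampal volume, FLAIR mean,
                 FLAIR standard deviation and SPECT mean models)
    weights    : BMA weighting coefficients for those models (the abstract
                 reports roughly 0.28, 0.32, 0.09 and 0.31)
    """
    posteriors, weights = np.asarray(posteriors), np.asarray(weights)
    return float(np.dot(weights, posteriors) / weights.sum())

# Hypothetical patient: three of the four modalities favour the left side
print(bma_left_probability([0.9, 0.8, 0.4, 0.85], [0.28, 0.32, 0.09, 0.31]))
```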

  8. Bioluminescent Study of the Distribution of High-Molecular-Weight Protein Fraction of Cellex Daily Preparation in the Brain after Intranasal Administration.

    PubMed

    Baklaushev, V P; Yusubalieva, G M; Burenkov, M S; Mel'nikov, P A; Bozhko, E A; Mentyukov, G A; Lavrent'eva, L S; Sokolov, M A; Chekhonin, V P

    2017-12-01

    Permeability of the blood-brain barrier for protein fractions 50-100 kDa (PF50-100) of Cellex Daily preparation labeled with fluorescent tracer FITC and non-conjugated FITC were compared after intranasal administration of the preparations to healthy rats. Fluorimetrical analysis of the serum and cerebrospinal fluid samples showed that Cellex Daily PF50-100-FITC administered intranasally penetrated into the blood and cerebrospinal fluid with maximum accumulation in 2 h after administration and persists in the circulation for 24 h probably due to binding with plasma proteins. The differences in the kinetic profile of PF50-100-FITC and free FITC indirectly suggest that the major part of the preparation is not degraded within 24 h and FITC is probably not cleaved from the protein components of the preparation. In vivo fluorescence analysis showed significant fluorescent signal in the olfactory bulbs in 6 h after intranasal administration; hence, the preparation administered via this route can bypass the blood-brain barrier. Scanning laser confocal microscopy of rat brain sections confirmed penetration of the high-molecular-weight protein fraction PF50-100-FITC into CNS structures. The most pronounced accumulation of the labeled drug was observed in the olfactory bulb in 6 and 12 h after administration. In contrast to free FITC administered in the control group, significant accumulation of PF50-100-FITC in the olfactory cortex and frontal cortex neurons with functionally active nuclei was observed in 6, 12 and 24 h after intranasal administration.

  9. Ecology of a Maryland population of black rat snakes (Elaphe o. obsoleta)

    USGS Publications Warehouse

    Stickel, L.F.; Stickel, W.H.; Schmid, F.C.

    1980-01-01

    Behavior, growth and age of black rat snakes under natural conditions were investigated by mark-recapture methods at the Patuxent Wildlife Research Center for 22 years (1942-1963), with limited observations for 13 more years (1964-1976). Over the 35-year period, 330 snakes were recorded a total of 704 times. Individual home ranges remained stable for many years; male ranges averaged at least 600 m in diameter and female ranges at least 500 m, each including a diversity of habitats, evidenced also in records of foods. Population density was low, probably less than 0.5 snake/ha. Peak activity of both sexes was in May and June, with a secondary peak in September. Large trees in the midst of open areas appeared to serve a significant functional role in the behavioral life pattern of the snake population. Male combat was observed three times in the field. Male snakes grew more rapidly than females, attained larger sizes and lived longer. Some individuals of both sexes probably lived 20 years or more. Weight-length relationships changed as the snakes grew and developed heavier bodies in proportion to length. Growth apparently continued throughout life. Some individuals, however, both male and female, stopped growing for periods of 1 or 2 years and then resumed, a condition probably related to poor health, suggested by skin ailments.

  10. Social interaction as a heuristic for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Fontanari, José F.

    2010-11-01

    We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean Binary Perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4), so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.

  11. Clinical Correlations of Brain Lesion Location in Multiple Sclerosis: Voxel-Based Analysis of a Large Clinical Trial Dataset.

    PubMed

    Altermatt, Anna; Gaetano, Laura; Magon, Stefano; Häring, Dieter A; Tomic, Davorka; Wuerfel, Jens; Radue, Ernst-Wilhelm; Kappos, Ludwig; Sprenger, Till

    2018-05-29

    There is a limited correlation between white matter (WM) lesion load as determined by magnetic resonance imaging and disability in multiple sclerosis (MS). The reasons for this so-called clinico-radiological paradox are diverse and may, at least partly, relate to the fact that not just the overall lesion burden, but also the exact anatomical location of lesions predict the severity and type of disability. We aimed at studying the relationship between lesion distribution and disability using a voxel-based lesion probability mapping approach in a very large dataset of MS patients. T2-weighted lesion masks of 2348 relapsing-remitting MS patients were spatially normalized to standard stereotaxic space by non-linear registration. Relations between supratentorial WM lesion locations and disability measures were assessed using a non-parametric ANCOVA (Expanded Disability Status Scale [EDSS]; Multiple Sclerosis Functional Composite, and subscores; Modified Fatigue Impact Scale) or multinomial ordinal logistic regression (EDSS functional subscores). Data from 1907 (81%) patients were included in the analysis because of successful registration. The lesion mapping showed similar areas to be associated with the different disability scales: periventricular regions in temporal, frontal, and limbic lobes were predictive, mainly affecting the posterior thalamic radiation, the anterior, posterior, and superior parts of the corona radiata. In summary, significant associations between lesion location and clinical scores were found in periventricular areas. Such lesion clusters appear to be associated with impairment of different physical and cognitive abilities, probably because they affect commissural and long projection fibers, which are relevant WM pathways supporting many different brain functions.

  12. Locally Weighted Ensemble Clustering.

    PubMed

    Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang

    2018-05-01

    Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite the significant success, one limitation to most of the existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, yet these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially, in the case when there is no access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
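
    A rough Python sketch of the ensemble-driven cluster weighting idea is given below: each cluster's uncertainty is scored by the entropy of how its members are split by the other base clusterings, and the co-association matrix is accumulated with the resulting weights. The exponential weighting and the theta parameter are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def locally_weighted_coassociation(base_labels, theta=0.4):
    """Locally weighted co-association matrix from an ensemble of base clusterings.

    base_labels : (m, n) array of m base clusterings over n objects
    theta       : temperature controlling how strongly uncertain clusters
                  are down-weighted
    """
    m, n = base_labels.shape
    ca = np.zeros((n, n))
    for i in range(m):
        for c in np.unique(base_labels[i]):
            members = np.where(base_labels[i] == c)[0]
            # Cluster uncertainty: average entropy of how this cluster's members
            # are partitioned by the other base clusterings in the ensemble.
            entropies = []
            for j in range(m):
                if j == i:
                    continue
                _, counts = np.unique(base_labels[j, members], return_counts=True)
                p = counts / counts.sum()
                entropies.append(-(p * np.log(p)).sum())
            weight = np.exp(-np.mean(entropies) / theta)   # reliable clusters count more
            ca[np.ix_(members, members)] += weight
    return ca / m

labels = np.array([[0, 0, 1, 1, 2, 2],
                   [0, 0, 0, 1, 1, 1],
                   [0, 1, 1, 2, 2, 2]])
print(locally_weighted_coassociation(labels).round(2))
```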

  13. Reference equations for 6-min walk test in healthy Indian subjects (25-80 years).

    PubMed

    Palaniappan Ramanathan, Ramanathan; Chandrasekaran, Baskaran

    2014-01-01

    The six-min walk test (6MWT) is a simple functional capacity evaluation tool used globally to determine the prognosis and effectiveness of any therapeutic/medical intervention. However, variability in reference equations derived from western populations (due to racial and ethnic variations) hinders the adequate clinical use of the 6MWT. Further, there are no valid Indian studies that predict reference values for 6-min walk distance (6MWD) in healthy Indian adults. We aimed to frame individualized reference equations for the 6MWT in a healthy Indian population. Anthropometric variables (age, weight, height, and body mass index (BMI)) and 6-min walk in a 30 m corridor were evaluated in 125 subjects (67 females) in a cross-sectional trial. 6MWD significantly correlated with age (r = -0.29), height (r = 0.393), weight (r = 0.08), and BMI (r = -0.17). The gender-specific reference equations for healthy Indian individuals were: (1) Males: 6MWD = 561.022 - (2.507 × age [years]) + (1.505 × weight [kg]) - (0.055 × height [cm]), R^2 = 0.288. (2) Females: 6MWD = 30.325 - (0.809 × age [years]) - (2.074 × weight [kg]) + (4.235 × height [cm]), R^2 = 0.272. Though the equations have small coefficients of determination and larger standard errors of the estimate, their applicability to the Indian population is justified. These reference equations are probably most appropriate for evaluating the walking capacity of Indian patients with chronic diseases.
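
    The two reference equations quoted above translate directly into a small helper function; the sketch below (Python) simply evaluates them for a given age, weight and height, with the example inputs being hypothetical.

```python
def predicted_6mwd(sex, age_years, weight_kg, height_cm):
    """Predicted six-minute walk distance (metres) from the gender-specific
    reference equations reported in the abstract for healthy Indian adults."""
    if sex == "male":
        return 561.022 - 2.507 * age_years + 1.505 * weight_kg - 0.055 * height_cm
    if sex == "female":
        return 30.325 - 0.809 * age_years - 2.074 * weight_kg + 4.235 * height_cm
    raise ValueError("sex must be 'male' or 'female'")

# Hypothetical 40-year-old male, 70 kg, 170 cm
print(round(predicted_6mwd("male", 40, 70, 170), 1))
```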

  14. Genetic polymorphisms and weight loss in obesity: a randomised trial of hypo-energetic high- versus low-fat diets.

    PubMed

    Sørensen, Thorkild I A; Boutin, Philippe; Taylor, Moira A; Larsen, Lesli H; Verdich, Camilla; Petersen, Liselotte; Holst, Claus; Echwald, Søren M; Dina, Christian; Toubro, Søren; Petersen, Martin; Polak, Jan; Clément, Karine; Martínez, J Alfredo; Langin, Dominique; Oppert, Jean-Michel; Stich, Vladimir; Macdonald, Ian; Arner, Peter; Saris, Wim H M; Pedersen, Oluf; Astrup, Arne; Froguel, Philippe

    2006-06-01

    To study if genes with common single nucleotide polymorphisms (SNPs) associated with obesity-related phenotypes influence weight loss (WL) in obese individuals treated by a hypo-energetic low-fat or high-fat diet. Randomised, parallel, two-arm, open-label multi-centre trial. Eight clinical centres in seven European countries. 771 obese adult individuals. 10-wk dietary intervention to hypo-energetic (-600 kcal/d) diets with a targeted fat energy of 20%-25% or 40%-45%, completed in 648 participants. WL during the 10 wk in relation to genotypes of 42 SNPs in 26 candidate genes, probably associated with hypothalamic regulation of appetite, efficiency of energy expenditure, regulation of adipocyte differentiation and function, lipid and glucose metabolism, or production of adipocytokines, determined in 642 participants. Compared with the noncarriers of each of the SNPs, and after adjusting for gender, age, baseline weight and centre, heterozygotes showed WL differences that ranged from -0.6 to 0.8 kg, and homozygotes, from -0.7 to 3.1 kg. Genotype-dependent additional WL on low-fat diet ranged from 1.9 to -1.6 kg in heterozygotes, and from 3.8 kg to -2.1 kg in homozygotes relative to the noncarriers. Considering the multiple testing conducted, none of the associations was statistically significant. Polymorphisms in a panel of obesity-related candidate genes play a minor role, if any, in modulating weight changes induced by a moderate hypo-energetic low-fat or high-fat diet.

  15. A comparison of the weights-of-evidence method and probabilistic neural networks

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    1999-01-01

    The need to integrate large quantities of digital geoscience information to classify locations as mineral deposits or nondeposits has been met by the weights-of-evidence method in many situations. Widespread selection of this method may be more the result of its ease of use and interpretation than of comparisons with alternative methods. A comparison of the weights-of-evidence method to probabilistic neural networks is performed here with data from Chisel Lake-Andeson Lake, Manitoba, Canada. Each method is designed to estimate the probability of belonging to learned classes, and the estimated probabilities are used to classify the unknowns. Using these data, significantly lower classification error rates were observed for the neural network, not only when test and training data were the same (0.02 versus 23%), but also when validation data, not used in any training, were used to test the efficiency of classification (0.7 versus 17%). Although these data contain too few deposits, the tests demonstrate the neural network's ability to make unbiased probability estimates and its lower error rates, whether measured by the number of polygons or by the area of land misclassified. For both methods, independent validation tests are required to ensure that estimates are representative of real-world results. Results from the weights-of-evidence method demonstrate a strong bias in which most errors are barren areas misclassified as deposits. The weights-of-evidence method is based on Bayes rule, which requires independent variables in order to make unbiased estimates. The chi-square test for independence indicates no significant correlations among the variables in the Chisel Lake-Andeson Lake data. However, the expected-number-of-deposits test clearly demonstrates that these data violate the independence assumption. Other, independent simulations with three variables show that using variables with correlations of 1.0 can double the expected number of deposits, as can correlations of -1.0. Studies done in the 1970s on methods that use Bayes rule show that moderate correlations among attributes seriously affect estimates and that even small correlations lead to increases in misclassifications. Adverse effects have been observed with small to moderate correlations when only six to eight variables were used. Consistent evidence of upward-biased probability estimates from multivariate methods founded on Bayes rule must be of considerable concern to institutions and governmental agencies where unbiased estimates are required. In addition to increasing the misclassification rate, biased probability estimates make classification into deposit and nondeposit classes an arbitrary subjective decision. The probabilistic neural network has no problem dealing with correlated variables; its performance depends strongly on having a thoroughly representative training set. Probabilistic neural networks or logistic regression should receive serious consideration where unbiased estimates are required. The weights-of-evidence method would serve to estimate thresholds between anomalies and background and for exploratory data analysis.

  16. Probability of the moiré effect in barrier and lenticular autostereoscopic 3D displays.

    PubMed

    Saveljev, Vladimir; Kim, Sung-Kyu

    2015-10-05

    The probability of the moiré effect in LCD displays is estimated as a function of angle based on experimental data; a theoretical function (node spacing) is proposed based on the distance between nodes. The two functions are close to each other. A connection between the probability of the moiré effect and Thomae's function is also found. The function proposed in this paper can be used in the minimization of the moiré effect in visual displays, especially in autostereoscopic 3D displays.

  17. A network of discrete events for the representation and analysis of diffusion dynamics.

    PubMed

    Pintus, Alberto M; Pazzona, Federico G; Demontis, Pierfranco; Suffritti, Giuseppe B

    2015-11-14

    We developed a coarse-grained description of the phenomenology of diffusive processes, in terms of a space of discrete events and its representation as a network. Once a proper classification of the discrete events underlying the diffusive process is carried out, their transition matrix is calculated on the basis of molecular dynamics data. This matrix can be represented as a directed, weighted network where nodes represent discrete events, and the weight of edges is given by the probability that one follows the other. The structure of this network reflects dynamical properties of the process of interest in such features as its modularity and the entropy rate of nodes. As an example of the applicability of this conceptual framework, we discuss here the physics of diffusion of small non-polar molecules in a microporous material, in terms of the structure of the corresponding network of events, and explain on this basis the diffusivity trends observed. A quantitative account of these trends is obtained by considering the contribution of the various events to the displacement autocorrelation function.
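
    A minimal sketch of the bookkeeping described above: given a sequence of discrete-event labels (as would be obtained by classifying molecular dynamics frames), estimate the transition matrix by counting, treat it as a weighted directed network, and compute the entropy of each node's outgoing edge weights (one simple node-level reading of the entropy rate). The toy event sequence and the counting-based estimation are illustrative assumptions.

      import numpy as np

      def transition_network(event_sequence, n_events):
          """Row-stochastic matrix P[i, j] = Pr(next event = j | current event = i),
          estimated by counting consecutive pairs in the event sequence."""
          counts = np.zeros((n_events, n_events))
          for a, b in zip(event_sequence[:-1], event_sequence[1:]):
              counts[a, b] += 1
          row_sums = counts.sum(axis=1, keepdims=True)
          return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

      def node_entropies(P):
          """Shannon entropy (bits) of each node's outgoing transition probabilities."""
          logs = np.log2(np.where(P > 0.0, P, 1.0))   # log2(1) = 0 masks empty entries
          return -(P * logs).sum(axis=1)

      seq = [0, 1, 0, 2, 1, 1, 0, 2, 2, 1, 0]   # toy stand-in for MD-derived event labels
      P = transition_network(seq, n_events=3)
      print(np.round(P, 2))
      print(np.round(node_entropies(P), 2))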

  18. A feature-weighting account of priming in conjunction search.

    PubMed

    Becker, Stefanie I; Horstmann, Gernot

    2009-02-01

    Previous research on the priming effect in conjunction search has shown that repeating the target and distractor features across displays speeds mean response times but does not improve search efficiency: Repetitions do not reduce the set size effect-that is, the effect of the number of distractor items-but only modulate the intercept of the search function. In the present study, we investigated whether priming modulates search efficiency when a conjunctively defined target randomly changes between red and green. The results from an eyetracking experiment show that repeating the target across trials reduced the set size effect and, thus, did enhance search efficiency. Moreover, the probability of selecting the target as the first item in the display was higher when the target-distractor displays were repeated across trials than when they changed. Finally, red distractors were selected more frequently than green distractors when the previous target had been red (and vice versa). Taken together, these results indicate that priming in conjunction search modulates processes concerned with guiding attention to the target, by assigning more attentional weight to features sharing the previous target's color.

  19. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.

  20. Weighted LCS

    NASA Astrophysics Data System (ADS)

    Amir, Amihood; Gotthilf, Zvi; Shalom, B. Riva

    The Longest Common Subsequence (LCS) of two strings A and B is a well-studied problem with a wide range of applications. When each symbol of the input strings is assigned a positive weight the problem becomes the Heaviest Common Subsequence (HCS) problem. In this paper we consider a different version of weighted LCS on Position Weight Matrices (PWM). The Position Weight Matrix was introduced as a tool to handle a set of sequences that are not identical yet have many local similarities. Such a weighted sequence is a 'statistical image' of this set, where we are given the probability of every symbol's occurrence at every text location. We consider two possible definitions of LCS on PWMs. For the first, we solve the weighted LCS problem of z sequences in time O(zn^(z+1)). For the second, we prove NP-hardness and provide an approximation algorithm.
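
    For context, the classical HCS problem mentioned above admits a straightforward dynamic program. The sketch below implements that baseline with an assumed per-symbol weight function; it is not the paper's PWM-based algorithm.

      def heaviest_common_subsequence(a: str, b: str, weight) -> float:
          """Maximum total weight of a common subsequence of a and b, where a
          matched symbol s contributes weight(s)."""
          n, m = len(a), len(b)
          dp = [[0.0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
                  if a[i - 1] == b[j - 1]:
                      dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + weight(a[i - 1]))
          return dp[n][m]

      # Example: G/C matches weigh twice as much as A/T matches
      w = {"A": 1.0, "T": 1.0, "G": 2.0, "C": 2.0}
      print(heaviest_common_subsequence("GATTACA", "GCATGCT", w.get))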

  1. Failure detection system risk reduction assessment

    NASA Technical Reports Server (NTRS)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
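
    The record does not give the quantification formula, but one plausible reading is shown below purely as an illustration: the unmitigated risk is the probability of reaching the failure limit, the residual risk discounts it by the probability that mitigation succeeds, and the risk reduction is the difference. Function and variable names are hypothetical.

      def risk_reduction(p_fail_at_t: float, p_mitigation_at_t: float) -> float:
          """Illustrative risk-reduction measure: unmitigated risk minus residual risk,
          both evaluated at the same time to failure limit."""
          residual = p_fail_at_t * (1.0 - p_mitigation_at_t)
          return p_fail_at_t - residual   # equals p_fail_at_t * p_mitigation_at_t

      print(risk_reduction(0.20, 0.75))   # 0.15 reduction in failure-mode risk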

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew

    Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relating to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
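
    A minimal sketch of the model structure described above: local function probability is a sigmoid of portal venous perfusion, and global function is the sum of the local probabilities over subunits. The two-parameter logistic below is pinned to the reported anchor points (probability 0.5 at 68.6 and 0.9 at 98 mL/(100 g · min)); the paper's actual sigmoid form and fitted parameters may differ, and this simple form does not reproduce the reported near-zero contribution below 38 mL/(100 g · min) exactly.

      import numpy as np

      # Assumed logistic parameters derived from the two reported anchor points
      F50 = 68.6                         # perfusion giving local probability 0.5
      K = np.log(9.0) / (98.0 - F50)     # slope so that the probability is 0.9 at 98

      def local_function_probability(perfusion):
          """Map portal venous perfusion (mL/(100 g.min)) to a 0-1 local function
          probability with an illustrative logistic parameterization."""
          p = np.asarray(perfusion, dtype=float)
          return 1.0 / (1.0 + np.exp(-K * (p - F50)))

      def global_function(perfusion_map):
          """Parallel-architecture global function: sum of subunit probabilities."""
          return float(local_function_probability(perfusion_map).sum())

      print(np.round(local_function_probability([38.0, 68.6, 98.0]), 2))  # ~[0.09, 0.5, 0.9]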

  3. Role of Glucocorticoids in the Response to Unloading of Muscle Protein and Amino Acid Metabolism

    NASA Technical Reports Server (NTRS)

    Tischler, M. E.; Jaspers, S. R.

    1985-01-01

    Intact control (weight-bearing) and suspended rats gained weight at a similar rate during a 6-day period. Adrenalectomized (adx) weight-bearing rats gained less weight during this period, while adrenalectomized suspended rats showed no significant weight gain. Cortisol treatment of both of these groups of animals caused a loss of body weight. Results from these studies show several important findings: (1) metabolic changes in the extensor digitorum longus muscle of suspended rats are due primarily to increased circulating glucocorticoids; (2) metabolic changes in the soleus due to higher steroid levels are probably potentiated by greater numbers of receptors; and (3) not all metabolic responses in the unloaded soleus muscle are due to direct action of elevated glucocorticoids or increased sensitivity to these hormones.

  4. A prominent large high-density lipoprotein at birth enriched in apolipoprotein C-I identifies a new group of infants of lower birth weight and younger gestational age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwiterovich Jr., Peter O.; Cockrill, Steven L.; Virgil, Donna G.

    2003-10-01

    Because low birth weight is associated with adverse cardiovascular risk and death in adults, lipoprotein heterogeneity at birth was studied. A prominent, large high-density lipoprotein (HDL) subclass enriched in apolipoprotein C-I (apoC-I) was found in 19 percent of infants, who had significantly lower birth weights and younger gestational ages and distinctly different lipoprotein profiles than infants with undetectable, possible or probable amounts of apoC-I-enriched HDL. An elevated amount of an apoC-I-enriched HDL identifies a new group of low birth weight infants.

  5. Organic Over-the-Horizon Targeting for the 2025 Surface Fleet

    DTIC Science & Technology

    2015-06-01

    Fragmentary excerpt (acronym list and section headings): Phit, probability of hit; Pk, probability of kill; PLAN, People's Liberation Army Navy; PMEL, Pacific Marine Environmental Laboratory. A top-level functional flow block diagram organizes the high-level functions of the project's systems of systems around the probability of hit (Phit).

  6. Improvement in cyclic oxidation of the nickel-base superalloy B-1900 by addition of one percent silicon

    NASA Technical Reports Server (NTRS)

    Lowell, C. E.; Miner, R. V.

    1973-01-01

    Cast B-1900 with and without 1 weight percent Si was subjected to cyclic oxidation at 1000 and 1100 C in air for 700 and 200 hours, respectively. The results were judged by specific weight change, metallography and X-ray diffraction. Si was found to be of significant value in reducing oxidation attack, probably by increasing scale adherence.

  7. Particle Filtering Methods for Incorporating Intelligence Updates

    DTIC Science & Technology

    2017-03-01

    Fragmentary excerpt: a methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the particle filter (PF) ... samples are taken with replacement from the remaining non-zero-weighted particles at each iteration. With this methodology, a zero-weighted particle is ... incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non ...

  8. Measuring Work Functioning: Validity of a Weighted Composite Work Functioning Approach.

    PubMed

    Boezeman, Edwin J; Sluiter, Judith K; Nieuwenhuijsen, Karen

    2015-09-01

    To examine the construct validity of a weighted composite work functioning measurement approach. Workers (health-impaired/healthy) (n = 117) completed a composite measure survey that recorded four central work functioning aspects with existing scales: capacity to work, quality of work performance, quantity of work, and recovery from work. Previously derived weights reflecting the relative importance of these aspects of work functioning were used to calculate the composite weighted work functioning score of the workers. Work role functioning, productivity, and quality of life were used for validation. Correlations were calculated and norms applied to examine convergent and divergent construct validity. A t test was conducted and a norm applied to examine discriminative construct validity. Overall, the weighted composite work functioning measure demonstrated construct validity. As predicted, the weighted composite score correlated (p < .001) strongly (r > .60) with work role functioning and productivity (convergent construct validity), and moderately (.30 < r < .60) with physical quality of life and less strongly than work role functioning and productivity with mental quality of life (divergent validity). Further, the weighted composite measure detected, with a large effect size (Cohen's d > .80), that health-impaired workers show significantly worse work functioning than healthy workers (discriminative validity). The weighted composite work functioning measurement approach takes into account the relative importance of the different work functioning aspects and demonstrated good convergent, fair divergent, and good discriminative construct validity.

  9. Economic Outcomes with Anatomic versus Functional Diagnostic Testing for Coronary Artery Disease

    PubMed Central

    Mark, Daniel B.; Federspiel, Jerome J.; Cowper, Patricia A.; Anstrom, Kevin J.; Hoffmann, Udo; Patel, Manesh R.; Davidson-Ray, Linda; Daniels, Melanie R.; Cooper, Lawton S.; Knight, J. David; Lee, Kerry L.; Douglas, Pamela S.

    2016-01-01

    Background: The PROMISE trial found that initial use of ≥64-slice multidetector computed tomographic angiography (CTA) versus functional diagnostic testing strategies did not improve clinical outcomes in stable symptomatic patients with suspected coronary artery disease (CAD) requiring noninvasive testing. Objective: Economic analysis of PROMISE, a major secondary aim. Design: Prospective economic study from the US perspective. Comparisons were made by intention-to-treat. Confidence intervals were calculated using bootstrap methods. Setting: 190 U.S. centers. Patients: 9649 U.S. patients enrolled in PROMISE. Enrollment began July 2010 and completed September 2013. Median follow-up was 25 months. Measurements: Technical costs of the initial (outpatient) testing strategy were estimated from Premier Research Database data. Hospital-based costs were estimated using hospital bills and Medicare cost-to-charge ratios. Physician fees were taken from the Medicare Fee Schedule. Costs were expressed in 2014 US dollars discounted at 3% and estimated out to 3 years using inverse probability weighting methods. Results: The mean initial testing costs were: $174 for exercise ECG; $404 for CTA; $501 to $514 for (exercise, pharmacologic) stress echo; $946 to $1132 for (exercise, pharmacologic) stress nuclear. Mean costs at 90 days for the CTA strategy were $2494 versus $2240 for the functional strategy (mean difference $254, 95% CI −$634 to $906). The difference was associated with more revascularizations and catheterizations (4.25 per 100 patients) with CTA use. After 90 days, the mean cost difference between the arms out to 3 years remained small ($373). Limitations: Cost weights for test strategies obtained from sources outside PROMISE. Conclusions: CTA and functional diagnostic testing strategies in patients with suspected CAD have similar costs through three years of follow-up. PMID:27214597

  10. Added value of non-calibrated and BMA calibrated AEMET-SREPS probabilistic forecasts: the 24 January 2009 extreme wind event over Catalonia

    NASA Astrophysics Data System (ADS)

    Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.

    2009-09-01

    At 00 UTC on 24 January 2009, an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures below 970 hPa at its center, located over the Bay of Biscay. During its path across southern France this low caused strong westerly and north-westerly winds over the Iberian Peninsula, exceeding 150 km/h in some places. These extreme winds caused 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event, thereby giving value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out, a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs the experimental SREPS twice a day (00 and 12 UTC). This system consists of 20 members constructed by integrating 5 limited-area models, COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO), at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions, from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way a probabilistic forecast is obtained that takes into account initial-condition, boundary-condition, and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members. Here BMA is applied to provide probabilistic forecasts of wind speed. In this work several forecasts for different time ranges (H+72, H+48 and H+24) of 10-meter wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC 24 January 2009. On one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by looking at HIRLAM 0.16 deterministic forecasts of the event. Verification is focused on the location and intensity of 10-meter wind speed, and 10-minute measurements from AEMET automatic ground stations are used as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h and correctly localizes them with a significant probability of occurrence in the affected area. The probability is higher after BMA calibration of the ensemble. The high probability of strong winds indicates that the predictability of the event is also high and, as a consequence, that deterministic forecasts are more reliable. This is confirmed when verifying HIRLAM deterministic forecasts against observed values.
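
    The BMA combination described above, a predictive PDF formed as a weighted average of member PDFs centered on bias-corrected forecasts, can be sketched as follows. The Gaussian kernels and the fixed weights, spread, and bias-correction coefficients are placeholders: in practice the weights and variances are fitted to training data (typically by EM), and for wind speed a gamma kernel is often preferred.

      import numpy as np
      from scipy.stats import norm

      def bma_predictive_pdf(x, forecasts, weights, bias_a, bias_b, sigma):
          """BMA predictive density at x: sum_k w_k * N(x; a_k + b_k * f_k, s_k^2),
          with f_k the raw forecast of ensemble member k."""
          x = np.asarray(x, dtype=float)
          dens = np.zeros_like(x)
          for f, w, a, b, s in zip(forecasts, weights, bias_a, bias_b, sigma):
              dens += w * norm.pdf(x, loc=a + b * f, scale=s)
          return dens

      # Toy 3-member ensemble of 10-meter wind speed forecasts (m/s)
      forecasts = [12.0, 15.0, 18.0]
      weights = [0.5, 0.3, 0.2]          # posterior member weights (sum to 1)
      grid = np.linspace(0.0, 30.0, 301)
      pdf = bma_predictive_pdf(grid, forecasts, weights,
                               bias_a=[0.0] * 3, bias_b=[1.0] * 3, sigma=[2.5] * 3)
      print(round(float(pdf.sum() * 0.1), 3))   # integrates to ~1 over the grid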

  11. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.

  12. Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.

    PubMed

    Breskin, Alexander; Cole, Stephen R; Westreich, Daniel

    2018-05-01

    Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the inverse probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.

  13. Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk

    NASA Astrophysics Data System (ADS)

    Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi

    2016-09-01

    Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) for the weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes to an absorbing hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than extended Koch networks in receiving information. Finally, compared with the results of previous studies (i.e., on Koch networks and weighted Koch networks), we find that our models are more general.

  14. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
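
    The combination described above is the standard inverse-variance weighting of two independent estimates; a minimal sketch follows (variable names are illustrative, and in practice the combination is often carried out on log-transformed flow quantiles).

      def weighted_estimate(x_site, var_site, x_regional, var_regional):
          """Combine an at-site estimate and a regional-regression estimate of a
          flow statistic, each with its variance, assuming independence."""
          w_site = 1.0 / var_site
          w_regional = 1.0 / var_regional
          x_weighted = (w_site * x_site + w_regional * x_regional) / (w_site + w_regional)
          var_weighted = 1.0 / (w_site + w_regional)   # always <= both input variances
          return x_weighted, var_weighted

      # Example: 1-percent AEP flow estimates expressed as log10(cfs), with variances
      print(weighted_estimate(3.90, 0.020, 3.75, 0.035))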

  15. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then propose an improved importance sampling algorithm called linear Gaussian importance sampling for general hybrid models (LGIS). LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from the previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
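
    Of the simulation methods listed above, likelihood weighting is the simplest to illustrate: evidence variables are clamped to their observed values and each sample is weighted by the likelihood of that evidence. The toy two-node network (Rain -> WetGrass) and its probabilities below are invented for illustration; the paper's LGIS method extends this idea to hybrid networks with an adaptively learned importance function.

      import random

      # Toy network: Rain -> WetGrass, both Boolean (illustrative probabilities)
      P_RAIN = 0.2
      P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

      def likelihood_weighting(evidence_wet: bool, n_samples: int = 100_000) -> float:
          """Estimate P(Rain | WetGrass = evidence_wet): sample non-evidence variables
          from their priors and weight each sample by the likelihood of the evidence."""
          num = den = 0.0
          for _ in range(n_samples):
              rain = random.random() < P_RAIN                # sample the root from its prior
              p_evidence = P_WET_GIVEN_RAIN[rain]
              w = p_evidence if evidence_wet else 1.0 - p_evidence
              den += w
              if rain:
                  num += w
          return num / den

      random.seed(0)
      print(round(likelihood_weighting(True), 3))   # close to 0.18 / 0.26 ≈ 0.692 by Bayes' rule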

  16. Persistence of weight loss and acquired behaviors 2 y after stopping a 2-y calorie restriction intervention.

    PubMed

    Marlatt, Kara L; Redman, Leanne M; Burton, Jeff H; Martin, Corby K; Ravussin, Eric

    2017-04-01

    Background: Calorie restriction (CR) influences aging processes and extends average and maximal life spans. The CALERIE 2 (Comprehensive Assessment of Long-Term Effects of Reducing Intake of Energy Phase 2) study was the first randomized clinical trial to examine the metabolic and psychological effects of CR in nonobese humans. Objective: We conducted a 2-y follow-up study of adults who underwent 2 y of CR or ad libitum (control) consumption and determined whether weight loss and acquired behaviors persisted after the study ended, when participants determined their own lifestyle behaviors. Design: In this prospective, longitudinal study, we assessed differences in weight, body composition, psychological function, and energy expenditure in 39 nonobese [body mass index (in kg/m²): 22-28] men and women (25% CR: n = 24; control: n = 15) 12 and 24 mo after they completed the CALERIE 2 study at Pennington Biomedical. Results: Of the 39 participants in the follow-up study, 29 (CR: n = 18; control: n = 11) completed all visits at follow-up months 12 and 24. After the CR intervention, a mean ± SEM weight loss of 9.0 ± 0.6 kg was observed in the CR group, of which only 54% was regained 2 y later. Despite this regain, weight, percentage of body fat, and fat mass remained significantly reduced from baseline throughout follow-up and remained significantly lower than in the control group (P < 0.05). At follow-up, the CR group retained higher degrees of dietary restraint and avoidance of certain foods. Conclusion: After a 2-y intensive CR intervention, ∼50% of CR-induced weight loss was maintained 2 y later, which was probably the result of lasting effects on acquired behaviors and dietary restraint. This trial was registered at clinicaltrials.gov as NCT00943215. © 2017 American Society for Nutrition.

  17. Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials

    DTIC Science & Technology

    1978-03-01

    Fragmentary excerpt (AFIT/GAE thesis, "Evaluation of the Three Parameter Weibull Distribution Function for Predicting Fracture Probability in Composite Materials"): an expression is derived for the risk of rupture of a unidirectionally laminated composite subjected to pure bending, which can be simplified further.

  18. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
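
    As a point of reference for the quantity being decomposed, the sketch below estimates the conditional probability of the next symbol given the preceding N symbols directly by counting; the paper's contribution, the decomposition of this function into memory-function monomials, is not reproduced here, and the toy sequence is illustrative.

      from collections import Counter, defaultdict

      def conditional_probabilities(sequence, order):
          """Empirical P(next symbol | previous `order` symbols) for a symbolic
          sequence over a finite alphabet, estimated by plain counting."""
          context_counts = defaultdict(Counter)
          for i in range(order, len(sequence)):
              context = tuple(sequence[i - order:i])
              context_counts[context][sequence[i]] += 1
          return {ctx: {s: c / sum(cnt.values()) for s, c in cnt.items()}
                  for ctx, cnt in context_counts.items()}

      seq = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0]   # toy binary sequence
      for ctx, dist in sorted(conditional_probabilities(seq, order=2).items()):
          print(ctx, dist)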

  19. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  20. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent or consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of return period and design level scenarios and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas; 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities; 3. weighted average scenario analysis based on an expected event; and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.

  1. The first Polish liver transplantation after Roux-en-Y gastric bypass surgery for morbid obesity: a case report and literature review.

    PubMed

    Marszałek, Rafał; Ziemiański, Paweł; Łągiewska, Beata; Pacholczyk, Marek; Domienik-Karłowicz, Justyna; Trzebicki, Janusz; Wierzbicki, Zbigniew; Jankowski, Krzysztof; Kosieradzki, Maciej; Wasiak, Dariusz; Jonas, Maurycy; Pruszczyk, Piotr; Durlik, Magdalena; Lisik, Wojciech; Chmura, Andrzej

    2015-02-25

    Morbid obesity is associated with liver pathology, most commonly non-alcoholic steatohepatitis (NASH) leading to cirrhosis. However, morbid obesity impedes qualification for organ transplantation. We present a case report of a 56-year-old woman who underwent a bariatric procedure followed by liver transplantation (LTx). Her initial weight was 130.2 kg (BMI 50.9 kg/m²). The patient had a history of arterial hypertension, diabetes, gonarthrosis, and obstructive sleep apnea syndrome, and no history of alcohol abuse. She underwent a Roux-en-Y gastric bypass (RYGB) procedure. The routine intraoperative liver biopsy revealed fibrosis (III°), steatosis (II°), and intra-acinar inflammation. The operation led to a substantial loss of weight. Two years after the surgery the patient was referred to the Transplantation Clinic of the Department of General Surgery and Transplantology with suspicion of liver failure due to advanced cirrhosis, which could be a result of previously diagnosed NASH and, probably, excessive alcohol use after bariatric surgery. The patient was qualified for elective LTx, which was performed 3 years after the RYGB. Immediately before LTx, the patient's weight was 65 kg (BMI 25.4 kg/m²). The postoperative period was complicated by bleeding into the peritoneal cavity, which required reoperation. She also had renal failure requiring renal replacement therapy. One year after LTx, she showed stable liver function with normal transaminase activity and bilirubin concentration, remission of diabetes, and good renal function. Steatohepatitis in morbidly obese patients may lead to cirrhosis. A bariatric procedure can be a bridge to liver transplantation for morbidly obese patients with advanced liver fibrosis.

  2. Cellular and Hormonal Disruption of Fetal Testis Development in Sheep Reared on Pasture Treated with Sewage Sludge

    PubMed Central

    Paul, Catriona; Rhind, Stewart M.; Kyle, Carol E.; Scott, Hayley; McKinnell, Chris; Sharpe, Richard M.

    2005-01-01

    The purpose of this study was to evaluate whether experimental exposure of pregnant sheep to a mixture of environmental chemicals added to pasture as sewage sludge (n = 9 treated animals) exerted effects on fetal testis development or function; application of sewage sludge was undertaken so as to maximize exposure of the ewes to its contents. Control ewes (n = 9) were reared on pasture treated with an equivalent amount of inorganic nitrogenous fertilizer. Treatment had no effect on body weight of ewes, but it reduced body weight by 12–15% in male (n = 12) and female (n = 8) fetuses on gestation day 110. In treated male fetuses (n = 11), testis weight was significantly reduced (32%), as were the numbers of Sertoli cells (34% reduction), Leydig cells (37% reduction), and gonocytes (44% reduction), compared with control fetuses (n = 8). Fetal blood levels of testosterone and inhibin A were also reduced (36% and 38%, respectively) in treated compared with control fetuses, whereas blood levels of luteinizing hormone and follicle-stimulating hormone were unchanged. Based on immunoexpression of anti-Müllerian hormone, cytochrome P450 side chain cleavage enzyme, and Leydig cell cytoplasmic volume, we conclude that the hormone changes in treated male fetuses probably result from the reduction in somatic cell numbers. This reduction could result from fetal growth restriction in male fetuses and/or from the lowered testosterone action; reduced immunoexpression of α-smooth muscle actin in peritubular cells and of androgen receptor in testes of treated animals supports the latter possibility. These findings indicate that exposure of the developing male sheep fetus to real-world mixtures of environmental chemicals can result in major attenuation of testicular development and hormonal function, which may have consequences in adulthood. PMID:16263515

  3. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  4. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  5. Seismic velocity structure of the crust and upper mantle beneath the Texas-Gulf of Mexico margin from joint inversion of Ps and Sp receiver functions and surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Agrawal, M.; Pulliam, J.; Sen, M. K.

    2013-12-01

    The seismic structure beneath the Texas Gulf Coast Plain (GCP) is determined via velocity analysis of stacked common conversion point (CCP) Ps and Sp receiver functions and surface wave dispersion. The GCP is part of an ocean-continent transition zone, or 'passive margin', where seismic imaging of lithospheric Earth structure via passive seismic techniques has been rare. Seismic data from a temporary array of 22 broadband stations, spaced 16-20 km apart, on a ~380-km-long profile from Matagorda Island, a barrier island in the Gulf of Mexico, to Johnson City, Texas, were employed to construct a coherent image of the crust and uppermost mantle. CCP stacking was applied to data from teleseismic earthquakes to enhance the signal-to-noise ratios of converted phases, such as Ps phases. An inaccurate velocity model, used for time-to-depth conversion in CCP stacking, may produce large errors, especially in a region of substantial lateral velocity variations. An accurate velocity model is therefore essential to constructing high-quality depth-domain images. To find accurate P- and S-wave velocity models, we applied a joint modeling approach that searches for best-fitting models via simulated annealing. This joint inversion approach, which we call 'multi-objective optimization in seismology' (MOOS), simultaneously models Ps receiver functions, Sp receiver functions and group velocity surface wave dispersion curves after assigning relative weights to each objective function. Weights are computed from the standard deviations of the data. Statistical tools such as the posterior parameter correlation matrix and posterior probability density (PPD) function are used to evaluate the constraints that each data type places on model parameters. They allow us to identify portions of the model that are well or poorly constrained.

  6. Obesity, change of body mass index and subsequent physical and mental health functioning: a 12-year follow-up study among ageing employees.

    PubMed

    Svärd, Anna; Lahti, Jouni; Roos, Eira; Rahkonen, Ossi; Lahelma, Eero; Lallukka, Tea; Mänty, Minna

    2017-09-26

    Studies suggest an association between weight change and subsequent poor physical health functioning, whereas the association with mental health functioning is inconsistent. We aimed to examine whether obesity and change of body mass index among normal weight, overweight and obese women and men are associated with changes in physical and mental health functioning. The Helsinki Health Study cohort includes Finnish municipal employees aged 40 to 60 in 2000-02 (phase 1, response rate 67%). The phase 2 mail survey (response rate 82%) took place in 2007 and phase 3 in 2012 (response rate 76%). This study included 5668 participants (82% women). Seven weight change categories were formed based on body mass index (BMI) (phase 1) and weight change (BMI change ≥5%) (phase 1-2). The Short Form 36 Health Survey (SF-36) measured physical and mental health functioning. The change in health functioning score (phase 1-3) was examined with repeated measures analyses. Covariates were age, sociodemographic factors, health behaviours, and somatic ill-health. Weight gain was common among women (34%) and men (25%). Weight-gaining normal weight (-1.3 points), overweight (-1.3 points) and obese (-3.6 points) women showed a greater decline in physical component summary scores than weight-maintaining normal weight women. Among weight-maintainers, only obese women (-1.8 points) showed a greater decline than weight-maintaining normal weight women. The associations were similar but statistically non-significant for obese men. No statistically significant differences occurred in the change in mental health functioning. Preventing weight gain likely helps maintain good physical health functioning and work ability.

  7. Parents' education and child body weight in France: The trajectory of the gradient in the early years.

    PubMed

    Apouey, Bénédicte H; Geoffard, Pierre-Yves

    2016-03-01

    This paper explores the relationship between parental education and offspring body weight in France. Using two large datasets spanning the 1991-2010 period, we examine the existence of inequalities in maternal and paternal education and reported child body weight measures, as well as their evolution across childhood. Our empirical specification is flexible and allows this evolution to be non-monotonic. Significant inequalities are observed for both parents' education--maternal (respectively paternal) high education is associated with a 7.20 (resp. 7.10) percentage points decrease in the probability that the child is reported to be overweight or obese, on average for children of all ages. The gradient with respect to parents' education follows an inverted U-shape across childhood, meaning that the association between parental education and child body weight widens from birth to age 8, and narrows afterward. Specifically, maternal high education is correlated with a 5.30 percentage points decrease in the probability that the child is reported to be overweight or obese at age 2, but a 9.62 percentage points decrease at age 8, and a 1.25 percentage point decrease at age 17. The figures for paternal high education are respectively 5.87, 9.11, and 4.52. This pattern seems robust, since it is found in the two datasets, when alternative variables for parental education and reported child body weight are employed, and when controls for potential confounding factors are included. The findings for the trajectory of the income gradient corroborate those of the education gradient. The results may be explained by an equalization in actual body weight across socioeconomic groups during youth, or by changes in reporting styles of height and weight. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Neural processes mediating the preparation and release of focal motor output are suppressed or absent during imagined movement

    PubMed Central

    Eagles, Jeremy S.; Carlsen, Anthony N.

    2016-01-01

    Movements that are executed or imagined activate a similar subset of cortical regions, but the extent to which this activity represents functionally equivalent neural processes is unclear. During preparation for an executed movement, presentation of a startling acoustic stimulus (SAS) evokes a premature release of the planned movement with the spatial and temporal features of the tasks essentially intact. If imagined movement incorporates the same preparatory processes as executed movement, then a SAS should release the planned movement during preparation. This hypothesis was tested using an instructed-delay cueing paradigm during which subjects were required to rapidly release a handheld weight while maintaining the posture of the arm or to perform first-person imagery of the same task while holding the weight. In a subset of trials, a SAS was presented at 1500, 500, or 200 ms prior to the release cue. Task-appropriate preparation during executed and imagined movements was confirmed by electroencephalographic recording of a contingent negative variation waveform. During preparation for executed movement, a SAS often resulted in premature release of the weight with the probability of release progressively increasing from 24 % at −1500 ms to 80 % at −200 ms. In contrast, the SAS rarely (<2 % of trials) triggered a release of the weight during imagined movement. However, the SAS frequently evoked the planned postural response (suppression of bicep brachii muscle activity) irrespective of the task or timing of stimulation (even during periods of postural hold without preparation). These findings provide evidence that neural processes mediating the preparation and release of the focal motor task (release of the weight) are markedly attenuated or absent during imagined movement and that postural and focal components of the task are prepared independently. PMID:25744055

  9. Mechanisms of chronic waterborne Zn toxicity in Daphnia magna.

    PubMed

    Muyssen, Brita T A; De Schamphelaere, Karel A C; Janssen, Colin R

    2006-05-25

    In order to gain better insight into the integrated response of Daphnia magna following chronic zinc exposure, several physiological parameters were measured in a time-dependent manner. D. magna juveniles were exposed for 21 days to dissolved Zn concentrations up to 340 microg/L. In addition to standard endpoints such as mortality, growth and reproduction, the following sub-lethal endpoints were measured: filtration and ingestion rate, respiration rate, energy reserves, and internal Zn and total Ca concentrations in the organisms. Organisms exposed to 80 microg/L generally performed better than the Zn-deprived control organisms. The former were used to elucidate the effects of higher Zn concentrations on the endpoints mentioned above. After 1 week, only 7% of the organisms exposed to 340 microg/L survived. Body Zn contents of these organisms were 281 +/- 76 microg/g dry weight, and a 37% decrease in Ca content was observed. This suggests a competitive effect of Zn on Ca uptake. Filtration rate (-51%), individual weight (-58%) and energy reserves (-35%) also exhibited a decreasing trend as a function of increasing Zn exposure concentration. During the second and third exposure weeks an overall repair process was observed. In the surviving organisms mortality and reproduction were only slightly affected. This can be explained by (over)compensation reactions at lower levels of biological organisation: Ca content (+24%) and filtration rate (+90%) increased as a function of the exposure concentration while respiration rate decreased (-29%), resulting in energy reserves remaining constant as a function of Zn exposure. It is hypothesized that a disturbed Ca balance is probably the primary cause of zinc toxicity effects in D. magna.

  10. Changes in cardiac energy metabolic pathways in overweighed rats fed a high-fat diet.

    PubMed

    Modrego, Javier; de las Heras, Natalia; Zamorano-León, Jose J; Mateos-Cáceres, Petra J; Martín-Fernández, Beatriz; Valero-Muñoz, Maria; Lahera, Vicente; López-Farré, Antonio J

    2013-03-01

    The heart produces ATP mainly through long-chain fatty acid beta-oxidation. The aim was to analyze whether, in the ventricular myocardium, a high-fat diet modifies the expression of proteins associated with energy metabolism before myocardial function is affected. Wistar Kyoto rats were divided into two groups: (a) rats fed a standard diet (control; n = 6) and (b) rats fed a high-fat diet (HFD; n = 6). Proteins from left ventricles were analyzed by two-dimensional electrophoresis, mass spectrometry and Western blotting. Rats fed the HFD showed higher body weight and higher insulin, glucose, leptin and total cholesterol plasma levels than those fed the standard diet. However, myocardial functional parameters were not different between them. The protein expression of 3-ketoacyl-CoA thiolase, acyl-CoA hydrolase mitochondrial precursor and enoyl-CoA hydratase, three long-chain fatty acid β-oxidation-related enzymes, and of carnitine-O-palmitoyltransferase I was significantly higher in left ventricles from HFD rats. Protein expression of triosephosphate isomerase was higher in left ventricles from HFD rats than in those from controls. Two α/β-enolase isotypes and glyceraldehyde-3-phosphate isomerase were significantly increased in HFD rats compared with controls. Pyruvate and lactate contents were similar in the HFD and control groups. Expression of proteins associated with the Krebs cycle and mitochondrial oxidative phosphorylation was higher in HFD rats. Expression of proteins involved in left ventricular energy metabolism was thus enhanced before myocardial function was affected in rats fed the HFD. These findings probably indicate a higher cardiac energy requirement due to the weight increase caused by the HFD.

  11. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  12. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  13. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  14. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  15. 14 CFR 23.21 - Proof of compliance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... center of gravity within the range of loading conditions for which certification is requested. This must... each probable combination of weight and center of gravity, if compliance cannot be reasonably inferred...

  16. Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.

    DTIC Science & Technology

    1980-11-01

    Gnanadesikan (1968). Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. Quantile functions are... Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.

  17. Propensity score analysis with partially observed covariates: How should multiple imputation be used?

    PubMed

    Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J

    2017-01-01

    Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
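
    The MIte workflow described above (impute the covariates, estimate the propensity score and the IPTW treatment effect within each imputed dataset, then pool with Rubin's rules) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: it uses scikit-learn's IterativeImputer with posterior sampling as a stand-in for a full multiple-imputation procedure, and a simple linearization variance that ignores propensity-score estimation uncertainty.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, M = 2000, 5

# Synthetic data: two confounders, a binary treatment, a binary outcome,
# and 30% of the second confounder missing at random.
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-(x1 + x2))))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + x1 + x2))))
x2_obs = np.where(rng.random(n) < 0.3, np.nan, x2)

estimates, variances = [], []
for m in range(M):
    # Impute covariates; the treatment and the outcome are included in the imputation model.
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    covs = imputer.fit_transform(np.column_stack([x1, x2_obs, treat, y]))[:, :2]

    # Propensity score and inverse-probability-of-treatment weights in this imputed dataset.
    ps = LogisticRegression().fit(covs, treat).predict_proba(covs)[:, 1]
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

    t1, t0 = treat == 1, treat == 0
    mu1 = np.average(y[t1], weights=w[t1])
    mu0 = np.average(y[t0], weights=w[t0])
    estimates.append(mu1 - mu0)
    # Crude linearization variance of the two weighted means
    # (ignores propensity-score estimation uncertainty).
    variances.append(np.sum((w[t1] * (y[t1] - mu1)) ** 2) / w[t1].sum() ** 2
                     + np.sum((w[t0] * (y[t0] - mu0)) ** 2) / w[t0].sum() ** 2)

# Rubin's rules: pool the per-imputation estimates and variances.
q_bar = np.mean(estimates)
total_var = np.mean(variances) + (1 + 1 / M) * np.var(estimates, ddof=1)
print(f"MIte risk-difference estimate: {q_bar:.3f} (SE {np.sqrt(total_var):.3f})")
```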

  18. Two-dimensional analytic weighting functions for limb scattering

    NASA Astrophysics Data System (ADS)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.
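
    For contrast with the analytic approach, the sketch below illustrates the numerical perturbation method that analytic weighting functions are meant to replace: each column of the weighting-function matrix is a finite-difference derivative of the radiances with respect to one grid cell, requiring one forward-model evaluation per cell. The toy forward model is purely illustrative and stands in for a radiative transfer code such as SASKTRAN-HR.

```python
import numpy as np

def toy_radiance(x):
    """Toy nonlinear forward model mapping a concentration profile to radiances
    (a stand-in for a radiative transfer code)."""
    path = np.tril(np.ones((x.size, x.size)))   # crude line-of-sight geometry
    return np.exp(-path @ x)                    # Beer-Lambert-like attenuation

x0 = np.full(20, 0.05)                          # baseline profile on 20 grid cells
I0 = toy_radiance(x0)
delta = 1e-6
K = np.zeros((I0.size, x0.size))

for j in range(x0.size):                        # one extra model call per grid cell
    xp = x0.copy()
    xp[j] += delta
    K[:, j] = (toy_radiance(xp) - I0) / delta   # finite-difference weighting function

# One row per simulated radiance, one column per perturbed grid cell.
print(K.shape)
```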

  19. Exercise, dietary obesity, and growth in the rat

    NASA Technical Reports Server (NTRS)

    Pitts, G. C.; Bull, L. S.

    1977-01-01

    Experiments were conducted on weanling male rats 35 days old and weighing about 100 g to determine how endurance-type exercise and high-fat diet administered during growth influence body mass and composition. The animals were divided into four weight-matched groups of 25 animals each: group I - high-fat diet, exercised; group II - chow, exercised; group III - high-fat diet, sedentary; and group IV - chow, sedentary. During growth, masses of water, muscle and skin increased as functions of body size; bone as a function of age; and heart, liver, gut, testes, and CNS were affected by combinations of size, age, activity, and diet. Major conclusions are that growth in body size is expressed more precisely with fat-free body mass (FFBM), that late rectilinear growth is probably attributable to fat accretion, and that the observed influences on FFBM of exercise and high-fat diet are obtained only if the regimen is started at or before age 5-7 weeks.

  20. Tracking the potyviral P1 protein in Nicotiana benthamiana plants during plum pox virus infection.

    PubMed

    Vozárová, Z; Glasa, M; Šubr, Z W

    The P1 protein is derived from the N terminus of the potyvirus-encoded polyprotein. In addition to the proteolytic activity essential for its maturation, it probably participates in suppression of host defense and/or in virus replication. Clear validation of the P1 in vivo function(s), however, is not yet available. We used an infectious cDNA clone of plum pox virus (PPV), in which P1 was N-terminally fused with a hexahistidine tag, to trace this protein in Nicotiana benthamiana plants during PPV infection. Immunoblot analysis with the anti-his antibody showed a diffuse band corresponding to a molecular weight of about 70-80 kDa (roughly twice the expected size) in root samples from the early stage of infection. This signal peaked on the sixth day post-inoculation and later rapidly disappeared. Sample denaturation by boiling in SDS before centrifugal clarification was essential, indicating a strong affinity of P1-his for some plant component that sediments with the tissue and cell debris.

  1. A simple low-computation-intensity model for approximating the distribution function of a sum of non-identical lognormals for financial applications

    NASA Astrophysics Data System (ADS)

    Messica, A.

    2016-10-01

    The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex as well as complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moment matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computation-efficient approximation for a finite sum of lognormals of at least ten summands, and it naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
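
    As a point of reference for the kind of calculation involved, the sketch below applies plain moment matching (the classical Fenton-Wilkinson approximation, not the paper's modified-moment and asymptotic-correction scheme) to a weighted sum of independent lognormals and checks a tail probability against Monte Carlo. All parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)
w = np.array([0.2, 0.3, 0.5])            # hypothetical portfolio weights
mu = np.array([0.0, 0.1, -0.2])          # log-means of the summands
sigma = np.array([0.25, 0.40, 0.30])     # log-standard deviations

# Exact mean and variance of the weighted sum (independent summands).
mean = np.sum(w * np.exp(mu + sigma**2 / 2))
var = np.sum(w**2 * np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1))

# Match a single lognormal to these two moments.
s2 = np.log(1 + var / mean**2)
m = np.log(mean) - s2 / 2

# Compare a tail probability P(S > t) with Monte Carlo.
t = 1.6
samples = (w * rng.lognormal(mu, sigma, size=(200_000, 3))).sum(axis=1)
p_approx = lognorm.sf(t, s=np.sqrt(s2), scale=np.exp(m))
p_mc = (samples > t).mean()
print(f"moment-matched tail {p_approx:.4f} vs Monte Carlo {p_mc:.4f}")
```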

  2. Adaptive Detector Arrays for Optical Communications Receivers

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V.; Srinivasan, M.

    2000-01-01

    The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver with a hard decision on the selection of the signal detector elements is also described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers when operating in high-background environments.

  3. Moisture Absorption in Certain Tropical American Woods

    DTIC Science & Technology

    1949-08-01

    surface area was in unobstructed contact with the salt water. Similar wire mesh racks were weighted and placed on top of the specimens to keep them...Oak (Quercus alba)" Br. Guiana Honduras United States (control) II II Total absorption by 2 x 2 x 6-inch uncoated specimens. Probably sapwood ...only. /2 ~~ Probably sapwood Table 3 (Continued) Species Source Increase over 40 percent Fiddlewood (Vit ex Gaumeri) Roble Blanco (Tabebuia

  4. Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise

    DTIC Science & Technology

    2011-04-01

    Laguerre polynomials are generated from a recurrence relation, and the nodes and weights are calculated from the eigenvalues and eigenvectors of a...B.P. Flannery, Numerical Recipes in Fortran, Second Edition, Cambridge University Press (1992). 12. W. Gautschi, Orthogonal Polynomials (in Matlab...the integration, with the nodes and weights calculated using matrix methods, so that a general purpose numerical integration routine is not required
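
    The truncated abstract alludes to computing quadrature nodes and weights from the eigenvalues and eigenvectors of a matrix, presumably the Jacobi matrix of the Laguerre recurrence (the Golub-Welsch construction). A minimal sketch of that standard construction, with a simple correctness check, is given below.

```python
import numpy as np

def gauss_laguerre(n):
    """Nodes and weights for integrals of the form int_0^inf exp(-x) f(x) dx."""
    k = np.arange(n)
    diag = 2 * k + 1                     # recurrence coefficients of monic Laguerre polynomials
    off = k[1:]                          # off-diagonal entries sqrt(beta_k) = k
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vectors = np.linalg.eigh(J)   # eigenvalues are the quadrature nodes
    weights = vectors[0, :] ** 2         # squared first components (total weight = 1)
    return nodes, weights

x, w = gauss_laguerre(16)
# Check: int_0^inf exp(-x) x^3 dx = 3! = 6, reproduced exactly by the rule.
print(np.sum(w * x**3))
```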

  5. Capital Investment Motivational Techniques Used by Prime Contractors on Subcontractors

    DTIC Science & Technology

    1984-12-01

    by block number) Productivity; Profit Policy; Subcontractors; Weighted Guidelines; Profitability; Profit 20. ABSTRACT (Continue on reverse aide If...probably productivity gains that could be made if defense contractors increased their investment [6:39]. A major deterrent to the Weighted Guideline...any profit gained would be offset to some degree by a profit loss from a reduction in profit based on costs . This result is a consequence of the cost

  6. Exposure to traffic-related air pollution during pregnancy and term low birth weight: estimation of causal associations in a semiparametric model.

    PubMed

    Padula, Amy M; Mortimer, Kathleen; Hubbard, Alan; Lurmann, Frederick; Jerrett, Michael; Tager, Ira B

    2012-11-01

    Traffic-related air pollution is recognized as an important contributor to health problems. Epidemiologic analyses suggest that prenatal exposure to traffic-related air pollutants may be associated with adverse birth outcomes; however, there is insufficient evidence to conclude that the relation is causal. The Study of Air Pollution, Genetics and Early Life Events comprises all births to women living in 4 counties in California's San Joaquin Valley during the years 2000-2006. The probability of low birth weight among full-term infants in the population was estimated using machine learning and targeted maximum likelihood estimation for each quartile of traffic exposure during pregnancy. If everyone lived near high-volume freeways (approximated as the fourth quartile of traffic density), the estimated probability of term low birth weight would be 2.27% (95% confidence interval: 2.16, 2.38) as compared with 2.02% (95% confidence interval: 1.90, 2.12) if everyone lived near smaller local roads (first quartile of traffic density). Assessment of potentially causal associations, in the absence of arbitrary model assumptions applied to the data, should result in relatively unbiased estimates. The current results support findings from previous studies that prenatal exposure to traffic-related air pollution may adversely affect birth weight among full-term infants.

  7. The Impact of WIC on Birth Outcomes: New Evidence from South Carolina.

    PubMed

    Sonchak, Lyudmyla

    2016-07-01

    Objectives To investigate the impact of the Special Supplemental Nutrition Program for Women, Infants and Children (WIC) on a variety of infant health outcomes using recent South Carolina Vital Statistics data (2004-2012). Methods To account for non-random WIC participation, the study relies on maternal fixed effects estimation, made possible by the availability of unique maternally linked data. Results The results indicate that WIC participation is associated with an increase in birth weight and length of gestation, and with a decrease in the probability of low birth weight, prematurity, and Neonatal Intensive Care Unit admission. Additionally, addressing gestational bias by accounting for the length of gestation, WIC participation is associated with a decrease in the probability of delivering a low-weight infant and a small-for-gestational-age infant among black mothers. Conclusions for Practice Accounting for non-random program participation, the study documents a large improvement in birth outcomes among infants of WIC-participating mothers. Even in the context of a somewhat restrictive gestation-adjusted specification, the positive impact of WIC remains within the subsample of black mothers.

  8. Robust mislabel logistic regression without modeling mislabel probabilities.

    PubMed

    Hung, Hung; Jou, Zhi-Yu; Huang, Su-Yun

    2018-03-01

    Logistic regression is among the most widely used statistical methods for linear discriminant analysis. In many applications, we only observe possibly mislabeled responses. Fitting a conventional logistic regression can then lead to biased estimation. One common resolution is to fit a mislabel logistic regression model, which takes mislabeled responses into account. Another common method is to adopt robust M-estimation that down-weights suspected instances. In this work, we propose a new robust mislabel logistic regression based on γ-divergence. Our proposal possesses two advantageous features: (1) It does not need to model the mislabel probabilities. (2) The minimum γ-divergence estimation leads to a weighted estimating equation without the need to include any bias correction term, that is, it is automatically bias-corrected. These features make the proposed γ-logistic regression more robust in model fitting and more intuitive for model interpretation through a simple weighting scheme. Our method is also easy to implement, and two types of algorithms are included. Simulation studies and the Pima data application are presented to demonstrate the performance of γ-logistic regression. © 2017, The International Biometric Society.
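
    The following sketch is only a rough illustration of the weighting idea, not the paper's exact γ-divergence estimating equation: observations whose observed labels the current model considers improbable receive weights P(y_i | x_i)^γ, and a weighted logistic regression is refit until the weights stabilize. The data, the fixed-point iteration, and the choice γ = 0.5 are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p)
flip = rng.random(n) < 0.05               # 5% of responses deliberately mislabeled
y = np.where(flip, 1 - y, y)

gamma = 0.5                               # illustrative robustness parameter
w = np.ones(n)
for _ in range(20):                       # simple fixed-point iteration
    model = LogisticRegression().fit(X, y, sample_weight=w)
    prob_obs = model.predict_proba(X)[np.arange(n), y]
    w = prob_obs ** gamma                 # improbable (suspect) labels get small weights

print("coefficients:", model.coef_.round(2))
```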

  9. Functional weight-bearing mobilization after Achilles tendon rupture enhances early healing response: a single-blinded randomized controlled trial.

    PubMed

    Valkering, Kars P; Aufwerber, Susanna; Ranuccio, Francesco; Lunini, Enricomaria; Edman, Gunnar; Ackermann, Paul W

    2017-06-01

    Functional weight-bearing mobilization may improve repair of Achilles tendon rupture (ATR), but the underlying mechanisms and outcome were unknown. We hypothesized that functional weight-bearing mobilization by means of increased metabolism could improve both early and long-term healing. In this prospective randomized controlled trial, patients with acute ATR were randomized to either direct post-operative functional weight-bearing mobilization (n = 27) in an orthosis or to non-weight-bearing (n = 29) plaster cast immobilization. During the first two post-operative weeks, 15°-30° of plantar flexion was allowed and encouraged in the functional weight-bearing mobilization group. At 2 weeks, patients in the non-weight-bearing cast immobilization group received a stiff orthosis, while the functional weight-bearing mobilization group continued with increased range of motion. At 6 weeks, all patients discontinued immobilization. At 2 weeks, healing metabolites and markers of procollagen type I (PINP) and III (PIIINP) were examined using microdialysis. At 6 and 12 months, functional outcome was assessed using the heel-rise test. Healing tendons of both groups exhibited increased levels of the metabolites glutamate, lactate, and pyruvate, and of PIIINP (all p < 0.05). Patients in the functional weight-bearing mobilization group demonstrated significantly higher concentrations of glutamate compared to the non-weight-bearing cast immobilization group (p = 0.045). The upregulated glutamate levels were significantly correlated with the concentrations of PINP (r = 0.5, p = 0.002) as well as with improved functional outcome at 6 months (r = 0.4; p = 0.014). Heel-rise tests at 6 and 12 months did not display any differences between the two groups. Functional weight-bearing mobilization enhanced the early healing response of ATR. In addition, early ankle range of motion was improved without the risk of Achilles tendon elongation and without altering long-term functional outcome. The relationship between functional weight-bearing mobilization-induced upregulation of glutamate and enhanced healing suggests novel opportunities to optimize post-operative rehabilitation.

  10. Prospect evaluation as a function of numeracy and probability denominator.

    PubMed

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
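
    As a rough illustration of the cumulative-prospect-theory machinery such analyses rely on, the sketch below prices a simple gain prospect with the Tversky-Kahneman (1992) one-parameter weighting function and a power value function. The parameter values are commonly cited illustrative values, not estimates from this study.

```python
import numpy as np

def w_tk(p, gamma=0.61):
    """Tversky-Kahneman (1992) weighting: overweights small p, underweights large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def value(x, alpha=0.88):
    """Power value function for gains."""
    return x**alpha

# Subjective worth of "win 100 with probability 0.05" versus its unweighted counterpart.
p, x = 0.05, 100.0
print(f"weighted {w_tk(p) * value(x):.2f} vs unweighted {p * value(x):.2f}")
```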

  11. [Renal effects and metabolism of sevoflurane in Fischer 344 rats: an in-vivo and in-vitro comparison with methoxyflurane].

    PubMed

    Cook, T L; Beppu, W J; Hitt, B A; Kosek, J C; Mazze, R I

    1975-07-01

    Sevoflurane, 1.4 per cent (MAC), was administered to groups of Fischer 344 rats for 10 hours, 4 hours, or 1 hour; additional rats received 0.5 per cent methoxyflurane for 3 hours or 1 hour. Urinary inorganic fluoride excretion after sevoflurane in vivo was one-third to one-fourth of that after methoxyflurane. However, in hepatic microsomes, sevoflurane and methoxyflurane were defluorinated in vitro at essentially the same rate. The discrepancy between defluorination of sevoflurane and methoxyflurane in vivo and in vitro was probably due to differences in tissue solubility between the drugs. There were no renal functional or morphologic defects following sevoflurane administration. An unexplained adverse effect was significant weight loss, which occurred following all exposures to sevoflurane.

  12. The propagation of Lamb waves in multilayered plates: phase-velocity measurement

    NASA Astrophysics Data System (ADS)

    Grondel, Sébastien; Assaad, Jamal; Delebarre, Christophe; Blanquet, Pierrick; Moulin, Emmanuel

    1999-05-01

    Owing to the dispersive nature and complexity of the Lamb waves generated in a composite plate, measuring the phase velocities with classical methods is complicated. This paper describes a measurement method based upon the spectrum-analysis technique, which allows one to overcome these problems. The technique consists of using the fast Fourier transform to compute the spatial power-density spectrum. Additionally, weighting functions are used to increase the probability of detecting the various propagation modes. Experimental Lamb-wave dispersion curves of multilayered plates are successfully compared with the analytical ones. This technique is expected to be a useful way to design composite parts integrating ultrasonic transducers in the field of health monitoring. Indeed, Lamb waves and particularly their velocities are very sensitive to defects.
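
    The general idea of extracting phase velocities from a spatial-temporal power-density spectrum can be sketched as follows. This is a generic frequency-wavenumber illustration on a synthetic single-mode wave, not the paper's processing chain, and a Hann window stands in for the weighting functions mentioned above.

```python
import numpy as np

fs, dx = 1e6, 1e-3                        # sampling rate (Hz) and sensor spacing (m)
t = np.arange(0, 1e-3, 1 / fs)            # 1 ms records
xpos = np.arange(64) * dx                 # 64 measurement positions
f0, c = 200e3, 3000.0                     # synthetic single mode: 200 kHz at 3000 m/s
k0 = 2 * np.pi * f0 / c

sig = np.cos(2 * np.pi * f0 * t[None, :] - k0 * xpos[:, None])
sig *= np.hanning(t.size)[None, :]        # weighting (window) function to reduce leakage

spec = np.abs(np.fft.rfft2(sig)) ** 2     # spatial-temporal power-density spectrum
k_axis = 2 * np.pi * np.fft.fftfreq(xpos.size, d=dx)
f_axis = np.fft.rfftfreq(t.size, d=1 / fs)

ik, jf = np.unravel_index(np.argmax(spec), spec.shape)
c_est = 2 * np.pi * f_axis[jf] / abs(k_axis[ik])
print(f"spectral peak near {f_axis[jf] / 1e3:.0f} kHz, estimated phase velocity ~{c_est:.0f} m/s")
```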

  13. Implicit particle filtering for equations with partial noise and application to geomagnetic data assimilation

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Atkins, E.; Chorin, A. J.

    2011-12-01

    The task in data assimilation is to identify the state of a system from an uncertain model supplemented by a stream of incomplete and noisy data. The model is typically given in the form of a discretization of an Ito stochastic differential equation (SDE), x(n+1) = R(x(n)) + G W(n), where x is an m-dimensional vector and n = 0, 1, 2, .... The m-dimensional vector function R and the m x m matrix G depend on the SDE as well as on the discretization scheme, and W is an m-dimensional vector whose elements are independent standard normal variates. The data are y(n) = h(x(n)) + Q V(n), where h is a k-dimensional vector function, Q is a k x k matrix and V is a vector whose components are independent standard normal variates. One can use statistics of the conditional probability density (pdf) of the state given the observations, p(n+1) = p(x(n+1)|y(1), ..., y(n+1)), to identify the state x(n+1). Particle filters approximate p(n+1) by sequential Monte Carlo and rely on the recursive formulation of the target pdf, p(n+1) ∝ p(x(n+1)|x(n)) p(y(n+1)|x(n+1)). The pdf p(x(n+1)|x(n)) can be read off the model equations to be a Gaussian with mean R(x(n)) and covariance matrix Σ = GG^T, where T denotes the transpose; the pdf p(y(n+1)|x(n+1)) is a Gaussian with mean h(x(n+1)) and covariance QQ^T. In a sampling-importance-resampling (SIR) filter one samples new values for the particles from a prior pdf and then one weights these samples with weights determined by the observations, to yield an approximation to p(n+1). Such weighting schemes often yield small weights for many of the particles. Implicit particle filtering overcomes this problem by using the observations to generate the particles, thus focusing attention on regions of large probability. A suitable algebraic equation that depends on the model and the observations is constructed for each particle, and its solution yields high probability samples of p(n+1). In the current formulation of the implicit particle filter, the state covariance matrix Σ is assumed to be non-singular. In the present work we consider the case where the covariance Σ is singular. This happens in particular when the noise is spatially smooth and can be represented by a small number of Fourier coefficients, as is often the case in geophysical applications. We derive an implicit filter for this problem and show that it is very efficient, because the filter operates in a space whose dimension is the rank of Σ, rather than the full model dimension. We compare the implicit filter to SIR, to the Ensemble Kalman Filter and to variational methods, and also study how information from data is propagated from observed to unobserved variables. We illustrate the theory on two coupled nonlinear PDEs in one space dimension that have been used as a test-bed for geomagnetic data assimilation. We observe that the implicit filter gives good results with few (2-10) particles, while SIR requires thousands of particles for similar accuracy. We also find lower limits to the accuracy of the filter's reconstruction as a function of data availability.
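
    For readers unfamiliar with the SIR baseline referred to above, a minimal sketch for a scalar toy model follows. This is the plain sampling-importance-resampling comparison method, not the implicit particle filter, and the model is purely illustrative: particles are sampled from the model (the prior), weighted by the observation likelihood, and resampled.

```python
import numpy as np

rng = np.random.default_rng(3)
R = lambda x: 0.9 * x + 0.5 * np.sin(x)      # toy scalar model drift
g, q, steps, n_particles = 0.3, 0.5, 50, 500

# Simulate a "true" trajectory and noisy observations y(n) = x(n) + q V(n).
x_true, truth, obs = 0.0, [], []
for _ in range(steps):
    x_true = R(x_true) + g * rng.normal()
    truth.append(x_true)
    obs.append(x_true + q * rng.normal())

particles = rng.normal(size=n_particles)
estimates = []
for y in obs:
    particles = R(particles) + g * rng.normal(size=n_particles)   # sample from the prior
    logw = -0.5 * ((y - particles) / q) ** 2                      # observation log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()                                                  # normalized importance weights
    estimates.append(np.sum(w * particles))                       # posterior-mean estimate
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]  # resample

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truth)) ** 2))
print(f"RMSE of the SIR posterior-mean estimate: {rmse:.3f}")
```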

  14. Velocity-space sensitivity of the time-of-flight neutron spectrometer at JET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobsen, A. S., E-mail: Ajsen@fysik.dtu.dk; Salewski, M.; Korsholm, S. B.

    2014-11-15

    The velocity-space sensitivities of fast-ion diagnostics are often described by so-called weight functions. Recently, we formulated weight functions showing the velocity-space sensitivity of the often dominant beam-target part of neutron energy spectra. These weight functions for neutron emission spectrometry (NES) are independent of the particular NES diagnostic. Here we apply these NES weight functions to the time-of-flight spectrometer TOFOR at JET. By taking the instrumental response function of TOFOR into account, we calculate time-of-flight NES weight functions that enable us to directly determine the velocity-space sensitivity of a given part of a measured time-of-flight spectrum from TOFOR.

  15. Approach to testing growth hormone (GH) secretion in obese subjects.

    PubMed

    Popovic, Vera

    2013-05-01

    Identification of adults with GH deficiency (GHD) is challenging because clinical features of adult GHD are not distinctive and because clinical suspicion must be confirmed by biochemical tests. Adults are selected for testing for adult GHD if they have a high pretest probability of GHD, i.e., if they have hypothalamic-pituitary disease, if they have received cranial irradiation or central nervous system tumor treatment, or if they survived traumatic brain injury or subarachnoid hemorrhage. Testing should be carried out only if it has already been decided that deficiency, if found, will be treated. There are many pharmacological GH stimulation tests for the diagnosis of GHD; however, none fulfills the requirements for an ideal test: high discriminatory power; reproducibility, safety, convenience, and economy; and independence from confounding factors such as age, gender, nutritional status, and, in particular, obesity. In obesity, GH secretion is reduced, GH clearance is enhanced, and stimulated GH secretion is reduced, causing a false-positive result. This functional hyposomatotropism in obesity is fully reversed by weight loss. In conclusion, GH stimulation tests should be avoided in obese subjects with very low pretest probability.

  16. Neural correlates of informational cascades: brain mechanisms of social influence on belief updating

    PubMed Central

    Klucharev, Vasily; Rieskamp, Jörg

    2015-01-01

    Informational cascades can occur when rationally acting individuals decide independently of their private information and follow the decisions of preceding decision-makers. In the process of updating beliefs, differences in the weighting of private and publicly available social information may modulate the probability that a cascade starts in a decisive way. By using functional magnetic resonance imaging, we examined neural activity while participants updated their beliefs based on the decisions of two fictitious stock market traders and their own private information, which led to a final decision of buying one of two stocks. Computational modeling of the behavioral data showed that a majority of participants overweighted private information. Overweighting was negatively correlated with the probability of starting an informational cascade in trials especially prone to conformity. Belief updating by private information was related to activity in the inferior frontal gyrus/anterior insula, the dorsolateral prefrontal cortex and the parietal cortex; the more a participant overweighted private information, the higher the activity in the inferior frontal gyrus/anterior insula and the lower in the parietal-temporal cortex. This study explores the neural correlates of overweighting of private information, which underlies the tendency to start an informational cascade. PMID:24974396
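
    The following toy simulation is not the authors' computational model; it merely illustrates, under textbook informational-cascade assumptions, the behavioral point made above: a larger weight on private information makes it less likely that a cascade starts within a fixed sequence of decision-makers. The signal accuracy, the weight values, and the cascade criterion are all assumptions made for illustration.

```python
import numpy as np

def cascade_started(q=0.7, delta=1.0, n_agents=10, rng=None):
    """True if some agent decides against its own private signal (a cascade begins)."""
    if rng is None:
        rng = np.random.default_rng()
    llr = np.log(q / (1 - q))                    # log-likelihood ratio of one private signal
    public = 0.0                                 # accumulated public evidence
    for _ in range(n_agents):
        private = rng.choice([1, -1], p=[q, 1 - q])
        total = public + delta * private * llr   # delta > 1: private signal overweighted
        choice = 1 if total > 0 else -1
        if choice != private:                    # decision contradicts the private signal
            return True
        public += choice * llr                   # later agents observe only the choice
    return False

rng = np.random.default_rng(4)
for d in (1.0, 2.5, 4.0):
    prob = np.mean([cascade_started(delta=d, rng=rng) for _ in range(2000)])
    print(f"private-information weight {d}: cascade starts with probability {prob:.2f}")
```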

  17. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. For the global features, information entropy is first employed to locate the occluded region. Second, Principal Component Analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy rebuilds the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, and the Pyramid Weber Local Descriptor (PWLD) feature is then extracted. Finally, the outputs of the SVM are fitted to the probabilities of the target class using a sigmoid function. For the local features, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion. Finally, fusion at the decision level combines the global and local features based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
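
    One concrete step in the pipeline above is mapping raw SVM scores to class probabilities with a sigmoid. A small Platt-style sketch on synthetic data is shown below; the dataset and the use of a held-out calibration split are illustrative assumptions, and scikit-learn's CalibratedClassifierCV offers an equivalent built-in route.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

svm = LinearSVC().fit(X_tr, y_tr)                       # produces raw decision scores only
scores_cal = svm.decision_function(X_cal).reshape(-1, 1)

# Fit a sigmoid mapping from score to P(y = 1 | score) on held-out data.
sigmoid = LogisticRegression().fit(scores_cal, y_cal)
probs = sigmoid.predict_proba(scores_cal)[:, 1]
print("calibrated probabilities for five samples:", probs[:5].round(3))
```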

  18. Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex.

    PubMed

    Steyn-Ross, Moira L; Steyn-Ross, D A; Sleigh, J W; Wilson, M T; Wilcocks, Lara C

    2005-12-01

    Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer's dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.

  19. Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex

    NASA Astrophysics Data System (ADS)

    Steyn-Ross, Moira L.; Steyn-Ross, D. A.; Sleigh, J. W.; Wilson, M. T.; Wilcocks, Lara C.

    2005-12-01

    Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer’s dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.

  20. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
