Sample records for Wald sequential probability

  1. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
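The Wald SPRT machinery that recurs throughout these records works the same way in every application: choose tolerated false-alarm and missed-detection rates, accumulate the log-likelihood ratio of the observations, and stop as soon as it crosses one of two thresholds. A minimal sketch in Python (using a hypothetical Gaussian mean-shift example, not the collision-probability likelihood ratio derived in the paper):

```python
import math

def sprt(samples, llr, alpha=0.01, beta=0.01):
    """Wald SPRT: accumulate log-likelihood ratios until a threshold is crossed.

    alpha: tolerated false-alarm rate; beta: tolerated missed-detection rate.
    llr(x) returns log[ p(x | H1) / p(x | H0) ] for one observation.
    """
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    s, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        s += llr(x)
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n

# Hypothetical example: H0: N(0,1) vs H1: N(1,1), for which llr(x) = x - 0.5.
# A deterministic stream of 1.0s crosses the upper threshold after 10 samples.
decision, n = sprt([1.0] * 20, llr=lambda x: x - 0.5)
print(decision, n)  # H1 10
```

Note the characteristic Wald property: the sample size is not fixed in advance, and the thresholds depend only on the two error tolerances, not on the distributions themselves.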

  2. Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test

    NASA Technical Reports Server (NTRS)

    Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara

    2016-01-01

The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. By using a modified version of Wald's sequential probability ratio test we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.

  3. Computerized Classification Testing with the Rasch Model

    ERIC Educational Resources Information Center

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…

  4. Inverse sequential detection of parameter changes in developing time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1992-01-01

Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability which always falls between 0.5 and 0 is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.

  5. Statistical characteristics of the sequential detection of signals in correlated noise

    NASA Astrophysics Data System (ADS)

    Averochkin, V. A.; Baranov, P. E.

    1985-10-01

A solution is given to the problem of determining the distribution of the duration of the sequential two-threshold Wald rule for the time-discrete detection of determinate and Gaussian correlated signals on a background of Gaussian correlated noise. Expressions are obtained for the joint probability densities of the likelihood ratio logarithms, and an analysis is made of the effect of correlation and SNR on the duration distribution and the detection efficiency. Comparison is made with Neyman-Pearson detection.

  6. Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment

    NASA Technical Reports Server (NTRS)

Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.

  7. A Collision Avoidance Strategy for a Potential Natural Satellite around the Asteroid Bennu for the OSIRIS-REx Mission

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Carpenter, J. Russell

    2016-01-01

The cadence of proximity operations for the OSIRIS-REx mission may face an additional challenge: the potential detection of a natural satellite orbiting the asteroid Bennu. Current ground-based radar observations of Bennu have found no objects within the bounds of specific sizes and rotation rates. If a natural satellite is detected during approach, a different proximity-operations cadence will need to be implemented, along with a collision avoidance strategy, for mission success. A collision avoidance strategy will be analyzed using the Wald Sequential Probability Ratio Test.

  9. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval, and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
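The abstract's point is that a Wald interval for a single proportion is pocket-calculator simple. The sketch below shows the standard Wald interval, optionally widened by the usual 1/(2n) continuity correction; it illustrates that general recipe, not the authors' exact modification:

```python
import math

def wald_interval(p_hat, n, z=1.96, continuity=False):
    """Wald confidence interval for a single proportion p_hat from n observations.

    With continuity=True, each side is widened by 1/(2n) (continuity correction).
    The interval is clamped to [0, 1].
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    cc = 0.5 / n if continuity else 0.0
    lower = max(0.0, p_hat - z * se - cc)
    upper = min(1.0, p_hat + z * se + cc)
    return lower, upper

# Treating a hypothetical estimated AUC of 0.85 from n = 40 patients as a proportion:
print(wald_interval(0.85, 40))                   # plain Wald interval
print(wald_interval(0.85, 40, continuity=True))  # with continuity correction
```

The clamping matters precisely in the regime the abstract flags as difficult: large AUC values with small samples, where the unclamped upper bound exceeds 1.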

  10. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity correction has a coverage probability comparable to that of the LT interval, and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.

  11. NASA DOE POD NDE Capabilities Data Book

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate inspection systems, personnel, and protocols demonstrating 0.90 POD with 95% confidence at critical flaw sizes, a90/95. The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.

12. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    PubMed

    Burkness, Eric C; Hutchison, W D

    2009-10-01

Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
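For presence/absence sampling, Wald's binomial SPRT reduces to two parallel stop lines in the (sample count, infested count) plane. A sketch using the boundary values the abstract reports as most desirable (lower boundary p0 = 0.05, upper boundary p1 = 0.15, alpha = beta = 0.1); the closed-form slope and intercepts follow Wald's standard construction:

```python
import math

def binomial_sprt_lines(p0, p1, alpha, beta):
    """Stop lines for Wald's binomial SPRT (presence/absence sampling).

    After n samples with d positives: treat if d >= slope*n + h1,
    stop sampling (no treat) if d <= slope*n + h0, otherwise keep sampling.
    """
    k = math.log((p1 * (1 - p0)) / (p0 * (1 - p1)))   # log odds ratio
    slope = math.log((1 - p0) / (1 - p1)) / k
    h1 = math.log((1 - beta) / alpha) / k              # upper intercept
    h0 = math.log(beta / (1 - alpha)) / k              # lower intercept
    return slope, h0, h1

# Parameters reported as most desirable in the abstract:
slope, h0, h1 = binomial_sprt_lines(p0=0.05, p1=0.15, alpha=0.1, beta=0.1)

def decide(d, n):
    """Sequential decision after inspecting n plants, of which d are infested."""
    if d >= slope * n + h1:
        return "treat"
    if d <= slope * n + h0:
        return "no treat"
    return "continue"
```

The slope lands between p0 and p1, so a field infested near the action threshold drifts between the lines and sampling continues, which is why the average sample number peaks near the threshold.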

  13. Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2012-01-01

    We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. 
Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.

  14. Fault detection on a sewer network by a combination of a Kalman filter and a binary sequential probability ratio test

    NASA Astrophysics Data System (ADS)

    Piatyszek, E.; Voignier, P.; Graillot, D.

    2000-05-01

One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rain events. To meet these goals, managers have to equip the sewer networks and set up real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or a sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rain events, it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rain events. The method consists of comparing the sensor response with a forecast of this response. This forecast is provided by a model, more precisely by a state estimator: a Kalman filter. The Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with Wald's binary sequential probability ratio test. Moreover, by crossing available information from several nodes of the network, a diagnosis of the detected anomalies is carried out. The method provided encouraging results in the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
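The scheme described here couples a Kalman filter's innovation sequence to a binary SPRT: under H0 the innovations are zero-mean with the filter's predicted variance, while under H1 they carry a persistent bias. A simplified scalar sketch (a hypothetical random-walk flow model with assumed noise variances, not the Seine-Saint-Denis network model):

```python
import math

def kalman_innovation(z, x, P, q, r):
    """One scalar Kalman step for a random-walk state. Returns the innovation
    nu, its predicted variance S, and the updated state and covariance."""
    P = P + q            # predict covariance (random-walk process noise q)
    nu = z - x           # innovation: measurement minus prediction
    S = P + r            # innovation variance (r = measurement noise)
    K = P / S            # Kalman gain
    return nu, S, x + K * nu, (1 - K) * P

def innovation_sprt(measurements, bias, alpha=0.01, beta=0.01,
                    x0=0.0, P0=1.0, q=0.01, r=0.25):
    """Wald SPRT on Kalman innovations: H0 zero-mean vs H1 mean-shift `bias`."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    x, P, s, n = x0, P0, 0.0, 0
    for n, z in enumerate(measurements, start=1):
        nu, S, x, P = kalman_innovation(z, x, P, q, r)
        # log N(nu; bias, S) - log N(nu; 0, S) simplifies to:
        s += (bias * nu - 0.5 * bias ** 2) / S
        if s >= upper:
            return "fault", n
        if s <= lower:
            return "no fault", n
    return "undecided", n
```

A ramping measurement the random-walk model cannot track keeps the innovations biased, so the statistic climbs to the fault threshold; measurements that match the prediction drive it to the no-fault threshold instead.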

  15. [Computer diagnosis of traumatic impact by hepatic lesion].

    PubMed

    Kimbar, V I; Sevankeev, V V

    2007-01-01

A method of computer-assisted diagnosis of traumatic impact from hepatic lesions (the HEPAR-test program) is described. The program is based on diagnostic coefficients calculated using Bayes' probability method with Wald's recognition procedure.

  16. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    DTIC Science & Technology

    2015-10-18

statistical assessment of geosynchronous satellite status based on non-resolved photometry data," AMOS Technical Conference, 2014 2. Wald, A...the seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through its life cycle using the photometry data...work on the decision to move the time slider [1], which is required in order to update the baseline signature (brightness) data for a satellite

  17. Testing the non-unity of rate ratio under inverse sampling.

    PubMed

    Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing

    2007-08-01

Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuation of sizes around the pre-chosen nominal level is allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

  18. Cost-effective binomial sequential sampling of western bean cutworm, Striacosta albicosta (Lepidoptera: Noctuidae), egg masses in corn.

    PubMed

    Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D

    2011-12-01

    Striacosta albicosta (Smith) (Lepidoptera: Noctuidae), is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants, including 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.

  19. The probability of being identified as an outlier with commonly used funnel plot control limits for the standardised mortality ratio.

    PubMed

    Seaton, Sarah E; Manktelow, Bradley N

    2012-07-16

    Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
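The paper's core observation can be reproduced with a short calculation: for an 'in-control' provider, observed deaths are Poisson with mean equal to the expected count E, while the upper Wald 95% limit on the SMR scale is 1 + 1.96·sqrt(1/E), so the true exceedance probability differs from the nominal 0.025. An illustrative reconstruction (not the paper's exact per-provider computation):

```python
import math

def poisson_sf(k, mu):
    """P(X > k) for X ~ Poisson(mu), by summing the lower tail (stdlib only)."""
    term, cdf = math.exp(-mu), 0.0
    for i in range(k + 1):
        cdf += term
        term *= mu / (i + 1)
    return 1.0 - cdf

def prob_above_wald_limit(expected, z=1.96):
    """True probability that an in-control provider (true SMR = 1) falls above
    the upper Wald control limit, approximating Var(SMR) by 1/expected."""
    upper_smr = 1.0 + z * math.sqrt(1.0 / expected)
    threshold = math.floor(upper_smr * expected)  # observed count must exceed this
    return poisson_sf(threshold, expected)

for e in (10, 25, 50, 100):
    print(e, round(prob_above_wald_limit(e), 4))
```

For small expected counts the discreteness of the Poisson distribution pushes this probability well away from the nominal 0.025, which is the paper's point about Wald-based funnel plot limits.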

  20. Risk of cutaneous adverse events with febuxostat treatment in patients with skin reaction to allopurinol. A retrospective, hospital-based study of 101 patients with consecutive allopurinol and febuxostat treatment.

    PubMed

    Bardin, Thomas; Chalès, Gérard; Pascart, Tristan; Flipo, René-Marc; Korng Ea, Hang; Roujeau, Jean-Claude; Delayen, Aurélie; Clerson, Pierre

    2016-05-01

    To investigate the cutaneous tolerance of febuxostat in gouty patients with skin intolerance to allopurinol. We identified all gouty patients who had sequentially received allopurinol and febuxostat in the rheumatology departments of 4 university hospitals in France and collected data from hospital files using a predefined protocol. Patients who had not visited the prescribing physician during at least 2 months after febuxostat prescription were excluded. The odds ratio (OR) for skin reaction to febuxostat in patients with a cutaneous reaction to allopurinol versus no reaction was calculated. For estimating the 95% confidence interval (95% CI), we used the usual Wald method and a bootstrap method. In total, 113 gouty patients had sequentially received allopurinol and febuxostat; 12 did not visit the prescribing physician after febuxostat prescription and were excluded. Among 101 patients (86 males, mean age 61±13.9 years), 2/22 (9.1%) with a history of cutaneous reactions to allopurinol showed skin reactions to febuxostat. Two of 79 patients (2.5%) without a skin reaction to allopurinol showed skin intolerance to febuxostat. The ORs were not statistically significant with the usual Wald method (3.85 [95% CI 0.51-29.04]) or bootstrap method (3.86 [95% CI 0.80-18.74]). The risk of skin reaction with febuxostat seems moderately increased in patients with a history of cutaneous adverse events with allopurinol. This moderate increase does not support the cross-reactivity of the two drugs. Copyright © 2015. Published by Elsevier SAS.

  1. A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis

    ERIC Educational Resources Information Center

    Cao, Mengyang; Tay, Louis; Liu, Yaowu

    2017-01-01

    This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…

  2. Psychosis of Alzheimer disease: prevalence, incidence, persistence, risk factors, and mortality.

    PubMed

    Vilalta-Franch, Joan; López-Pousa, Secundino; Calvó-Perxas, Laia; Garre-Olmo, Josep

    2013-11-01

    To establish the prevalence, incidence, persistence, risk factors, and mortality risk increase of psychosis of Alzheimer disease (PoAD) in a clinical sample. Cross-sectional, observational study of 491 patients with probable AD who, at baseline visit, were evaluated with the Cambridge Examination for Mental Disorders of the Elderly, the Neuropsychiatric Inventory-10, the Rapid Disability Rating Scale-2, and the Zarit Burden Interview. All participants were reevaluated at 6, 12, 18, and 24 months. PoAD diagnoses were made using specific criteria. PoAD prevalence was 7.3%, and the cumulative incidence at 6, 12, 18, and 24 months was 5.8%, 10.6%, 13.5%, and 15.1%, respectively. After 1 year, psychotic symptoms persisted in 68.7% of the patients with initial PoAD. At baseline, patients with PoAD scored lower in the Cambridge Cognitive Examination and Mini-Mental State Examination and higher in the Rapid Disability Rating Scale-2 and Zarit Burden Interview tests. Both low scores in the Cambridge Cognitive Examination subscale of learning memory (hazard ratio [HR] = 0.874; 95% CI: 0.788-0.969; Wald χ2 = 6.515; df = 1) and perception (HR = 0.743; 95% CI: 0.610-0.904; Wald χ2 = 8.778; df = 1), and high scores in expressive language (HR = 1.179; 95% CI: 1.024-1.358; Wald χ2 = 5.261; df = 1) and calculation skills (HR = 1.763; 95% CI: 1.067-2.913; Wald χ2 = 4.905; df = 1) were found to be associated with PoAD. PoAD leads to a faster functional impairment, and it increases mortality risk (HR = 2.191; 95% CI: 1.136-4.228; Wald χ2 = 5.471; df = 1) after controlling for age, gender, cognitive and functional disability, general health status, and antipsychotic treatment. PoAD seems to define a phenotype of AD of greater severity, with worsened functional progression and increased mortality risk. Copyright © 2013 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  3. The Langer-Improved Wald Test for DIF Testing with Multiple Groups: Evaluation and Comparison to Two-Group IRT

    ERIC Educational Resources Information Center

    Woods, Carol M.; Cai, Li; Wang, Mian

    2013-01-01

Differential item functioning (DIF) occurs when the probability of responding in a particular category to an item differs for members of different groups who are matched on the construct being measured. The identification of DIF is important for valid measurement. This research evaluates an improved version of Lord's χ²…

  4. Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data

    ERIC Educational Resources Information Center

    Bonett, Douglas G.; Price, Robert M.

    2012-01-01

    Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…

  5. Human Deception Detection from Whole Body Motion Analysis

    DTIC Science & Technology

    2015-12-01

9.3.2. Prediction Probability The output reports from SPSS detail the stepwise procedures for each series of analyses using Wald statistic values for... statistical significance in determining replication, but instead used a combination of significance and direction of means to determine partial or...and the independents need not be unbound. All data were analyzed utilizing the Statistical Package for Social Sciences (SPSS, v.19.0, Chicago, IL

  6. Bringing care to the people: Lillian Wald's legacy to public health nursing.

    PubMed Central

    Buhler-Wilkerson, K

    1993-01-01

Lillian Wald invented public health nursing in 1893, making this year the field's centennial. One of nursing's visionaries, Wald secured reforms in health, industry, education, recreation, and housing. This historical inquiry examines three of Wald's critical experiments, each of which illuminates the past of public health nursing and its contemporary dilemmas: invention of public health nursing itself, establishment of a nationwide system of insurance payments for home-based care, and creation of a national public health nursing service. PMID:7695663

  7. Lord's Wald Test for Detecting Dif in Multidimensional Irt Models: A Comparison of Two Estimation Approaches

    ERIC Educational Resources Information Center

    Lee, Soo; Suh, Youngsuk

    2018-01-01

    Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…

  8. Distributed Immune Systems for Wireless Network Information Assurance

    DTIC Science & Technology

    2010-04-26

ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the...using cumulative sum (CUSUM) and Girshick-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ...the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability

  9. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  10. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithms have been verified. PMID:27508502
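
    The n-step construction the abstract describes (normalize a transition matrix to be row-stochastic, then raise it to the power n) can be sketched in a few lines; the count matrix below is a made-up two-region example, not data from the paper:

```python
def normalize_rows(counts):
    """Turn a matrix of transition counts into row-stochastic probabilities."""
    return [[c / sum(row) for c in row] for row in counts]

def mat_mul(a, b):
    """Plain matrix product for small nested-list matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def n_step(p, n):
    """n-step transition probabilities: the matrix power P^n."""
    result = p
    for _ in range(n - 1):
        result = mat_mul(result, p)
    return result

# Hypothetical single-step transition counts between two anonymity regions
counts = [[9, 1],
          [5, 5]]
p1 = normalize_rows(counts)   # [[0.9, 0.1], [0.5, 0.5]]
p2 = n_step(p1, 2)            # two-step transition probabilities
```

    Each row of the result remains a probability distribution over target locations, which is what the paper's rough-prediction method reads off directly.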

  11. Probabilistic Relationships between Ground‐Motion Parameters and Modified Mercalli Intensity in California

    USGS Publications Warehouse

    Worden, C.B.; Wald, David J.; Rhoades, D.A.

    2012-01-01

    We use a database of approximately 200,000 modified Mercalli intensity (MMI) observations of California earthquakes collected from USGS "Did You Feel It?" (DYFI) reports, along with a comparable number of peak ground-motion amplitudes from California seismic networks, to develop probabilistic relationships between MMI and peak ground velocity (PGV), peak ground acceleration (PGA), and 0.3-s, 1-s, and 3-s 5% damped pseudospectral acceleration (PSA). After associating each ground-motion observation with an MMI computed from all the DYFI responses within 2 km of the observation, we derived a joint probability distribution between MMI and ground motion. We then derived reversible relationships between MMI and each ground-motion parameter by using a total least squares regression to fit a bilinear function to the median of the stacked probability distributions. Among the relationships, the fit to peak ground velocity has the smallest errors, though linear combinations of PGA and PGV give nominally better results. We also find that magnitude and distance terms reduce the overall residuals and are justifiable on an information theoretic basis. For intensities MMI≥5, our results are in close agreement with the relations of Wald, Quitoriano, Heaton, and Kanamori (1999); for lower intensities, our results fall midway between Wald, Quitoriano, Heaton, and Kanamori (1999) and those of Atkinson and Kaka (2007). The earthquakes in the study ranged in magnitude from 3.0 to 7.3, and the distances ranged from less than a kilometer to about 400 km from the source.

  12. Evaluating the Wald Test for Item-Level Comparison of Saturated and Reduced Models in Cognitive Diagnosis

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Lee, Young-Sun

    2013-01-01

    This article used the Wald test to evaluate the item-level fit of a saturated cognitive diagnosis model (CDM) relative to the fits of the reduced models it subsumes. A simulation study was carried out to examine the Type I error and power of the Wald test in the context of the G-DINA model. Results show that when the sample size is small and a…

  13. Pre-Deployment Stress, Mental Health, and Help-Seeking Behaviors Among Marines

    DTIC Science & Technology

    2014-01-01

    associations between two categorical variables, and Wald tests were conducted to compare mean scores on continuous variables across groups (e.g...Cluster-adjusted Wald tests were conducted to determine whether there were significant differences by rank on the average number of potentially...deployed to Iraq or Afghanistan in 2010 or 2011 of rank O6 or lower. a Omnibus Rao-Scott chi-square test or adjusted Wald test is statistically

  14. Aggressive behaviour among drug-using women from Cape Town, South Africa: ethnicity, heavy alcohol use, methamphetamine and intimate partner violence.

    PubMed

    Carney, Tara; Myers, Bronwyn; Kline, Tracy L; Johnson, Kim; Wechsberg, Wendee M

    2017-09-30

    Women have generally been found to be the victims of violence, but scant attention has been paid to the characteristics of women who perpetrate aggression and violence. In South Africa, violence is a prevalent societal issue, especially in the Western Cape. This study aimed at identifying factors that were associated with aggression among a sample of 720 substance-using women. We conducted multivariate logistic regression to identify factors that are significantly associated with these behaviours. Ethnicity (Wald χ²(2) = 17.07, p < 0.01) and heavy drinking (Wald χ²(2) = 6.60, p = 0.01) were significantly related to verbal aggression; methamphetamine use was significantly related to physical (Wald χ²(2) = 2.73, p = 0.01) and weapon aggression (Wald χ²(2) = 7.94, p < 0.01); and intimate partner violence was significantly related to verbal (Wald χ²(2) = 12.43, p < 0.01) and physical aggression (Wald χ²(2) = 25.92, p < 0.01). The findings show high levels of aggression among this sample, and highlight the need for interventions that address methamphetamine, heavy drinking and intimate partner violence among vulnerable substance-using women.
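
    Wald χ² statistics like those reported in this and several other records here are, for a single regression coefficient, simply the squared ratio of the estimate to its standard error, referred to a χ² distribution. A generic one-degree-of-freedom sketch (the studies above report 2-df versions for three-level predictors; the coefficient and standard error below are invented for illustration):

```python
import math

def wald_chi2_1df(beta_hat, se):
    """Wald chi-square statistic and p-value for H0: beta = 0 (1 df)."""
    w = (beta_hat / se) ** 2
    # For 1 df, the chi-square survival function reduces to
    # P(chi2 > w) = erfc(sqrt(w / 2))
    p = math.erfc(math.sqrt(w / 2))
    return w, p

# Hypothetical logistic-regression coefficient and standard error
w, p = wald_chi2_1df(beta_hat=0.5, se=0.25)  # w = 4.0
```

    For multi-degree-of-freedom tests such as the 2-df statistics above, the same quadratic form generalizes to the coefficient vector and its covariance matrix.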

  15. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second-order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3 sec, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3 sec, 0.2) process, in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level-crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
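
    A sequential test for a variance change of the kind this model applies to Kalman-filter residuals can be sketched as a Wald SPRT on zero-mean Gaussian residuals; this is a generic sketch, with the nominal and alternative variances and the residual sequence all illustrative rather than taken from the paper:

```python
import math

def sprt_variance(residuals, var0, var1, alpha, beta):
    """Wald SPRT for H0: residual variance = var0 vs H1: variance = var1."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        # log N(r; 0, var1) - log N(r; 0, var0) for zero-mean Gaussians
        llr += 0.5 * math.log(var0 / var1) + 0.5 * r * r * (1 / var0 - 1 / var1)
        if llr >= upper:
            return "variance increased", n
        if llr <= lower:
            return "nominal variance", n
    return "undecided", len(residuals)

# Residuals whose spread matches var1 = 4 rather than the nominal var0 = 1
decision, n = sprt_variance([2.0, -2.0, 2.0, -2.0, 2.0, -2.0],
                            var0=1.0, var1=4.0, alpha=0.05, beta=0.05)
```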

  16. Quantal Response: Estimation and Inference

    DTIC Science & Technology

    2014-09-01

    considered. The CI-based test is just another way of looking at the Wald test. A small-sample simulation illustrates aberrant behavior of the Wald/CI...asymptotic power computation (Eq. 36) exhibits this behavior but not to such an extent as the simulated small-sample power. Sample size is n = 11 and...as |m1−m0| increases, but the power of the Wald test actually decreases for large |m1−m0| and eventually π → α . This type of behavior was reported as

  17. The Cannabis Withdrawal Scale development: patterns and predictors of cannabis withdrawal and distress.

    PubMed

    Allsop, David J; Norberg, Melissa M; Copeland, Jan; Fu, Shanlin; Budney, Alan J

    2011-12-01

    Rates of treatment seeking for cannabis are increasing, and relapse is common. Management of cannabis withdrawal is an important intervention point. No psychometrically sound measure for cannabis withdrawal exists, and as a result treatment developments cannot be optimally targeted. The aim is to develop and test the psychometrics of the Cannabis Withdrawal Scale and use it to explore predictors of cannabis withdrawal. A volunteer sample of 49 dependent cannabis users provided daily scores on the Cannabis Withdrawal Scale during a baseline week and 2 weeks of abstinence. Internal reliability (Cronbach's alpha=0.91), test-retest stability (average intra-class correlation=0.95) and content validity analysis show that the Cannabis Withdrawal Scale has excellent psychometric properties. Nightmares and/or strange dreams was the most valid item (Wald χ²=105.6, P<0.0001), but caused relatively little associated distress (Wald χ²=25.11, P=0.03). Angry outbursts were considered intense (Wald χ²=73.69, P<0.0001) and caused much associated distress (Wald χ²=45.54, P<0.0001). Trouble getting to sleep was also an intense withdrawal symptom (Wald χ²=42.31, P<0.0001) and caused significant associated distress (Wald χ²=47.76, P<0.0001). Scores on the Severity of Dependence Scale predicted cannabis withdrawal. The Cannabis Withdrawal Scale can be used as a diagnostic instrument in clinical and research settings where regular monitoring of withdrawal symptoms is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. Memory and decision making: Effects of sequential presentation of probabilities and outcomes in risky prospects.

    PubMed

    Millroth, Philip; Guath, Mona; Juslin, Peter

    2018-06-07

    The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet, little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29% to 83%) higher certainty equivalents than participants with simultaneous presentation. This holds also for monetary-incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with a low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Mining of high utility-probability sequential patterns from uncertain databases

    PubMed Central

    Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting

    2017-01-01

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data are collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847

  20. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  1. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
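
    In the adaptive-classification setting the abstract describes, the log-likelihood ratio compares the response likelihood at two ability levels placed on either side of the classification bound. A minimal sketch under the Rasch model (the items, cut score, indifference region, and error rates below are illustrative assumptions, not Nydick's specification):

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, cut, delta, alpha, beta):
    """Classify an examinee above/below `cut` via Wald's SPRT.

    `responses` are 0/1 item scores; the likelihood is evaluated at the
    two points cut - delta and cut + delta bracketing the bound.
    """
    theta0, theta1 = cut - delta, cut + delta
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, (u, b) in enumerate(zip(responses, difficulties), start=1):
        p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
        llr += u * math.log(p1 / p0) + (1 - u) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "above", n
        if llr <= lower:
            return "below", n
    return "undecided", len(responses)

# Five correct answers on items of difficulty 0, cut score at theta = 0
decision, n = sprt_classify([1, 1, 1, 1, 1], [0.0] * 5,
                            cut=0.0, delta=0.5, alpha=0.1, beta=0.1)
```

    The width `delta` of the indifference region trades test length against classification sharpness, which is exactly the tuning question the CCT literature above studies.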

  2. Dying with dignity in America: the transformational leadership of Florence Wald.

    PubMed

    Adams, Cynthia

    2010-03-01

    The aims of this study are to examine the constructs of transformational leadership as they played out for one nurse who steered significant change in the care of the dying in the United States and to provide deeper insights into how nursing leaders can design and direct meaningful changes in the delivery of health care in turbulent times. A significant problem was identified in how the terminally ill were treated in this country post World War II. The introduction of hospice care in the United States represented a paradigm shift in how the health care community viewed and treated dying patients. Critical to this transformation was the work of Florence Wald, who organized with community leaders, clergy, and other health care providers to create a vision and synergy around palliative care. She was instrumental in opening the first American hospice in 1971 in Connecticut. Within 15 years, there were more than 1,000 hospices in the United States. A single case study design was chosen for this qualitative research grounded in the theory of transformational leadership (J.M. Burns, 1978). The study used narrative inquiry to conduct an in-depth exploration of Florence Wald's transformational leadership and the perceptions of the group of founders she organized to conceptualize, build, and open the first hospice in the United States. The participants chosen for interview were involved directly in the designing, planning, and beginning of the first American hospice. In addition to the seven in-depth interviews conducted in 2007 in Connecticut, this research examined three groups of documents from The Florence and Henry Wald Archives in the Yale University Library. The findings from both interviews and the Yale Archives showed that Florence Wald based her leadership on the strong values of reverence for life and social justice for all. 
To direct meaningful change, Florence Wald elevated the consciousness of her hospice team by conducting a 2-year research study on the needs of dying patients to ensure interventions were based on evidence. To encourage a high level of participation, Florence Wald demonstrated a caring component in her leadership with a strong commitment to mentoring. Wald worked to transform the quality of end-of-life care by assessing the readiness for change prior to acting and by working to provide supports for success. Finally, the findings showed that Florence Wald built consensus on vision before executing purposeful change by collaborating with the Founders and asking the hard questions to examine standards of care. Florence Wald provided transformational leadership in creating a value-driven culture of inquiry among the Founders where decision making was evidence-based, and she significantly improved the quality of palliative care in the United States. Nursing leaders who build upon shared values to provide direction and promote momentum critical to the change will have more success in reaching strategic outcomes of transformational efforts. Transformational nursing leaders who build consensus on vision before executing purposeful change by collaborating with a wide group of stakeholders will encourage a broader ownership of the change. When nursing leaders work to elevate the consciousness of their work groups to direct meaningful change by developing and sustaining value-driven cultures of inquiry, decisions will more directly align with evidence and support successful outcomes.

  3. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213

  4. A detailed description of the sequential probability ratio test for 2-IMU FDI

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.

  5. Comparison of the characteristics of long-term users of electronic cigarettes versus nicotine replacement therapy: A cross-sectional survey of English ex-smokers and current smokers.

    PubMed

    Nelson, Victoria A; Goniewicz, Maciej L; Beard, Emma; Brown, Jamie; Sheals, Kate; West, Robert; Shahab, Lion

    2015-08-01

    Electronic cigarettes (ECs) and nicotine replacement therapy (NRT) are non-combustible nicotine delivery devices being widely used as a partial or a complete long-term substitute for smoking. Little is known about the characteristics of long-term users, their smoking behaviour, attachment to smoking, experience of nicotine withdrawal symptoms, or their views on these devices. This study aimed to provide preliminary evidence on this and compare users of the different products. UK participants were recruited from four naturally occurring groups of long-term (≥6 months) users of either EC or NRT who had stopped or continued to smoke (N=36 per group, total N=144). Participants completed a questionnaire assessing socio-demographic and smoking characteristics, nicotine withdrawal symptoms, smoker identity and attitudes towards the products they were using. Adjusting for relevant confounders, EC use was associated with a stronger smoker identity (Wald χ²(1)=3.9, p=0.048) and greater product endorsement (Wald χ²(1)=4.6, p=0.024) than NRT use, irrespective of smoking status. Among ex-smokers, EC users reported less severe mood and physical symptoms (Wald χ²(1)=6.1, p=0.014) and cravings (Wald χ²(1)=8.5, p=0.003), higher perceived helpfulness of the product (Wald χ²(1)=4.8, p=0.028) and lower intentions to stop using the product (Wald χ²(1)=17.6, p<0.001) than NRT users. Compared with people who use NRT for at least 6 months, those who use EC over that time period appear to have a stronger smoker identity and like their products more. Among long-term users who have stopped smoking, ECs are perceived as more helpful than NRT, appear more effective in controlling withdrawal symptoms and continued use may be more likely. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  6. CMOS compatible thin-film ALD tungsten nanoelectromechanical devices

    NASA Astrophysics Data System (ADS)

    Davidson, Bradley Darren

    This research focuses on the development of a novel, low-temperature, CMOS compatible, atomic-layer-deposition (ALD) enabled NEMS fabrication process for the development of ALD Tungsten (WALD) NEMS devices. The devices are intended for use in CMOS/NEMS hybrid systems, and NEMS based micro-processors/controllers capable of reliable operation in harsh environments not accessible to standard CMOS technologies. The majority of NEMS switches/devices to date have been based on carbon-nano-tube (CNT) designs. The devices consume little power during actuation, and as expected, have demonstrated actuation voltages much smaller than MEMS switches. Unfortunately, NEMS CNT switches are not typically CMOS integrable due to the high temperatures required for their growth, and their fabrication typically results in extremely low and unpredictable yields. Thin-film NEMS devices offer great advantages over reported CNT devices for several reasons, including: higher fabrication yields, low-temperature (CMOS compatible) deposition techniques like ALD, and increased control over design parameters/device performance metrics, i.e., device geometry. Furthermore, top-down, thin-film, nano-fabrication techniques are better capable of producing complicated device geometries than CNT based processes, enabling the design and development of multi-terminal switches well-suited for low-power hybrid NEMS/CMOS systems as well as electromechanical transistors and logic devices for use in temperature/radiation hard computing architectures. In this work several novel, low-temperature, CMOS compatible fabrication technologies, employing WALD as a structural layer for MEMS or NEMS devices, were developed. The technologies developed are top-down nano-scale fabrication processes based on traditional micro-machining techniques commonly used in the fabrication of MEMS devices. Using these processes a variety of novel WALD NEMS devices have been successfully fabricated and characterized. 
Using two different WALD fabrication technologies, two generations of 2-terminal WALD NEMS switches have been developed. These devices have functional gap heights of 30-50 nm, and actuation voltages typically ranging from 3-5 Volts. Via the extension of a two-terminal WALD technology, novel 3-terminal WALD NEMS devices were developed. These devices have actuation voltages ranging from 1.5-3 Volts, reliabilities in excess of 2 million cycles, and have been designed to be the fundamental building blocks for WALD NEMS complementary inverters. Through the development of these devices several advancements in the modeling and design of thin-film NEMS devices were achieved. A new model was developed to better characterize pre-actuation currents commonly measured for NEMS switches with nano-scale gate-to-source gap heights. The developed model is an extension of the standard field-emission model and considers the electromechanical response and electric field effects specific to thin-film NEMS switches. Finally, a multi-physics FEM/FD based model was developed to simulate the dynamic behavior of 2- or 3-terminal electrostatically actuated devices whose electrostatic domains have an aspect ratio on the order of 10^-3. The model uses a faux-Lagrangian finite difference method to solve Laplace's equation in a quasi-statically deforming domain. This model allows for the numerical characterization and design of thin-film NEMS devices not feasible using typical non-specialized BEM/FEM based software. Using this model several novel and feasible designs for fixed-fixed 3-terminal WALD NEMS switches suitable for the construction of complementary inverters were discovered.

  7. Gender discrimination in undernutrition with mediating factors among Bengalee school children from Eastern India.

    PubMed

    Mondal, Prakash Ranjan; Biswas, Sadaruddin; Bose, Kaushik

    2012-04-01

    This study was undertaken to determine age and sex variations in the prevalence of underweight and stunting, and to assess the impact of some socio-economic variables on undernutrition among 6-16 year old school children of Bengalee ethnicity in Chapra, West Bengal, India. The subjects were selected randomly from various schools and madrassas of the Chapra Block. A total of 725 children (342 boys and 383 girls) aged 6-16 years were measured and data on their socio-economic status were collected. Age and sex combined rates of underweight and stunting were 44.40% and 37.20%, respectively. Weight-for-age Z-score (WAZ) showed a significant association with per-capita income (PCI) among boys (F=5.45) and girls (F=8.14). Height-for-age Z-score (HAZ) also showed a significant association with per-capita income among boys (F=4.43) and girls (F=9.69). The WAZ was significantly associated with fathers' educational status (FOS) (t=-2.95) and the number of living rooms (NLR) (t=-2.91) among girls. The HAZ showed a significant association with number of siblings (NS) among girls (F=4.25). Linear regression analyses revealed that NLR (t=2.04) and NS (t=1.95) had a significant impact on HAZ among boys. Among girls, PCI (t=3.38), FOS (t=2.87) and NLR (t=2.81) had a significant impact on WAZ, and PCI (t=3.28) and FOS (t=2.90) had a significant impact on HAZ. NLR had significant associations with underweight (χ²=3.59) and stunting (χ²=4.20) among boys. Among girls, PCI had significant associations with underweight (χ²=11.15) and stunting (χ²=11.64). FOS also showed significant associations with underweight (χ²=8.10) as well as stunting (χ²=8.28) among girls. NLR showed a significant association with underweight (χ²=7.75). Logistic regression analyses revealed that FOS (Wald=8.00) and NLR (Wald=4.09) were significant predictors of stunting among boys. Among girls, PCI was a significant predictor of underweight (Wald=10.95) as well as stunting (Wald=10.45). 
FOS, NLR and NS were also significant predictors of stunting (Wald=8.16), underweight (Wald=7.68) and stunting (Wald=6.97) respectively. The present study revealed that the nutritional status of the children was unsatisfactory and it is of paramount importance not only to increase the amount of food supplementation given but also to promote gender equality. Copyright © 2012 Elsevier GmbH. All rights reserved.

  8. Differential Item Functioning Assessment in Cognitive Diagnostic Modeling: Application of the Wald Test to Investigate DIF in the DINA Model

    ERIC Educational Resources Information Center

    Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna

    2014-01-01

    Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…

  9. Metabolic syndrome and prostate abnormalities in male subjects of infertile couples

    PubMed Central

    Lotti, Francesco; Corona, Giovanni; Vignozzi, Linda; Rossi, Matteo; Maseroli, Elisa; Cipriani, Sarah; Gacci, Mauro; Forti, Gianni; Maggi, Mario

    2014-01-01

    No previous study has evaluated systematically the relationship between metabolic syndrome (MetS) and prostate-related symptoms and signs in young infertile men. We studied 171 (36.5 ± 8.3 years old) males of infertile couples. MetS was defined based on the National Cholesterol Education Program Third Adult Treatment Panel. All men underwent hormonal (including total testosterone (TT) and insulin), seminal (including interleukin-8 (IL-8), seminal plasma IL-8 (sIL-8)), scrotal and transrectal ultrasound evaluations. Because we have previously assessed correlations between MetS and scrotal parameters in a larger cohort of infertile men, here we focused on transrectal features. Prostate-related symptoms were assessed using the National Institutes of Health Chronic Prostatitis Symptom Index (NIH-CPSI) and the International Prostate Symptom Score (IPSS). Twenty-two subjects fulfilled MetS criteria. In an age-adjusted logistic ordinal model, insulin levels increased as a function of MetS components (Wald = 29.5, P < 0.0001) and showed an inverse correlation with TT (adjusted r = -0.359, P < 0.0001). No association between MetS and NIH-CPSI or IPSS scores was observed. In an age-, TT-, insulin-adjusted logistic ordinal model, an increase in number of MetS components correlated negatively with normal sperm morphology (Wald = 5.59, P < 0.02) and positively with sIL-8 levels (Wald = 4.32, P < 0.05), which is a marker of prostate inflammation, with prostate total and transitional zone volume assessed using ultrasound (Wald = 17.6 and 12.5, both P < 0.0001), with arterial peak systolic velocity (Wald = 9.57, P = 0.002), with texture nonhomogeneity (hazard ratio (HR) = 1.87 (1.05–3.33), P < 0.05), with calcification size (Wald = 3.11, P < 0.05), but not with parameters of seminal vesicle size or function. 
    In conclusion, in males of infertile couples, MetS is positively associated with prostate enlargement and with biochemical (sIL-8) and ultrasound-derived signs of prostate inflammation, but not with prostate-related symptoms, which suggests that MetS is a trigger for a subclinical, early-onset form of benign prostatic hyperplasia. PMID:24435050

  10. Which are the male factors associated with female sexual dysfunction (FSD)?

    PubMed

    Maseroli, E; Fanni, E; Mannucci, E; Fambrini, M; Jannini, E A; Maggi, M; Vignozzi, L

    2016-09-01

    It has been generally assumed that a partner's erectile dysfunction, premature ejaculation, and delayed ejaculation play a significant role in determining female sexual dysfunction (FSD). This study aimed to evaluate the role of the male partner's sexual function, as perceived by women, in determining FSD. A consecutive series of 156 heterosexual women consulting our clinic for FSD was retrospectively studied. All patients underwent a structured interview and completed the Female Sexual Function Index (FSFI). FSFI total score decreased as a function of partner's age, conflicts within the couple, relationship without cohabitation and the habit of engaging in intercourse to please the partner; FSFI total score increased as a function of frequency of intercourse, attempts to conceive and fertility-focused intercourse. FSFI total score showed a negative, stepwise correlation with partner's perceived hypoactive sexual desire (HSD) (r = -0.327; p < 0.0001), whereas no significant correlation was found between FSFI and erectile dysfunction, premature ejaculation, or delayed ejaculation. In an age-adjusted model, partner's HSD was negatively related to FSFI total score (Wald = 9.196, p = 0.002), arousal (Wald = 7.893, p = 0.005), lubrication (Wald = 5.042, p = 0.025), orgasm (Wald = 9.293, p = 0.002), satisfaction (Wald = 12.764, p < 0.0001), and pain (Wald = 6.492, p = 0.011) domains. Partner's HSD was also significantly associated with somatized anxiety, low frequency of intercourse, low partner's care for the patient's sexual pleasure, and with a higher frequency of masturbation, even after adjusting for age. In patients not reporting any reduction in libido, FSFI total score was significantly lower when their partner's libido was low (p = 0.041); the correlation disappeared if the patient also experienced HSD.
    In conclusion, the partner's erectile dysfunction, premature ejaculation, and delayed ejaculation may not act as primary contributing factors to FSD, as determined by FSFI scores; conversely, women's sexuality seems to be mostly impaired by the perceived reduction in their partner's sexual interest. © 2016 American Society of Andrology and European Academy of Andrology.

  11. [Analyzing the Wechsler Intelligence Scale for Children-Revised (WISC-R) in Children With Attention Deficit and Hyperactivity Disorder: Predictive Value of Subtests, Kaufman, and Bannatyne Categories].

    PubMed

    Tural Hesapçıoğlu, Selma; Çelik, Cihat; Özmen, Sevim; Yiğit, İbrahim

    2016-01-01

    The aim of this study is to evaluate the predictive value for attention deficit hyperactivity disorder (ADHD) of intelligence quotient (IQ) scores, subtests of the Wechsler Intelligence Scale for Children-Revised (WISC-R), and scores on Kaufman's and Bannatyne's categories, which are sums of WISC-R subtests. Another aim is to examine differences in some neurocognitive skills, as measured by WISC-R subtests, between children with ADHD and their unaffected peers. WISC-R subtest and IQ scores, and scores on Kaufman's and Bannatyne's categories, of children diagnosed with ADHD only were compared with the same scores of children in a healthy control group (N = 111) and in an ADHD-with-comorbidity group (N = 82). The subtest scores (vocabulary, comprehension, digit span, picture completion, and block design) of the children with ADHD only and with ADHD plus comorbidity were significantly lower than those of the healthy group. The comprehension (Wald = 5.47, df = 1, p = 0.05), digit span (Wald = 16.79, df = 1, p = 0.001), and picture completion (Wald = 5.25, df = 1, p = 0.05) subtests significantly predicted ADHD. In addition, the categories of freedom from distractibility (Wald = 8.22, df = 1, p = 0.01) and spatial abilities (Wald = 12.22, df = 1, p < 0.0001) were predictive for ADHD in this study. Problem-solving abilities in social processes, auditory short-term memory, visual-spatial abilities, and visual configuration abilities of the children with ADHD were observed to be lower than those of their healthy peers. In the WISC-R profile analysis, the categories of freedom from distractibility and spatial abilities may be distinctive in the diagnosis of ADHD.

  12. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2010-01-01

    When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve the desired missed-detection rates, but the frequentist method's false-alarm performance is inferior to the Bayesian method's.
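
    The decision rule underlying these records is compact enough to sketch. Below is a minimal, illustrative Python implementation of Wald's SPRT for a Gaussian mean (the function name `sprt` and all parameter choices are our own, not taken from the paper): it accumulates the log-likelihood ratio and compares it against Wald's approximate thresholds derived from the desired false-alarm rate alpha and missed-detection rate beta.

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for the mean of a
    Gaussian with known sigma: H0: mu = mu0 vs H1: mu = mu1.
    Returns (decision, number_of_samples_used)."""
    # Wald's approximate decision thresholds on the log-likelihood ratio
    a = math.log(beta / (1 - alpha))   # crossing below -> accept H0
    b = math.log((1 - beta) / alpha)   # crossing above -> accept H1
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log f1(x) - log f0(x) for one Gaussian observation
        llr += (x - mu0) ** 2 / (2 * sigma ** 2) - (x - mu1) ** 2 / (2 * sigma ** 2)
        if llr >= b:
            return "accept H1", n
        if llr <= a:
            return "accept H0", n
    return "continue", len(samples)
```

    With alpha = beta = 0.01 the thresholds are roughly ±4.6, so well-separated hypotheses are typically resolved within a handful of observations, which is the sample-size saving that motivates sequential testing.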

  13. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition when this figure of merit is used instead of the prior probabilities alone. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.

  14. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    Technical Report No. / Title / Author / Date:
    264  Approximations to Densities in Geometric Probability  (H. Solomon, M.A. Stephens)  10/27/78
    265  Sequential ...
    ...  ...Certain Multivariate Normal Probabilities  (S. Iyengar)  8/12/82
    323  EDF Statistics for Testing for the Gamma Distribution with...  (M.A. Stephens)  8/13/82
    ...  ...Nets  (fragment, dated ...20-85)
    360  Random Sequential Coding By Hamming Distance  (Yoshiaki Itoh, Herbert Solomon)  07-11-85
    361  Transforming Censored Samples And Testing Fit

  15. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  16. Africa Command: U.S. Strategic Interests and the Role of the U.S. Military in Africa

    DTIC Science & Technology

    2011-03-22

    peacetime engagement, see General Charles Wald, “The Phase Zero Campaign,” Joint Force Quarterly, Issue 43, 4th Quarter 2006, available at http...USAID’s Office of U.S. Foreign 20 See, for example, Lisa Schirch and Aaron Kishbaugh...to the possibility of significant climate change.” Testimony of General Charles Wald, Member, Military Advisory Board, at a hearing on Climate

  17. Inhomogeneities detection in annual precipitation time series in Portugal using direct sequential simulation

    NASA Astrophysics Data System (ADS)

    Caineta, Júlio; Ribeiro, Sara; Costa, Ana Cristina; Henriques, Roberto; Soares, Amílcar

    2014-05-01

    Climate data homogenisation is of major importance in monitoring climate change, in the validation of weather forecasting, general circulation and regional atmospheric models, in the modelling of erosion, and in drought monitoring, among other studies of hydrological and environmental impacts. This is because non-climatic factors can cause time series discontinuities which may hide the true climatic signal and patterns, thus potentially biasing the conclusions of those studies. In the last two decades, many methods have been developed to identify and remove these inhomogeneities. One of them is based on geostatistical simulation (DSS - direct sequential simulation), in which local probability density functions (pdf) are calculated at candidate monitoring stations, using spatial and temporal neighbouring observations, and then used for the detection of inhomogeneities. This approach has been previously applied to detect inhomogeneities in four precipitation series (wet day count) from a network with 66 monitoring stations located in the southern region of Portugal (1980-2001). That study revealed promising results and the potential advantages of geostatistical techniques for inhomogeneity detection in climate time series. This work extends the earlier case study and investigates the application of the geostatistical stochastic approach to ten precipitation series that were previously classified as inhomogeneous by one of six absolute homogeneity tests (Mann-Kendall test, Wald-Wolfowitz runs test, Von Neumann ratio test, standard normal homogeneity test (SNHT) for a single break, Pettitt test, and Buishand range test). Moreover, a sensitivity analysis is implemented to investigate the number of simulated realisations that should be used to accurately infer the local pdfs. Accordingly, the number of simulations per iteration is increased from 50 to 500, which results in a more representative local pdf.
    A set of default and recommended settings is provided, which will help other users to implement this method. The need for user intervention is reduced to a minimum through the use of a cross-platform script. Finally, as in the previous study, the results are compared with those from the SNHT, Pettitt and Buishand range tests, which were applied to composite (ratio) reference series. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
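
    Of the absolute homogeneity tests listed above, the Von Neumann ratio test is the simplest to state. The Python sketch below is our own illustration, following the common formulation in which the ratio of the mean squared successive difference to the variance has an expected value close to 2 for a homogeneous series:

```python
def von_neumann_ratio(x):
    """Ratio of the sum of squared successive differences to the sum of
    squared deviations from the mean. For an uncorrelated, homogeneous
    series the expected value is close to 2; markedly smaller values
    suggest a trend or a break point."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den
```

    A pronounced trend or break suppresses the ratio well below 2, while strong alternation pushes it above 2; in practice, significance is judged against tabulated critical values for the series length.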

  18. Two-IMU FDI performance of the sequential probability ratio test during shuttle entry

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.

  19. Affect and eating behavior in obese adults with and without elevated depression symptoms.

    PubMed

    Goldschmidt, Andrea B; Crosby, Ross D; Engel, Scott G; Crow, Scott J; Cao, Li; Peterson, Carol B; Durkin, Nora

    2014-04-01

    Although there is a modest relation between obesity and depression, mechanisms that contribute to this co-occurrence are unclear. This study examined mood and eating behavior among obese adults with and without elevated depression symptoms. Obese adults (N = 50) were subtyped according to a Beck Depression Inventory (BDI) cutoff of 14, indicating "probable depression." Participants with (BDI ≥ 14; n = 15) and without (BDI < 14; n = 35) elevated depression symptoms were compared on affect- and eating-related variables measured via questionnaire and ecological momentary assessment (EMA) using ANCOVA and mixed model regression. After adjusting for group differences in body mass index (BMI; p = .03), participants with elevated depression symptoms reported greater emotional eating via self-report questionnaire [F(1,50) = 4.3; p = .04], as well as more frequent binge eating (Wald χ² = 13.8; p < .001) and higher daily negative affect (Wald χ² = 7.7; p = .005) on EMA recordings. Emotional eating mediated the relationship between depression status and BMI (indirect effect estimate = 3.79; 95% CI = 1.02-7.46). Emotional eating and binge eating were more commonly reported by obese adults with elevated depression symptoms compared to those without and may occur against a general backdrop of overall low mood. Intervention and prevention programs for obesity and/or depression should address disordered eating to prevent or minimize adverse health consequences. Copyright © 2013 Wiley Periodicals, Inc.

  20. A Stitch in Time Saves Nine: A Comprehensive Conflict Prevention Strategy

    DTIC Science & Technology

    2010-04-02

    prevention plan, has led to ad-hoc conflict prevention. 1 Charles F. Wald, "The Phase Zero Campaign," JFQ: Joint Force Quarterly, no. 43 (10...14. 24 Joint Chiefs of Staff, Joint Operations, 2008), IV 27-28. 25 Wald, The Phase Zero Campaign, 73. 26 Ibid., 73. 27 secure...Unconventional Threats and Capabilities. Testimony of Michael Lund and Dr. Lisa Schrich on the Roles of Non-Military Programs within a Comprehensive

  1. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
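
    The contrast between the two kinds of interval is easy to demonstrate on a toy model. The Python sketch below is our own illustration using a Bernoulli proportion rather than the IRT models of the article (function names and the grid resolution are our choices): it computes a large-sample Wald CI and a grid-based profile-likelihood CI by inverting the likelihood-ratio statistic.

```python
import math

def wald_ci(k, n, z=1.96):
    """Large-sample Wald interval for a Bernoulli proportion."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def profile_likelihood_ci(k, n, crit=3.841):
    """Likelihood-based interval (0 < k < n): keep every p on a grid
    where 2*(logL(p_hat) - logL(p)) <= chi-square(1) critical value."""
    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1 - p)
    p_hat = k / n
    grid = [i / 10000 for i in range(1, 10000)]
    inside = [p for p in grid if 2 * (loglik(p_hat) - loglik(p)) <= crit]
    return min(inside), max(inside)
```

    For k = 2 successes in n = 20 trials the Wald interval dips below zero, while the likelihood-based interval (roughly (0.02, 0.28)) respects the parameter space and is asymmetric about the estimate, which is the kind of distinction the article examines for IRT parameters.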

  2. Geometrothermodynamics for black holes and de Sitter space

    NASA Astrophysics Data System (ADS)

    Kurihara, Yoshimasa

    2018-02-01

    A general method to extract thermodynamic quantities from solutions of the Einstein equation is developed. In 1994, Wald established that the entropy of a black hole could be identified as a Noether charge associated with a Killing vector of a global space-time (pseudo-Riemann) manifold. We reconstruct Wald's method using geometrical language, e.g., via differential forms defined on the local space-time (Minkowski) manifold. Concurrently, the abstract thermodynamics are also reconstructed using geometrical terminology, which is parallel to general relativity. The correspondence between the thermodynamics and general relativity can be seen clearly by comparing the two expressions. This comparison requires a modification of Wald's method. The new method is applied to Schwarzschild, Kerr, and Kerr-Newman black holes and de Sitter space. The results are consistent with previous results obtained using various independent methods. This strongly supports the validity of the area theorem for black holes.

  3. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and an experiment is then chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.

  4. Classical subjective expected utility.

    PubMed

    Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi

    2013-04-23

    We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty.

  5. Applying Acquisition Lessons Learned to Operational Energy Initiatives

    DTIC Science & Technology

    2013-03-01

    current and future platforms to meet the demands of Energy-Informed Operations. Endnotes 1 Charles F. Wald and Tom Captain, Energy Security America’s...2013); Wald and Captain, Energy Security America’s Best Defense, 1. 3 The Army’s agile process involves seven phases and three decision points to...https://acc.dau.mil/adl/en-US/329976/file/47235/EVM_Report_to_Congress.pdf (accessed January 14, 2013). 22 Lisa Pracchia, “The AV-8B Team Learns Synergy

  6. Childhood symptoms of inattention-hyperactivity predict cannabis use in first episode psychosis.

    PubMed

    Cassidy, Clifford M; Joober, Ridha; King, Suzanne; Malla, Ashok K

    2011-11-01

    A history of childhood symptoms of inattention-hyperactivity is often reported in first episode psychosis (FEP) as is cannabis use. In the general population childhood ADHD predicts future cannabis use but the relationship has not been tested in FEP. Parents of patients with a first episode of psychosis (n=75) retrospectively assessed their affected child for symptoms of early-life disorders, namely, attention deficit hyperactivity disorder (ADHD), conduct disorder (CD) and oppositional defiant disorder (ODD) using the Child Behaviour Checklist (CBCL). Assessments were made prospectively of cannabis use over two years following a FEP and of SCID diagnosis of cannabis-use disorder. Childhood hyperactivity-inattention symptoms predicted inability to maintain abstinence from cannabis following treatment (Wald=8.4, p=.004) and lifetime cannabis-use diagnosis (Wald=5.3, p=.022) in a logistic regression controlling for relevant covariates including symptoms of CD and ODD from ages 12 to 18. When the symptom of inattention was considered in place of the hyperactivity-inattention syndrome it predicted cannabis-use diagnosis (Wald=6.4, p=.011) and persistent abstinence from cannabis (Wald=5.3, p=.021). Symptoms of CD and ODD did not predict cannabis use when hyperactivity-inattention symptoms were controlled for. Symptoms of childhood inattention-hyperactivity predict subsequent cannabis use in FEP. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Buffer management for sequential decoding. [block erasure probability reduction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.

  8. Prognostic value of cardiopulmonary exercise testing in heart failure with preserved ejection fraction. The Henry Ford HospITal CardioPulmonary EXercise Testing (FIT-CPX) project.

    PubMed

    Shafiq, Ali; Brawner, Clinton A; Aldred, Heather A; Lewis, Barry; Williams, Celeste T; Tita, Christina; Schairer, John R; Ehrman, Jonathan K; Velez, Mauricio; Selektor, Yelena; Lanfear, David E; Keteyian, Steven J

    2016-04-01

    Although cardiopulmonary exercise (CPX) testing in patients with heart failure and reduced ejection fraction is well established, there are limited data on the value of CPX variables in patients with HF and preserved ejection fraction (HFpEF). We sought to determine the prognostic value of select CPX measures in patients with HFpEF. This was a retrospective analysis of patients with HFpEF (ejection fraction ≥ 50%) who performed a CPX test between 1997 and 2010. Selected CPX variables included peak oxygen uptake (VO2), percent predicted maximum oxygen uptake (ppMVO2), minute ventilation to carbon dioxide production slope (VE/VCO2 slope) and exercise oscillatory ventilation (EOV). Separate Cox regression analyses were performed to assess the relationship between each CPX variable and a composite outcome of all-cause mortality or cardiac transplant. We identified 173 HFpEF patients (45% women, 58% non-white, age 54 ± 14 years) with complete CPX data. During a median follow-up of 5.2 years, there were 42 deaths and 5 cardiac transplants. The 1-, 3-, and 5-year cumulative event-free survival was 96%, 90%, and 82%, respectively. Based on the Wald statistic from the Cox regression analyses adjusted for age, sex, and β-blockade therapy, ppMVO2 was the strongest predictor of the end point (Wald χ² = 15.0, hazard ratio per 10%, P < .001), followed by peak VO2 (Wald χ² = 11.8, P = .001). VE/VCO2 slope (Wald χ² = 0.4, P = .54) and EOV (Wald χ² = 0.15, P = .70) had no significant association with the composite outcome. These data support the prognostic utility of peak VO2 and ppMVO2 in patients with HFpEF. Additional studies are needed to define optimal cut points to identify low- and high-risk patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The Relation Among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples,

    DTIC Science & Technology

    1982-04-01

    S. (1979), "Conflict Among Criteria for Testing Hypothesis: Extension and Comments," Econometrica, 47, 203-207; Breusch, T. S. and Pagan, A. R. (1980...Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278; Breusch, T...THE RELATION AMONG THE LIKELIHOOD RATIO-, WALD-, AND LAGRANGE MULTIPLIER TESTS AND THEIR APPLICABILITY TO SMALL SAMPLES

  10. Patronage Versus Professionalism in New Security Institutions

    DTIC Science & Technology

    2011-09-01

    27–IV-30. 49 Ibid., IV-27–IV-28. 50 Charles F. Wald, “New Thinking at USEUCOM: The Phase Zero Campaign,” Joint Force Quarterly 43 (4th Quarter, 2006...Stability Operations Require More U.S. Focus, Gates Says,” American Forces Press Service, April 14, 2009. 53 Wald, 73. 54 JP 3–0, VII-10. 55 Defense...friendly and a potential bulwark against the brutal Interior Ministry police. Yet the Egyptian military is known to practice patronage. Analyst Lisa

  11. Africa Command: U.S. Strategic Interests and the Role of the U.S. Military in Africa

    DTIC Science & Technology

    2009-07-28

    peacetime engagement, see General Charles Wald, “The Phase Zero Campaign,” Joint Force Quarterly, Issue 43, 4th Quarter 2006, available at http...with Principal Deputy Under Secretary Henry From the Pentagon,” February 7, 2007. 18 See, for example, Lisa Schirch and Aaron Kishbaugh, “Leveraging ‘3D’...to the possibility of significant climate change.” Testimony of General Charles Wald, Member, Military Advisory Board, at a hearing on Climate

  12. Africa Command: U.S. Strategic Interests and the Role of the U.S. Military in Africa

    DTIC Science & Technology

    2010-04-03

    Charles Wald, “The Phase Zero Campaign,” Joint Force Quarterly, Issue 43, 4th Quarter 2006, available at http://www.ndu.edu/inss. 12 DOD, The Quadrennial...Deputy Under Secretary Henry From the Pentagon,” February 7, 2007. 19 See, for example, Lisa Schirch and Aaron Kishbaugh, “Leveraging ‘3D’ Security...Testimony of General Charles Wald, Member, Military Advisory Board, at a hearing on Climate Change and National Security Threats by the Senate Foreign

  13. Africa Command: U.S. Strategic Interests and the Role of the U.S. Military in Africa

    DTIC Science & Technology

    2008-08-22

    Phase Zero strategy and TSC, also known as peacetime engagement, see General Charles Wald, “The Phase Zero Campaign,” Joint Force Quarterly, Issue 43, 4th...16 DOD, “News Briefing with Principal Deputy Under Secretary Henry From the Pentagon,” February 7, 2007. 17 See, for example, Lisa Schirch and Aaron...Charles Wald, Member, Military Advisory Board, at a hearing on Climate Change and National Security Threats by the Senate Foreign Relations Committee on May

  14. Africa Command: U.S. Strategic Interests and the Role of the U.S. Military in Africa

    DTIC Science & Technology

    2009-01-05

    conflict. For more information on the Phase Zero strategy and TSC, also known as peacetime engagement, see General Charles Wald, “The Phase Zero...Principal Deputy Under Secretary Henry From the Pentagon,” February 7, 2007. 16 See, for example, Lisa Schirch and Aaron Kishbaugh, “Leveraging ‘3D...significant climate change.” Testimony of General Charles Wald, Member, Military Advisory Board, at a hearing on Climate Change and National Security

  15. Smarr formula for Lovelock black holes: A Lagrangian approach

    NASA Astrophysics Data System (ADS)

    Liberati, Stefano; Pacilio, Costantino

    2016-04-01

    The mass formula for black holes can be formally expressed in terms of a Noether charge surface integral plus a suitable volume integral, for any gravitational theory. The integrals can be constructed as an application of Wald's formalism. We apply this formalism to compute the mass and the Smarr formula for static Lovelock black holes. Finally, we propose a new prescription for Wald's entropy in the case of Lovelock black holes, which takes into account topological contributions to the entropy functional.

  16. Pure perceptual-based learning of second-, third-, and fourth-order sequential probabilities.

    PubMed

    Remillard, Gilbert

    2011-07-01

    There is evidence that sequence learning in the traditional serial reaction time task (SRTT), where target location is the response dimension, and sequence learning in the perceptual SRTT, where target location is not the response dimension, are handled by different mechanisms. The ability of the latter mechanism to learn sequential contingencies that can be learned by the former mechanism was examined. Prior research has established that people can learn second-, third-, and fourth-order probabilities in the traditional SRTT. The present study reveals that people can learn such probabilities in the perceptual SRTT. This suggests that the two mechanisms may have similar architectures. A possible neural basis of the two mechanisms is discussed.

  17. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    NASA Astrophysics Data System (ADS)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
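
    A Bayesian cousin of the multialternative procedure described above is straightforward to sketch. The Python code below is our own generic illustration of a multi-hypothesis sequential test, not the conditionally optimum procedure of the paper (the function name `msprt` and the stopping threshold are our assumptions): posterior probabilities over the hypotheses are updated after each observation, and sampling stops as soon as one posterior clears a confidence threshold.

```python
import math

def msprt(samples, likelihoods, priors, threshold=0.99):
    """Generic multi-hypothesis sequential test: update the posterior
    over hypotheses after each observation and stop when one hypothesis
    exceeds the threshold. Returns (index_or_None, n_used, posterior)."""
    post = list(priors)
    for n, x in enumerate(samples, start=1):
        # Bayes update: multiply by each hypothesis's likelihood, renormalize
        post = [p * lik(x) for p, lik in zip(post, likelihoods)]
        total = sum(post)
        post = [p / total for p in post]
        best = max(range(len(post)), key=lambda i: post[i])
        if post[best] >= threshold:
            return best, n, post
    return None, len(samples), post
```

    The average sample number grows only logarithmically in the required confidence, which mirrors the reduction in observation length relative to fixed-sample procedures noted in the abstract.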

  18. Logarithmic black hole entropy corrections and holographic Rényi entropy

    NASA Astrophysics Data System (ADS)

    Mahapatra, Subhash

    2018-01-01

    The entanglement and Rényi entropies for spherical entangling surfaces in CFTs with gravity duals can be explicitly calculated by mapping these entropies first to the thermal entropy on hyperbolic space and then, using the AdS/CFT correspondence, to the Wald entropy of topological black holes. Here we extend this idea by taking into account corrections to the Wald entropy. Using the method based on horizon symmetries and the asymptotic Cardy formula, we calculate corrections to the Wald entropy and find that these corrections are proportional to the logarithm of the area of the horizon. With the corrected expression for the entropy of the black hole, we then find corrections to the Rényi entropies. We calculate these corrections for both Einstein and Gauss-Bonnet gravity duals. Corrections with logarithmic dependence on the area of the entangling surface naturally occur at order G_D^0. The entropic c-function and the inequalities of the Rényi entropy are also satisfied even with the correction terms.

  19. An adaptive technique for a redundant-sensor navigation system.

    NASA Technical Reports Server (NTRS)

    Chien, T.-T.

    1972-01-01

    An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potentiality in reliability and performance. This adaptive system is structured as a multistage stochastic process of detection, identification, and compensation. It is shown that the detection system can be effectively constructed on the basis of a design value, specified by mission requirements, of the unknown parameter in the actual system, and of a degradation mode in the form of a constant bias jump. A suboptimal detection system on the basis of Wald's sequential analysis is developed using the concept of information value and information feedback. The developed system is easily implemented, and demonstrates a performance remarkably close to that of the optimal nonlinear detection system. An invariant transformation is derived to eliminate the effect of nuisance parameters such that the ambiguous identification system can be reduced to a set of disjoint simple hypotheses tests. By application of a technique of decoupled bias estimation in the compensation system the adaptive system can be operated without any complicated reorganization.

  20. The Role of Orthotactic Probability in Incidental and Intentional Vocabulary Acquisition L1 and L2

    ERIC Educational Resources Information Center

    Bordag, Denisa; Kirschenbaum, Amit; Rogahn, Maria; Tschirner, Erwin

    2017-01-01

    Four experiments were conducted to examine the role of orthotactic probability, i.e. the sequential letter probability, in the early stages of vocabulary acquisition by adult native speakers and advanced learners of German. The results show different effects for orthographic probability in incidental and intentional vocabulary acquisition: Whereas…

  1. Lower Hippocampal Volume Predicts Decrements in Lane Control among Drivers with Amnestic MCI

    PubMed Central

    Griffith, H Randall; Okonkwo, Ozioma C; Stewart, Christopher C; Stoeckel, Luke E; den Hollander, Jan A; Elgin, Jennifer M; Harrell, Lindy E; Brockington, John C; Clark, David G; Ball, Karlene K; Owsley, Cynthia; Marson, Daniel C; Wadley, Virginia G

    2014-01-01

    Objectives There are few methods to discern driving risks in patients with early dementia and Mild Cognitive Impairment (MCI). We aimed to determine whether structural MRI of the hippocampus – a biomarker of probable Alzheimer pathology and a measure of disease severity in those affected – is linked to objective ratings of on-road driving performance in older adults with and without amnestic MCI. Methods 49 consensus-diagnosed participants from an Alzheimer's Disease Research Center (15 diagnosed with amnestic MCI and 34 demographically similar controls) underwent structural MRI and on-road driving assessments. Results Mild atrophy of the left hippocampus was associated with less-than-optimal ratings in lane control but not with other discrete driving skills. Decrements in left hippocampal volume conferred higher risk for less-than-optimal lane control ratings in the MCI patients (B = −1.63, SE = .74, Wald = 4.85, P = .028), but not in controls (B = 0.13, SE = .415, Wald = 0.10, P = .752). The odds ratio (OR) and 95% confidence interval (CI) for below optimal lane control in the MCI group was 4.41 (1.18, 16.36), which was attenuated to 3.46 (0.88, 13.60) after accounting for the contribution of left hippocampal volume. Conclusion These findings suggest that there may be a link between hippocampal atrophy and difficulties with lane control in persons with amnestic MCI. Further study appears warranted to better discern patterns of brain atrophy in MCI and AD and whether these could be early markers of clinically meaningful driving risk. PMID:24212246

  2. Proceedings of the Conference on the Design of Experiments in Army Research, Development and Testing (29th)

    DTIC Science & Technology

    1984-06-01

    A session on sequential testing (Bldg. A, Room C) included the paper "A Truncated Sequential Probability Ratio Test." Index terms: suicide, optical data, operational testing, reliability, random numbers, bootstrap methods, missing data, sequential testing, fire support, complex computer models, carcinogenesis studies. The scope of the contributed papers can be ascertained from their titles.

  3. Effect of number of rooms and sibs on nutritional status among rural Bengalee preschool children from eastern India.

    PubMed

    Biswas, Sadaruddin; Bose, Kaushik

    2011-12-01

    In developing countries, including rural India, undernutrition among preschool children is one of the main barriers to national development. However, there is scant information on the prevalence of underweight and stunting and their socio-demographic predictors among preschool children in India and West Bengal. The aim of the present study was to investigate the prevalence of underweight and stunting and the impact of two socio-demographic indicators, namely number of living rooms (NLR) and number of sibs (NS), among 1-5-year-old Bengalee rural preschool children of Integrated Child Development Services (ICDS) centres. This cross-sectional study was undertaken at 30 randomly selected ICDS centres of Chapra Block, Nadia District, West Bengal, India. A total of 673 children, aged 1-5 years, were studied. The overall (age and sex combined) rates of underweight and stunting were 54.40% and 39.20%, respectively. NLR was significantly associated with the prevalence of underweight (chi2 = 4.34, df = 1, p < 0.05) and stunting (chi2 = 8.98, df = 1, p < 0.01) among girls. Similarly, NS had a significant association with the prevalence of underweight (chi2 = 10.29, df = 1, p < 0.001) and stunting (chi2 = 5.42, df = 1, p < 0.05) among girls. Girls with < 2 NLR had a significantly higher risk of being underweight (OR = 1.64, CI = 1.30-2.62) or stunted (OR = 2.23, CI = 1.31-3.80) than those with >= 2 NLR. Moreover, girls with >= 3 NS had significantly higher rates of underweight (OR = 2.03, CI = 1.32-3.146) or stunting (OR = 1.69, CI = 1.09-2.63) than those with < 3 sibs. Logistic regression analyses also revealed that both NLR and NS were strong predictors of underweight (NLR: Wald = 4.30, p < 0.05; NS: Wald = 8.74, p < 0.001) and stunting (NLR: Wald = 10.17, p < 0.001; NS: Wald = 5.38, p < 0.05) among girls. Gender discrimination could be a likely cause of this sex difference in the impact of NLR and NS. Moreover, logistic regression analyses were also undertaken with underweight and stunting status (yes/no) as dependent variables and NLR and NS combined as independent variables to identify their joint effects on undernutrition. Results showed that among girls, NS, rather than NLR, had a significant impact on underweight (Wald = 8.28, p < 0.001), while NLR, rather than NS, had a significant impact on stunting (Wald = 6.874, p < 0.01).

  4. Power, Status and Network Perceptions: The Effects of Network Bias on Organizational Outcomes

    DTIC Science & Technology

    2012-09-01

    Consistent with our prediction, we found that power was a significant predictor (B = .03, SE = .01, Wald χ² = 5.81, p < .01) while controlling for the actual density of the advice network (B = 1.37, SE = .63, Wald χ² = 4.78, p < .05), likelihood ratio χ² (2, N = 124) = 206.13, p < .001.

  5. Risk-adjusted sequential probability ratio tests: applications to Bristol, Shipman and adult cardiac surgery.

    PubMed

    Spiegelhalter, David; Grigg, Olivia; Kinsman, Robin; Treasure, Tom

    2003-02-01

    To investigate the use of the risk-adjusted sequential probability ratio test in monitoring the cumulative occurrence of adverse clinical outcomes. Retrospective analysis of three longitudinal datasets: patients aged 65 years and over under the care of Harold Shipman between 1979 and 1997, patients under 1 year of age undergoing paediatric heart surgery at Bristol Royal Infirmary between 1984 and 1995, and adult patients receiving cardiac surgery from a team of cardiac surgeons in London, UK. Outcome measures were annual and 30-day mortality rates. Using reasonable boundaries, the procedure could have indicated an 'alarm' in Bristol after publication of the 1991 Cardiac Surgical Register, and in 1985 or 1997 for Harold Shipman, depending on the data source and the comparator. The cardiac surgeons showed no significant deviation from expected performance. The risk-adjusted sequential probability ratio test is simple to implement, can be applied in a variety of contexts, and might have been useful for detecting specific instances of past divergent performance. The use of this and related techniques deserves further attention in the context of prospectively monitoring adverse clinical outcomes.
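    A minimal sketch of such a risk-adjusted SPRT for 0/1 outcomes (the function name, the odds-ratio alternative, and the CUSUM-style clamp at the lower boundary are illustrative choices here, not the paper's exact formulation) might look like:

```python
import math

def risk_adjusted_sprt(outcomes, risks, odds_ratio=2.0, alpha=0.01, beta=0.01):
    """Risk-adjusted SPRT sketch for monitoring adverse outcomes.

    outcomes: 0/1 adverse-event indicators per case, in order of treatment
    risks: case-mix-adjusted predicted event probabilities per case
    H0: observed odds match predictions; H1: odds inflated by `odds_ratio`.
    Returns (final log-likelihood ratio, index of first alarm or None).
    """
    upper = math.log((1 - beta) / alpha)   # evidence of divergent performance
    lower = math.log(beta / (1 - alpha))
    llr, alarm = 0.0, None
    for i, (y, p0) in enumerate(zip(outcomes, risks)):
        # event probability under H1: inflate the odds p0/(1-p0) by the target ratio
        p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)
        llr += y * math.log(p1 / p0) + (1 - y) * math.log((1 - p1) / (1 - p0))
        if alarm is None and llr >= upper:
            alarm = i
        # clamp at the lower boundary so good performance cannot mask a
        # later deterioration (a design choice borrowed from CUSUM charts)
        llr = max(llr, lower)
    return llr, alarm
```

    The boundaries come from Wald's approximations A = (1-beta)/alpha and B = beta/(1-alpha) for the chosen error rates.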

  6. Inverse sequential procedures for the monitoring of time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy

    1993-01-01

    Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1,2,...j new observations. Individual-value probability products ('likelihoods') are then calculated which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
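    The core idea, comparing likelihoods under the existing parameters against parameters re-estimated from the augmented set, can be caricatured for a Gaussian mean with known variance (a toy sketch with illustrative names, not the paper's detailed algebra):

```python
import math
import statistics

def no_change_evidence(reference, new_obs, sigma=1.0):
    """Toy 'inverse sequential' check for a change in a Gaussian mean.

    Compares the log-likelihood of the new observations under the mean
    estimated from the reference set alone versus the mean of the
    augmented set. A progressively more negative value signals that the
    old parameter is becoming untenable for the augmented data.
    """
    mu_old = statistics.fmean(reference)
    mu_new = statistics.fmean(reference + new_obs)

    def loglik(data, mu):
        # Gaussian log-likelihood up to an additive constant
        return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)

    return loglik(new_obs, mu_old) - loglik(new_obs, mu_new)
```

    In the full procedure this comparison is repeated as each of the 1, 2, ..., j new observations arrives, and the parameters are re-estimated once a change is signaled.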

  7. On the Possibility to Combine the Order Effect with Sequential Reproducibility for Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Basieva, Irina; Khrennikov, Andrei

    2015-10-01

    In this paper we study whether quantum observables can be used to combine the order effect with sequential reproducibility for quantum measurements. By the order effect we mean a dependence of the probability distributions of measurement results on the order of measurements. We consider two types of sequential reproducibility: adjacent reproducibility (A-A) (the standard perfect repeatability) and separated reproducibility (A-B-A). The first is reproducibility with probability 1 of the result of an observable A measured twice, one A measurement immediately after the other. The second, A-B-A, is reproducibility with probability 1 of the result of an A measurement when another quantum observable B is measured between the two A's. Heuristically, it is clear that the second type of reproducibility is complementary to the order effect. We show that, surprisingly, this need not be the case: the order effect can coexist with separated reproducibility as well as adjacent reproducibility for both observables A and B. However, the additional constraint of separated reproducibility of the B-A-B type makes this coexistence impossible. The problem under consideration was motivated by attempts to apply the quantum formalism outside of physics, especially in cognitive psychology and psychophysics. However, it is also important for the foundations of quantum physics as part of the problem of the structure of sequential quantum measurements.

  8. Associations between advanced cancer patients' survival and family caregiver presence and burden.

    PubMed

    Dionne-Odom, J Nicholas; Hull, Jay G; Martin, Michelle Y; Lyons, Kathleen Doyle; Prescott, Anna T; Tosteson, Tor; Li, Zhongze; Akyar, Imatullah; Raju, Dheeraj; Bakitas, Marie A

    2016-05-01

    We conducted a randomized controlled trial (RCT) of an early palliative care intervention (ENABLE: Educate, Nurture, Advise, Before Life Ends) for persons with advanced cancer and their family caregivers. Not all patient participants had a caregiver coparticipant; hence, we explored whether there were relationships between patient survival, having an enrolled caregiver, and caregiver outcomes prior to death. One hundred and twenty-three patient-caregiver dyads and 84 patients without a caregiver coparticipant participated in the ENABLE early versus delayed (12 weeks later) RCT. We collected caregiver quality-of-life (QOL), depression, and burden (objective, stress, and demand) measures every 6 weeks for 24 weeks and every 3 months thereafter until the patient's death or study completion. We conducted survival analyses using log-rank and Cox proportional hazards models. Patients with a caregiver coparticipant had significantly shorter survival (Wald = 4.31, HR = 1.52, CI: 1.02-2.25, P = 0.04). After including caregiver status, marital status (married/unmarried), their interaction, and relevant covariates, caregiver status (Wald = 6.25, HR = 2.62, CI: 1.23-5.59, P = 0.01), being married (Wald = 8.79, HR = 2.92, CI: 1.44-5.91, P = 0.003), and their interaction (Wald = 5.18, HR = 0.35, CI: 0.14-0.87, P = 0.02) were significant predictors of lower patient survival. Lower survival in patients with a caregiver was significantly related to higher caregiver demand burden (Wald = 4.87, CI: 1.01-1.20, P = 0.03) but not caregiver QOL, depression, and objective and stress burden. Advanced cancer patients with caregivers enrolled in a clinical trial had lower survival than patients without caregivers; however, this mortality risk was mostly attributable to higher survival by unmarried patients without caregivers. Higher caregiver demand burden was also associated with decreased patient survival. © 2016 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  9. Dynamic Encoding of Speech Sequence Probability in Human Temporal Cortex

    PubMed Central

    Leonard, Matthew K.; Bouchard, Kristofer E.; Tang, Claire

    2015-01-01

    Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning. PMID:25948269

  10. Genome-Wide Analysis of Gene-Gene and Gene-Environment Interactions Using Closed-Form Wald Tests.

    PubMed

    Yu, Zhaoxia; Demetriou, Michael; Gillen, Daniel L

    2015-09-01

    Despite the successful discovery of hundreds of variants for complex human traits using genome-wide association studies, the degree to which genes and environmental risk factors jointly affect disease risk is largely unknown. One obstacle toward this goal is that the computational effort required for testing gene-gene and gene-environment interactions is enormous. As a result, numerous computationally efficient tests were recently proposed. However, the validity of these methods often relies on unrealistic assumptions such as additive main effects, main effects at only one variable, no linkage disequilibrium between the two single-nucleotide polymorphisms (SNPs) in a pair or gene-environment independence. Here, we derive closed-form and consistent estimates for interaction parameters and propose to use Wald tests for testing interactions. The Wald tests are asymptotically equivalent to the likelihood ratio tests (LRTs), largely considered to be the gold standard tests but generally too computationally demanding for genome-wide interaction analysis. Simulation studies show that the proposed Wald tests have very similar performances with the LRTs but are much more computationally efficient. Applying the proposed tests to a genome-wide study of multiple sclerosis, we identify interactions within the major histocompatibility complex region. In this application, we find that (1) focusing on pairs where both SNPs are marginally significant leads to more significant interactions when compared to focusing on pairs where at least one SNP is marginally significant; and (2) parsimonious parameterization of interaction effects might decrease, rather than increase, statistical power. © 2015 WILEY PERIODICALS, INC.
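    For a single interaction parameter, the Wald statistic needs only the point estimate and its standard error (a generic sketch of the standard one-degree-of-freedom Wald test, not the authors' closed-form interaction estimators):

```python
import math

def wald_test(beta_hat, se):
    """One-degree-of-freedom Wald test, e.g. for a GxG or GxE interaction term.

    W = (beta_hat / se)^2 is compared to a chi-square with 1 df; it is
    asymptotically equivalent to the likelihood ratio test but needs only
    the point estimate and its standard error, hence the computational savings.
    """
    w = (beta_hat / se) ** 2
    # chi-square(1 df) upper-tail probability via the standard normal tail
    p_value = math.erfc(math.sqrt(w) / math.sqrt(2.0))
    return w, p_value
```

    This is why Wald tests scale to genome-wide interaction scans: no refitting of a null model is needed per pair.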

  11. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  12. Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.

    PubMed

    O'Connor, B P

    1999-11-01

    This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
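    The first two outputs, transitional frequencies and transitional probabilities, can be sketched in a few lines (a minimal illustration of the computation, not the SAS/SPSS programs themselves):

```python
from collections import Counter

def lag1_transitions(codes):
    """Lag-1 transitional frequencies and probabilities from a code stream.

    Returns (freq, probs) where freq[(given, target)] counts adjacent pairs
    and probs[(given, target)] is the conditional probability of `target`
    immediately following `given`.
    """
    freq = Counter(zip(codes, codes[1:]))   # adjacent (given, target) pairs
    given_totals = Counter(codes[:-1])      # how often each code is a "given"
    probs = {(g, t): n / given_totals[g] for (g, t), n in freq.items()}
    return dict(freq), probs
```

    Expected transitional frequencies, adjusted residuals, and the other statistics in the paper build on these two tables.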

  13. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
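    For reference, the unrestricted (joint-measurement) benchmark for two pure states is the Helstrom bound; a short sketch (the function name is illustrative) computes the optimal success probability from the prior p and the state overlap:

```python
import math

def helstrom_two_pure_states(overlap, p=0.5):
    """Helstrom bound: optimal probability of correctly identifying one of
    two pure states with prior p and inner-product magnitude `overlap`,
    using an unrestricted joint measurement."""
    return 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap ** 2))
```

    Sequential strategies with classical feed-forward are then judged by how closely they approach this benchmark.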

  14. A model for sequential decoding overflow due to a noisy carrier reference. [communication performance prediction

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1974-01-01

    An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the ten to the minus fourth power deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.

  15. Behavioral Health and Service Use Among Civilian Wives of Service Members and Veterans: Evidence from the National Survey of Drug Use and Health

    DTIC Science & Technology

    2015-01-01

    Excerpted results (adjusted odds ratios, military wives versus civilian comparison group): past-year major depression, comparison group 5.93% ...; specialty treatment, 6.8% versus 7.4%, OR = 1.15 (95% CI 0.81-1.62), Wald chi-square = 0.61, p = 0.4368; prescription drugs, 12.6% versus 18.1%†, OR = 1.61 (95% CI 1.23-2.09), Wald chi-square = 12.29 ... The excerpt also cites Pfrommer, Lisa Miyashiro, Yashodhara Rana, and David M. Adamson, Access to Behavioral Health Care for Geographically Remote Service Members and Dependents.

  16. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...

  17. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
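    In the radiation-counting setting the SPRT reduces to a running log-likelihood ratio over Poisson counts; a minimal sketch (with illustrative names and a fixed, rather than dynamically estimated, background rate) is:

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Wald SPRT for Poisson counts: H0 rate lam0 vs H1 rate lam1 (lam1 > lam0).

    Returns ("H0" | "H1" | "continue", final log-likelihood ratio).
    Thresholds use Wald's approximations A = (1-beta)/alpha, B = beta/(1-alpha).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 (elevated radiation) above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (background) below this
    llr = 0.0
    for n in counts:
        # log-likelihood ratio increment for one Poisson observation n
        llr += n * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "H1", llr
        if llr <= lower:
            return "H0", llr
    return "continue", llr
```

    The patented system layers background estimation and model adjustment on top of this core test.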

  18. Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior

    PubMed Central

    Borrero, Carrie S.W; Borrero, John C

    2008-01-01

    We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
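    The comparative probability analysis, the conditional probability of the precursor given problem behavior versus its unconditional probability, can be sketched for two aligned 0/1 series at lag 1 (an illustrative simplification of the paper's analyses; names are assumptions):

```python
def precursor_probabilities(precursor, problem):
    """Compare P(precursor at t-1 | problem at t) with the unconditional
    P(precursor), from two aligned 0/1 series sampled once per interval.

    Returns (conditional, unconditional); conditional is NaN if problem
    behavior never occurs.
    """
    n = len(problem)
    uncond = sum(precursor) / n
    # precursor indicator in the interval immediately preceding each interval t
    lagged = [(precursor[t - 1], problem[t]) for t in range(1, n)]
    before_problem = [p for p, b in lagged if b]
    cond = sum(before_problem) / len(before_problem) if before_problem else float("nan")
    return cond, uncond
```

    A conditional value well above the unconditional one is the signature of a precursor, which the paper then probes with separate functional analyses.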

  19. Predicted sequence of cortical tau and amyloid-β deposition in Alzheimer disease spectrum.

    PubMed

    Cho, Hanna; Lee, Hye Sun; Choi, Jae Yong; Lee, Jae Hoon; Ryu, Young Hoon; Lee, Myung Sik; Lyoo, Chul Hyoung

    2018-04-17

    We investigated the sequential order of tau and amyloid-β (Aβ) deposition in the Alzheimer disease spectrum using a conditional probability method. Two hundred twenty participants underwent 18F-flortaucipir and 18F-florbetaben positron emission tomography scans and neuropsychological tests. The presence of tau and Aβ in each region and impairment in each cognitive domain were determined by Z-score cutoffs. By comparing pairs of conditional probabilities, the sequential order of tau and Aβ deposition was determined. The probability of the presence of tau in the entorhinal cortex was higher than that of Aβ in all cortical regions, and in the medial temporal cortices the probability of the presence of tau was higher than that of Aβ. Conversely, in the remaining neocortex above the inferior temporal cortex, the probability of the presence of Aβ was always higher than that of tau. Tau pathology in the entorhinal cortex may appear earlier than neocortical Aβ and may spread in the absence of Aβ within the neighboring medial temporal regions. However, Aβ may be required for massive tau deposition in the distant cortical areas. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. The association between booster seat use and risk of death among motor vehicle occupants aged 4-8: a matched cohort study.

    PubMed

    Rice, T M; Anderson, C L; Lee, A S

    2009-12-01

    To estimate the effectiveness of booster seats and of seatbelts in reducing the risk of child death during traffic collisions and to examine possible effect modification by various collision and vehicle characteristics. A matched cohort study was conducted using data from the Fatality Analysis Reporting System. Death risk ratios were estimated with conditional Poisson regression, bootstrapped coefficient standard errors, and multiply imputed missing values using chained equations. Estimated death risk ratios for booster seats used with seatbelts were 0.33 (95% CI 0.28 to 0.40) for children aged 4-5 years and 0.45 (0.31 to 0.63) for children aged 6-8 years (Wald test of homogeneity p < 0.005). The estimated risk ratios for seatbelts used alone were similar for the two age groups: 0.37 (0.32 to 0.43) and 0.39 (0.34 to 0.44) for ages 4-5 and 6-8, respectively (Wald p = 0.61). Estimated booster seat effectiveness was significantly greater for inbound seating positions (Wald p = 0.05) and during rollover collisions (Wald p = 0.01). Significant variability in risk ratio estimates was not observed across levels of calendar year, vehicle model year, vehicle type, or land use. Seatbelts, used with or without booster seats, are highly effective in preventing death among motor vehicle occupants aged 4-8 years. Booster seats do not appear to improve the performance of seatbelts with respect to preventing death (risk ratio 0.92, 95% CI 0.79 to 1.08, comparing seatbelts with boosters to seatbelts alone), but because several studies have found that booster seats reduce non-fatal injury severity, clinicians and injury prevention specialists should continue to recommend the use of boosters to parents of young children.
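    A Wald test of homogeneity between two risk ratios can be sketched from their point estimates and 95% confidence intervals (an illustrative reconstruction; the paper's tests used the fitted model directly, and estimates from one model may be correlated, so this sketch need not reproduce the published p-values):

```python
import math

def wald_homogeneity(rr1, ci1, rr2, ci2):
    """Two-sided Wald test that two risk ratios are equal.

    Standard errors on the log scale are recovered from the 95% CI width:
    se = (log(upper) - log(lower)) / (2 * 1.96), treating the two
    estimates as independent.
    """
    def log_se(ci):
        lo, hi = ci
        return (math.log(hi) - math.log(lo)) / (2 * 1.959964)

    diff = math.log(rr1) - math.log(rr2)
    se = math.hypot(log_se(ci1), log_se(ci2))
    z = diff / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal tail
    return z, p
```

    The same statistic underlies the homogeneity comparisons across age groups, seating positions, and collision types reported above.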

  1. Influence of renal artery variants, number, location, and degree of renal artery stenoses on the atherosclerotic burden of the aorta.

    PubMed

    Petersen, Johannes; Plaikner, Michaela; Nasseri, Parinaz; Rehder, Peter; Koppelstätter, Christian; Pauli, Guido F; Glodny, Bernhard

    2012-10-01

    To determine the assumed influence of the number of renal arteries, the distribution and extent of renal artery stenosis (RAS), and the kidney length on calcified aortic atherosclerotic plaque burden. The computed tomographic angiographies of 1381 patients were analyzed retrospectively using a volumetric aortic calcium scoring method. The Spearman method was used to calculate the correlation between kidney length, number and diameter of renal arteries, as well as number, degree, and location of RASs on main or additional renal arteries with the extent of aortic atherosclerosis. Logistic regression analyses were conducted with the target variable "calcification present or absent." Patients with multiple renal arteries (38.3%) had lower plaque volumes than patients without such variants (0.55 ± 0.97 vs 0.64 ± 1.06 mL; P < 0.05). Renal artery stenoses affected all renal vessels with equal frequency. The aortic calcium score correlated with the number of RASs (P < 0.0001) and the maximum degree of RAS up to a threshold of 60%. Location of an RAS in the various renal arteries was irrelevant. In regression analyses, the presence of RAS (Wald = 5.523), the degree of RAS (Wald = 6.251), and age (Wald = 223.1) were positive predictors of the aortic calcium score, whereas kidney length (Wald = 9.564) proved to be a negative predictor. The aortic calcium score correlates with both the number of RASs and the maximum degree of RAS up to a threshold of 60% but correlates inversely with the number of renal arteries. Renal artery stenosis affects all renal vessels with equal frequency, and this finding should be considered in screening procedures.

  2. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products depends on designs that incorporate high levels of reliability while meeting predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  3. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  4. An association between the internalization of body image, depressive symptoms and restrictive eating habits among young males.

    PubMed

    Fortes, Leonardo de Sousa; Meireles, Juliana Fernandes Filgueiras; Paes, Santiago Tavares; Dias, Fernanda Coelho; Cipriani, Flávia Marcele; Ferreira, Maria Elisa Caputo

    2015-11-01

    The scope of this study was to analyze the relationship between the internalization of body image and depressive symptoms with restrictive eating habits among young males. Three hundred and eighty-three male adolescents, aged between twelve and seventeen, took part in this survey. The "Overall Internalization" and "Athletic Internalization" sub-scales taken from the Sociocultural Attitudes Towards Appearance Questionnaire-3 (SATAQ-3) were used to evaluate the internalization of body images. The Major Depression Inventory (MDI) was used to evaluate depressive symptoms. The "Diet" sub-scale from the Eating Attitudes Test (EAT-26) was used to evaluate restrictive eating habits. The logistic regression findings indicated 2.01 times greater chances of youngsters with a high level of overall internalization adopting restrictive eating habits (Wald = 6.16; p = 0.01) when compared with those with low levels. On the other hand, the regression model found no significant association between "Athletic Internalization" (Wald = 1.16; p = 0.23) and depressive symptoms (Wald = 0.81; p = 0.35) with eating restrictions. The findings made it possible to conclude that only overall internalization was related to eating restrictions among young males.

  5. Latent-level relations between DSM-5 PTSD symptom clusters and problematic smartphone use.

    PubMed

    Contractor, Ateka A; Frankfurt, Sheila B; Weiss, Nicole H; Elhai, Jon D

    2017-07-01

    Common mental health consequences following the experience of potentially traumatic events include Posttraumatic Stress Disorder (PTSD) and addictive behaviors. Problematic smartphone use is a newer manifestation of addictive behaviors. People with anxiety severity (such as PTSD) may be at risk for problematic smartphone use as a means of coping with their symptoms. Unique to our knowledge, we assessed relations between PTSD symptom clusters and problematic smartphone use. Participants ( N = 347), recruited through Amazon's Mechanical Turk (MTurk), completed measures of PTSD and smartphone addiction. Results of the Wald tests of parameter constraints indicated that problematic smartphone use was more related to PTSD's negative alterations in cognitions and mood (NACM) than to PTSD's avoidance factor, Wald χ 2 (1, N = 347) = 12.51, p = 0.0004; and more to PTSD's arousal compared to PTSD's avoidance factor, Wald χ 2 (1, N = 347) = 14.89, p = 0.0001. Results indicate that problematic smartphone use is most associated with negative affect and arousal among trauma-exposed individuals. Implications include the need to clinically assess problematic smartphone use among trauma-exposed individuals presenting with higher NACM and arousal severity; and targeting NACM and arousal symptoms to mitigate the effects of problematic smartphone use.

  6. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
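    The surveillance scheme described above can be sketched in a few lines. This is a minimal illustration only: it assumes a Gaussian fit to the training residuals (the patent describes fitting a general probability density function, not necessarily Gaussian) and a hypothetical fault model for the alternative hypothesis.

    ```python
    import math

    def fit_gaussian(residuals):
        """Fit a normal density to residuals observed during normal asset operation."""
        n = len(residuals)
        mu = sum(residuals) / n
        var = sum((r - mu) ** 2 for r in residuals) / n
        return mu, var

    def log_likelihood(x, mu, var):
        """Log of the fitted normal density at x."""
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

    def surveillance_llr(sample, normal_params, fault_params):
        """Accumulate the log-likelihood ratio of a (hypothetical) fault model
        against the fitted normal-operation model over a window of residuals;
        a large positive value favors the fault hypothesis."""
        mu0, v0 = normal_params
        mu1, v1 = fault_params
        return sum(log_likelihood(x, mu1, v1) - log_likelihood(x, mu0, v0)
                   for x in sample)
    ```

    A dynamic test would compare this accumulated ratio against decision thresholds, re-fitting the density as training data accrue.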

  7. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  8. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  9. Dosimetric comparison of standard three-dimensional conformal radiotherapy followed by intensity-modulated radiotherapy boost schedule (sequential IMRT plan) with simultaneous integrated boost-IMRT (SIB IMRT) treatment plan in patients with localized carcinoma prostate.

    PubMed

    Bansal, A; Kapoor, R; Singh, S K; Kumar, N; Oinam, A S; Sharma, S C

    2012-07-01

    DOSIMETERIC AND RADIOBIOLOGICAL COMPARISON OF TWO RADIATION SCHEDULES IN LOCALIZED CARCINOMA PROSTATE: Standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) with Simultaneous Integrated Boost IMRT (SIB-IMRT). Thirty patients were enrolled. In all, the target consisted of PTV P + SV (Prostate and seminal vesicles) and PTV LN (lymph nodes) where PTV refers to planning target volume and the critical structures included: bladder, rectum and small bowel. All patients were treated with sequential-IMRT plan, but for dosimetric comparison, SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction, however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose-volume histograms. Also, Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with sequential-IMRT and SIB-IMRT plans respectively. For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures, therefore should be considered as the strategy for dose escalation. SIB-IMRT achieves lesser NTCP than sequential-IMRT.

  10. Human Inferences about Sequences: A Minimal Transition Probability Model

    PubMed Central

    2016-01-01

    The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
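    The core idea of inferring a time-varying transition matrix with a single forgetting parameter can be sketched as follows. This is a simplified stand-in, not the authors' exact Bayesian model: it uses leaky transition counts (the leak playing the role of the single free parameter) and reports surprise as the negative log probability of each observation.

    ```python
    import math

    def transition_surprise(seq, leak=0.9):
        """Estimate transition probabilities between two stimuli (coded 0/1)
        with leaky counts, mimicking a belief that the transition matrix can
        change over time. Returns the surprise (-log2 of the predicted
        probability) for every observation after the first."""
        counts = {(a, b): 1.0 for a in (0, 1) for b in (0, 1)}  # uniform prior
        surprises = []
        for prev, nxt in zip(seq, seq[1:]):
            total = counts[(prev, 0)] + counts[(prev, 1)]
            surprises.append(-math.log2(counts[(prev, nxt)] / total))
            for key in counts:
                counts[key] *= leak  # forget old evidence
            counts[(prev, nxt)] += 1.0
        return surprises
    ```

    On a strictly alternating sequence, surprise at each alternation decays below 1 bit as the model learns the transition structure, reproducing qualitatively the repetition/alternation asymmetry discussed in the abstract.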

  11. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.

  12. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypotheses that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.

  13. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  14. Learning in Reverse: Eight-Month-Old Infants Track Backward Transitional Probabilities

    ERIC Educational Resources Information Center

    Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.

    2009-01-01

    Numerous recent studies suggest that human learners, including both infants and adults, readily track sequential statistics computed between adjacent elements. One such statistic, transitional probability, is typically calculated as the likelihood that one element predicts another. However, little is known about whether listeners are sensitive to…
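    Both statistics discussed above can be computed from raw counts: the forward transitional probability of a pair (x, y) is count(xy)/count(x), while the backward transitional probability tracked by the infants in this study is count(xy)/count(y). A minimal sketch (the syllable tokens are illustrative, not the study's stimuli):

    ```python
    from collections import Counter

    def transitional_probabilities(syllables):
        """Forward TP(x -> y) = P(y | x) = count(xy) / count(x);
        backward TP(x -> y) = P(x | y) = count(xy) / count(y)."""
        pair_counts = Counter(zip(syllables, syllables[1:]))
        first_counts = Counter(syllables[:-1])   # occurrences as pair-initial
        second_counts = Counter(syllables[1:])   # occurrences as pair-final
        forward = {p: c / first_counts[p[0]] for p, c in pair_counts.items()}
        backward = {p: c / second_counts[p[1]] for p, c in pair_counts.items()}
        return forward, backward
    ```

    For the toy stream pa-bi-pa-bi-pa-ku, "bi" follows "pa" on only 2 of 3 occasions (forward TP = 2/3), yet "bi" is always preceded by "pa" (backward TP = 1.0), illustrating how the two statistics can diverge.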

  15. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, by contrast, that is not important. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
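    The convex-versus-concave contrast can be made concrete with the familiar power family of spending functions, f(t) = alpha * t**rho (an illustrative choice, not necessarily the family used in the paper): rho > 1 is convex and spends most of the error late, rho < 1 is concave and spends it early.

    ```python
    def power_spending(alpha, rho):
        """Power-family Type I error spending function f(t) = alpha * t**rho,
        where t in (0, 1] is the information fraction. rho > 1 gives a convex
        shape (error spent late); rho < 1 gives a concave shape (error spent
        early, favoring early signal detection in safety surveillance)."""
        return lambda t: alpha * t ** rho

    def increments(spend, looks):
        """Type I error spent at each of `looks` equally spaced analyses."""
        ts = [(i + 1) / looks for i in range(looks)]
        cum = [spend(t) for t in ts]
        return [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
    ```

    With alpha = 0.05 and four looks, a concave shape (rho = 0.5) spends 0.025 at the first look, whereas a convex shape (rho = 3) spends under 0.001 there; both spend exactly 0.05 in total.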

  16. A meta-analysis of response-time tests of the sequential two-systems model of moral judgment.

    PubMed

    Baron, Jonathan; Gürçay, Burcu

    2017-05-01

    The (generalized) sequential two-system ("default interventionist") model of utilitarian moral judgment predicts that utilitarian responses often arise from a system-two correction of system-one deontological intuitions. Response-time (RT) results that seem to support this model are usually explained by the fact that low-probability responses have longer RTs. Following earlier results, we predicted response probability from each subject's tendency to make utilitarian responses (A, "Ability") and each dilemma's tendency to elicit deontological responses (D, "Difficulty"), estimated from a Rasch model. At the point where A = D, the two responses are equally likely, so probability effects cannot account for any RT differences between them. The sequential two-system model still predicts that many of the utilitarian responses made at this point will result from system-two corrections of system-one intuitions, hence should take longer. However, when A = D, RT for the two responses was the same, contradicting the sequential model. Here we report a meta-analysis of 26 data sets, which replicated the earlier results of no RT difference overall at the point where A = D. The data sets used three different kinds of moral judgment items, and the RT equality at the point where A = D held for all three. In addition, we found that RT increased with A-D. This result holds for subjects (characterized by Ability) but not for items (characterized by Difficulty). We explain the main features of this unanticipated effect, and of the main results, with a drift-diffusion model.

  17. Protein classification using sequential pattern mining.

    PubMed

    Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I

    2006-01-01

    Protein classification in terms of fold recognition can be employed to determine the structural and functional properties of a newly discovered protein. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. One of the most efficient SPM algorithms, cSPADE, is employed for protein primary structure analysis. Then a classifier uses the extracted sequential patterns for classifying proteins of unknown structure in the appropriate fold category. The proposed methodology exhibited an overall accuracy of 36% in a multi-class problem of 17 candidate categories. The classification performance reaches up to 65% when the three most probable protein folds are considered.

  18. An Alternative Approach to the Total Probability Formula. Classroom Notes

    ERIC Educational Resources Information Center

    Wu, Dane W. Wu; Bangerter, Laura M.

    2004-01-01

    Given a set of urns, each filled with a mix of black chips and white chips, what is the probability of drawing a black chip from the last urn after some sequential random shifts of chips among the urns? The Total Probability Formula (TPF) is the common tool to solve such a problem. However, when the number of urns is more than two and the number…

  19. Statistical Segmentation of Tone Sequences Activates the Left Inferior Frontal Cortex: A Near-Infrared Spectroscopy Study

    ERIC Educational Resources Information Center

    Abla, Dilshat; Okanoya, Kazuo

    2008-01-01

    Word segmentation, that is, discovering the boundaries between words that are embedded in a continuous speech stream, is an important faculty for language learners; humans solve this task partly by calculating transitional probabilities between sounds. Behavioral and ERP studies suggest that detection of sequential probabilities (statistical…

  20. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    PubMed Central

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, the sequential probability ratio test (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
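    The accumulative decision process at the heart of the SPRT (which also underlies the collision-avoidance and surveillance records above) can be sketched generically. This is Wald's classical rule only; the log-likelihood-ratio increments would come from the study's power projective features, which are not reproduced here.

    ```python
    import math

    def sprt(llr_increments, alpha=0.05, beta=0.1):
        """Wald's sequential probability ratio test: accumulate
        log-likelihood-ratio evidence and stop at the classical thresholds
        A = log((1 - beta) / alpha) and B = log(beta / (1 - alpha)),
        where alpha is the false alarm rate and beta the missed detection
        rate. Returns ('H1' | 'H0' | 'continue', samples_used)."""
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        total = 0.0
        n = 0
        for n, inc in enumerate(llr_increments, start=1):
            total += inc
            if total >= upper:
                return "H1", n   # accept the alternative
            if total <= lower:
                return "H0", n   # accept the null
        return "continue", n     # evidence still inconclusive
    ```

    The explicit dependence of the thresholds on alpha and beta is exactly the stopping-time/error trade-off the abstract highlights.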

  1. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
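    The stochastic approach under the constant-peak-width ("gradient") idealization can be sketched as a short Monte Carlo: place the components uniformly at random on a normalized retention axis and count the fraction of trials in which every pair of adjacent peaks is separated by at least one peak width, 1/(peak capacity). This is a simplified illustration, not the authors' exact procedure (their saturation-factor and isocratic cases are omitted).

    ```python
    import random

    def p_success(m, peak_capacity, trials=20000, seed=1):
        """Monte Carlo estimate of the probability that m randomly placed
        components are fully separated, assuming constant peak width:
        peaks at uniform random positions on [0, 1] are resolved when
        adjacent peaks differ by at least 1 / peak_capacity."""
        rng = random.Random(seed)
        width = 1.0 / peak_capacity
        hits = 0
        for _ in range(trials):
            peaks = sorted(rng.random() for _ in range(m))
            if all(b - a >= width for a, b in zip(peaks, peaks[1:])):
                hits += 1
        return hits / trials
    ```

    Even modest component counts crush the success probability: two components on a peak capacity of 100 separate almost always, while ten components on a peak capacity of 20 almost never do, which is the qualitative behavior the cited statistical-overlap theory predicts.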

  2. Physician Satisfaction in Treating Medically Unexplained Symptoms.

    PubMed

    Brauer, Simon G; Yoon, John D; Curlin, Farr A

    2017-05-01

    To determine whether treating conditions having medically unexplained symptoms is associated with lower physician satisfaction and higher ascribed patient responsibility, and to determine whether higher ascribed patient responsibility is associated with lower physician satisfaction in treating a given condition. We surveyed a nationally representative sample of 1504 US primary care physicians. Respondents were asked how responsible patients are for two conditions with more-developed medical explanations (depression and anxiety) and two conditions with less-developed medical explanations (chronic back pain and fibromyalgia), and how much satisfaction they experienced in treating each condition. We used Wald tests to compare mean satisfaction and ascribed patient responsibility between medically explained conditions and medically unexplained conditions. We conducted single-level and multilevel ordinal logistic models to test the relation between ascribed patient responsibility and physician satisfaction. Treating medically unexplained conditions elicited less satisfaction than treating medically explained conditions (Wald P < 0.001). Physicians attribute significantly more patient responsibility to the former (Wald P < 0.005), although the magnitude of the difference is small. Across all four conditions, physicians reported experiencing less satisfaction when treating symptoms that result from choices for which patients are responsible (multilevel odds ratio 0.57, P = 0.000). Physicians experience less satisfaction in treating conditions characterized by medically unexplained conditions and in treating conditions for which they believe the patient is responsible.

  3. Latent-level relations between DSM-5 PTSD symptom clusters and problematic smartphone use

    PubMed Central

    Contractor, Ateka A.; Frankfurt, Sheila B.; Weiss, Nicole H.; Elhai, Jon D.

    2017-01-01

    Common mental health consequences following the experience of potentially traumatic events include Posttraumatic Stress Disorder (PTSD) and addictive behaviors. Problematic smartphone use is a newer manifestation of addictive behaviors. People with anxiety severity (such as PTSD) may be at risk for problematic smartphone use as a means of coping with their symptoms. Unique to our knowledge, we assessed relations between PTSD symptom clusters and problematic smartphone use. Participants (N = 347), recruited through Amazon’s Mechanical Turk (MTurk), completed measures of PTSD and smartphone addiction. Results of the Wald tests of parameter constraints indicated that problematic smartphone use was more related to PTSD’s negative alterations in cognitions and mood (NACM) than to PTSD’s avoidance factor, Wald χ2(1, N = 347) = 12.51, p = 0.0004; and more to PTSD’s arousal compared to PTSD’s avoidance factor, Wald χ2(1, N = 347) = 14.89, p = 0.0001. Results indicate that problematic smartphone use is most associated with negative affect and arousal among trauma-exposed individuals. Implications include the need to clinically assess problematic smartphone use among trauma-exposed individuals presenting with higher NACM and arousal severity; and targeting NACM and arousal symptoms to mitigate the effects of problematic smartphone use. PMID:28993716

  4. Design and analysis of three-arm trials with negative binomially distributed endpoints.

    PubMed

    Mütze, Tobias; Munk, Axel; Friede, Tim

    2016-02-20

    A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Psychological, Relational, and Biological Correlates of Ego-Dystonic Masturbation in a Clinical Setting.

    PubMed

    Castellini, Giovanni; Fanni, Egidia; Corona, Giovanni; Maseroli, Elisa; Ricca, Valdo; Maggi, Mario

    2016-09-01

    Attitudes toward masturbation are extremely varied, and this practice is often perceived with a sense of guilt. To evaluate the prevalence of ego-dystonic masturbation (EM), defined as masturbation activity followed by a sense of guilt, in a clinical setting of sexual medicine and the impact of EM on psychological and relational well-being. A series of 4,211 men attending an andrology and sexual medicine outpatient clinic was studied retrospectively. The presence and severity of EM were defined according to ANDROTEST items related to masturbation, determined by the mathematical product of the frequency of masturbation and the sense of guilt after masturbation. Clinical, biochemical, and psychological parameters were studied using the Structured Interview on Erectile Dysfunction, ANDROTEST, and modified Middlesex Hospital Questionnaire. Three hundred fifty-two subjects (8.4%) reported any sense of guilt after masturbation. Subjects with EM were younger than the remaining sample (mean age ± SD = 51.27 ± 13.43 vs 48.31 ± 12.04 years, P < .0001) and had more psychiatric comorbidities. EM severity was positively associated with higher free-floating (Wald = 35.94, P < .001) and depressive (Wald = 16.85, P < .001) symptoms, and subjects with a higher EM score reported less phobic anxiety (Wald = 4.02, P < .05) and obsessive-compulsive symptoms (Wald = 7.6, P < .01). A higher EM score was associated with a higher alcohol intake. Subjects with EM more often reported the partner's lower frequency of climax and more problems achieving an erection during sexual intercourse. EM severity was positively associated with worse relational and intrapsychic domain scores. Clinicians should consider that some subjects seeking treatment in a sexual medicine setting might report compulsive sexual behaviors. 
EM represents a clinically relevant cause of disability, given the high level of psychological distress reported by subjects with this condition, and the severe impact on quality of life in interpersonal relationships. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Two aspects of black hole entropy in Lanczos-Lovelock models of gravity

    NASA Astrophysics Data System (ADS)

    Kolekar, Sanved; Kothawala, Dawood; Padmanabhan, T.

    2012-03-01

    We consider two specific approaches to evaluate the black hole entropy which are known to produce correct results in the case of Einstein’s theory and generalize them to Lanczos-Lovelock models. In the first approach (which could be called extrinsic), we use a procedure motivated by earlier work by Pretorius, Vollick, and Israel, and by Oppenheim, and evaluate the entropy of a configuration of densely packed gravitating shells on the verge of forming a black hole in Lanczos-Lovelock theories of gravity. We find that this matter entropy is not equal to (it is less than) Wald entropy, except in the case of Einstein theory, where they are equal. The matter entropy is proportional to the Wald entropy if we consider a specific mth-order Lanczos-Lovelock model, with the proportionality constant depending on the spacetime dimensions D and the order m of the Lanczos-Lovelock theory as (D-2m)/(D-2). Since the proportionality constant depends on m, the proportionality between matter entropy and Wald entropy breaks down when we consider a sum of Lanczos-Lovelock actions involving different m. In the second approach (which could be called intrinsic), we generalize a procedure, previously introduced by Padmanabhan in the context of general relativity, to study off-shell entropy of a class of metrics with horizon using a path integral method. We consider the Euclidean action of Lanczos-Lovelock models for a class of metrics off shell and interpret it as a partition function. We show that in the case of spherically symmetric metrics, one can interpret the Euclidean action as the free energy and read off both the entropy and energy of a black hole spacetime. Surprisingly enough, this leads to exactly the Wald entropy and the energy of the spacetime in Lanczos-Lovelock models obtained by other methods. We comment on possible implications of the result.

  7. Simulation Model of Mobile Detection Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edmunds, T; Faissol, D; Yao, Y

    2009-01-27

    In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from target when detected. Patrol boats select the nearest in-bound boat for inspection and initiate an intercept course. Once within an operational range for the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, and configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1. 
This test is used when the mobile detector maintains a constant range to the vessel being inspected. Finally, a variation of the sequential probability ratio test that is more appropriate when source and detector are not at constant range is available [Nelson 2005]. Each patrol boat in the fleet can be assigned a particular zone of the bay, or all boats can be assigned to monitor the entire bay. Boats assigned to a zone will only intercept and inspect other boats when they enter their zone. In our example simulation, each of two patrol boats operates in a 5 km by 5 km zone. Other parameters for this example include: (1) Detection range - 15 m range maintained between patrol boat and inspected boat; (2) Inbound boat arrival rate - Poisson process with a mean arrival rate of 30 boats per hour; (3) Speed of boats to be inspected - random between 4.5 and 9 knots; (4) Patrol boat speed - 10 knots; (5) Number of detectors per patrol boat - four 2-inch x 4-inch x 16-inch NaI detectors; (6) Background radiation - 40 counts/sec per detector; and (7) Detector response due to a radiation source at 1 meter - 1,589 counts/sec per detector. Simulation results indicate that two patrol boats are able to detect the source 81% of the time without zones and 90% of the time with zones. The average distances between the source and target at the end of the simulation are 5,866 m and 5,712 m for non-zoned and zoned patrols, respectively. For runs in which the source did not reach the target, the average distances to the target are 7,305 m and 6,441 m, respectively. Note that a design trade-off exists: while zoned patrols provide a higher probability of detection, the non-zoned patrols tend to detect the source farther from its target. Figure 1 displays the location of the source at the end of 1,000 simulations for the 5 x 10 km bay simulation.
The simulation model and analysis described here can be used to determine the number of mobile detectors one would need to deploy in order to have a reasonable chance of detecting a source in transit. By fixing the source speed to zero, the same model could be used to estimate how long it would take to detect a stationary source. For example, the model could predict how long it would take plant staff carrying dosimeters while performing assigned duties to discover a contaminated spot in the facility.
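The SPRT decision rule described in this record can be sketched for a Poisson counting detector. This is an illustrative reconstruction, not the authors' code; the count sequence and rates in the test below are hypothetical, though the target error probabilities (0.001 and 0.1) are taken from the record.

```python
import math

def sprt_poisson(counts, b_rate, s_rate, dt, alpha=0.001, beta=0.1):
    """Wald SPRT for a Poisson counting detector (sketch).

    H0: background only (rate b_rate); H1: background plus source
    (rate b_rate + s_rate). alpha and beta are the target false-positive
    and false-negative probabilities. Each element of `counts` is the
    number of counts observed in one time window of length dt.
    Returns 'source', 'no source', or 'continue'.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    lam0 = b_rate * dt                     # expected counts under H0
    lam1 = (b_rate + s_rate) * dt          # expected counts under H1
    llr = 0.0
    for n in counts:
        # Poisson log-likelihood ratio contribution of one window
        llr += n * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "source"
        if llr <= lower:
            return "no source"
    return "continue"
```

Wald's thresholds log((1-β)/α) and log(β/(1-α)) yield approximately the target error rates while letting the test stop as soon as the evidence is decisive, which is what makes the SPRT attractive for a patrol boat holding a 15 m standoff.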

  8. Expert system for online surveillance of nuclear reactor coolant pumps

    DOEpatents

    Gross, Kenny C.; Singer, Ralph M.; Humenik, Keith E.

    1993-01-01

    An expert system for online surveillance of nuclear reactor coolant pumps. This system provides a means for early detection of pump or sensor degradation. Degradation is determined through the use of a statistical analysis technique, the sequential probability ratio test, applied to information from several sensors that are responsive to differing physical parameters. The results of sequential testing of the data provide the operator with an early warning of possible sensor or pump failure.

  9. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, the surrogate modelling technique has gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  10. Sequential experimental design based generalised ANOVA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    Over the last decade, the surrogate modelling technique has gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  11. Dosimetric comparison of standard three-dimensional conformal radiotherapy followed by intensity-modulated radiotherapy boost schedule (sequential IMRT plan) with simultaneous integrated boost–IMRT (SIB IMRT) treatment plan in patients with localized carcinoma prostate

    PubMed Central

    Bansal, A.; Kapoor, R.; Singh, S. K.; Kumar, N.; Oinam, A. S.; Sharma, S. C.

    2012-01-01

    Aims: Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma prostate: Standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) versus Simultaneous Integrated Boost IMRT (SIB-IMRT). Material and Methods: Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison, an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies but with different dose per fraction; however, the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose–volume histograms. Also, Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. Results: The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to both bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCP of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel was seen with the sequential-IMRT and SIB-IMRT plans, respectively. Conclusions: For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves lower NTCP than sequential-IMRT. PMID:23204659

  12. Sequential biases in accumulating evidence

    PubMed Central

    Huggins, Richard; Dogo, Samson Henry

    2015-01-01

    Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562

  13. Quantitative acoustic measurements for characterization of speech and voice disorders in early untreated Parkinson's disease.

    PubMed

    Rusz, J; Cmejla, R; Ruzickova, H; Ruzicka, E

    2011-01-01

    An assessment of vocal impairment is presented for separating healthy people from persons with early untreated Parkinson's disease (PD). This study's main purpose was to (a) determine whether voice and speech disorder are present from early stages of PD before starting dopaminergic pharmacotherapy, (b) ascertain the specific characteristics of the PD-related vocal impairment, (c) identify PD-related acoustic signatures for the major part of traditional clinically used measurement methods with respect to their automatic assessment, and (d) design new automatic measurement methods of articulation. The varied speech data were collected from 46 Czech native speakers, 23 with PD. Subsequently, 19 representative measurements were pre-selected, and Wald sequential analysis was then applied to assess the efficiency of each measure and the extent of vocal impairment of each subject. It was found that measurement of the fundamental frequency variations applied to two selected tasks was the best method for separating healthy from PD subjects. On the basis of objective acoustic measures, statistical decision-making theory, and validation from practicing speech therapists, it has been demonstrated that 78% of early untreated PD subjects indicate some form of vocal impairment. The speech defects thus uncovered differ individually in various characteristics including phonation, articulation, and prosody.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
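The "Wald method" for confidence intervals mentioned here is the standard normal-approximation interval for a binomial proportion, such as the probability of detection from a 2x2 contingency table. A minimal sketch, independent of the PyForecastTools API (the function name is ours):

```python
import math

def wald_interval(hits, total, z=1.96):
    """Wald confidence interval for a binomial proportion (sketch).

    For example, the probability of detection is hits / (hits + misses).
    The interval is p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n), clipped
    to [0, 1]. It is known to undercover for small n or extreme p, which
    is why Agresti-Coull intervals are offered as an alternative.
    """
    p = hits / total
    half = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - half), min(1.0, p + half)
```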

  15. The Effects of the Previous Outcome on Probabilistic Choice in Rats

    PubMed Central

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2014-01-01

    This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915

  16. An exact computational method for performance analysis of sequential test algorithms for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Lacy, Fred; Carriere, Patrick

    2015-05-01

    Sequential test algorithms play an increasingly important role in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.

  17. The impact of eyewitness identifications from simultaneous and sequential lineups.

    PubMed

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups, an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  18. Post-traumatic Stress Disorder and Risk of Parkinson Disease: A Nationwide Longitudinal Study.

    PubMed

    Chan, Yee-Lam E; Bai, Ya-Mei; Hsu, Ju-Wei; Huang, Kai-Lin; Su, Tung-Ping; Li, Cheng-Ta; Lin, Wei-Chen; Pan, Tai-Long; Chen, Tzeng-Ji; Tsai, Shih-Jen; Chen, Mu-Hong

    2017-08-01

    Increasing evidence has suggested a relationship between post-traumatic stress disorder (PTSD) and neurodegenerative disorders, such as Alzheimer disease. The association between PTSD and Parkinson disease (PD), however, remains unclear. Using the Taiwan National Health Insurance Research Database, 7,280 subjects (1,456 patients aged ≥45 years with PTSD and 5,824 age-/sex-matched individuals without PTSD) were enrolled between 2002 and 2009 and followed to the end of 2011. Subjects who developed PD during the follow-up period were identified. An increased risk of developing PD was found in patients with PTSD (Wald χ2 = 12.061, hazard ratio [HR]: 3.46, 95% confidence interval [CI]: 1.72-6.96) compared with individuals without PTSD, after adjusting for demographic data and medical and psychiatric comorbidities. The sensitivity tests after excluding the first year of observation (Wald χ2 = 7.948, HR: 3.01, 95% CI: 1.40-6.46) and the first 3 years of observation (Wald χ2 = 5.099, HR: 3.07, 95% CI: 1.16-8.15) were consistent. Patients with PTSD had an elevated risk of developing PD in later life. Further studies would be required to clarify the exact pathophysiology between PTSD and PD and to investigate whether prompt intervention for PTSD may reduce this risk. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  19. Radiation reaction force on a particle in Schwarzschild spacetime

    NASA Astrophysics Data System (ADS)

    Tripathi, Swapnil; Wiseman, Alan

    2007-04-01

    The mathematical modelling of the radiation reaction force experienced by a particle in curved spacetime is very important for calculations of the templates used in the detection of gravitational waves with LIGO, LISA, etc. In particular, extreme mass ratio inspirals are strong candidates for gravitational wave detection with LISA. We model these systems as a particle in Schwarzschild spacetime, and use the Quinn-Wald axioms to regularize the self-force. Mode-by-mode expansion techniques are used for calculating the self-force. Recent progress in this work is reported in this talk. [A. G. Wiseman, Phys. Rev. D 61 (2000), arXiv:gr-qc/084014; T. C. Quinn, Phys. Rev. D 62 (2000), arXiv:gr-qc/064029; T. C. Quinn, R. M. Wald, Phys. Rev. D 56 (1997) 3381]

  20. Some sequential, distribution-free pattern classification procedures with applications

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1971-01-01

    Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.

  1. A Bayesian sequential processor approach to spectroscopic portal system decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, K; Candy, J; Breitfeller, E

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper, the Bayesian model-based approach, physics and signal processing models, and decision functions are discussed, along with the first results of our research.
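The idea of updating a posterior with each datum and declaring a detection as soon as it is statistically justified can be illustrated with a two-hypothesis Bayesian sequential update. This is only a schematic sketch: the likelihood functions, prior, and threshold below are hypothetical placeholders, not the paper's physics or signal-processing models.

```python
def bayes_sequential(events, lik_source, lik_bkg, prior=0.5, threshold=0.99):
    """Two-hypothesis Bayesian sequential detector (sketch).

    Each datum updates the posterior probability that a source is
    present; a detection is declared as soon as the posterior exceeds
    `threshold`, rather than after a fixed counting interval.
    Returns (status, number of events consumed, final posterior).
    """
    p = prior
    for i, x in enumerate(events, start=1):
        ls, lb = lik_source(x), lik_bkg(x)
        p = p * ls / (p * ls + (1 - p) * lb)  # Bayes update on one datum
        if p >= threshold:
            return "detect", i, p
    return "undecided", len(events), p
```

For example, with a source likelihood of 0.9 per "hit" datum against a flat background likelihood of 0.5, the posterior climbs past 0.99 after a handful of consecutive hits, so the decision arrives as early as the data allow.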

  2. ANALYSES OF RESPONSE–STIMULUS SEQUENCES IN DESCRIPTIVE OBSERVATIONS

    PubMed Central

    Samaha, Andrew L; Vollmer, Timothy R; Borrero, Carrie; Sloman, Kimberly; Pipkin, Claire St. Peter; Bourret, Jason

    2009-01-01

    Descriptive observations were conducted to record problem behavior displayed by participants and to record antecedents and consequences delivered by caregivers. Next, functional analyses were conducted to identify reinforcers for problem behavior. Then, using data from the descriptive observations, lag-sequential analyses were conducted to examine changes in the probability of environmental events across time in relation to occurrences of problem behavior. The results of the lag-sequential analyses were interpreted in light of the results of functional analyses. Results suggested that events identified as reinforcers in a functional analysis followed behavior in idiosyncratic ways: after a range of delays and frequencies. Thus, it is possible that naturally occurring reinforcement contingencies are arranged in ways different from those typically evaluated in applied research. Further, these complex response–stimulus relations can be represented by lag-sequential analyses. However, limitations to the lag-sequential analysis are evident. PMID:19949537

  3. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.

  4. How to take statins

    MedlinePlus

    ... tablets are also available. They include a statin plus medicine to manage another condition, such as high ... 24514899 . Lee JW, Morris JK, Wald NJ. Grapefruit Juice and Statins. Am J Med . 2016;129(1): ...

  5. Implicit Learning of Predictive Relationships in Three-element Visual Sequences by Young and Old Adults

    PubMed Central

    Howard, James H.; Howard, Darlene V.; Dennis, Nancy A.; Kelly, Andrew J.

    2008-01-01

    Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing. PMID:18763897

  6. Postpartum pain in relation with Personal Meaning Organization.

    PubMed

    Nardi, B; Martini, M G; Arimatea, E; Vernice, M; Bellantuono, C; Frizzo, H; Nardi, M; Vincenzi, R

    2015-12-01

    The aim of this study was to investigate the relationship between postpartum pain and personality, considered as Personal Meaning Organization (PMO). Pain conditions not related to organic disorders frequently occur in the postpartum period and may lead to severe consequences for women and their caregiving functions. Emotions are usually experienced in the body, and their expression is strictly related to individual personality. Considering personality as a process, each symptom expresses a need to maintain the sense of oneness and historical continuity. One hundred and five women who presented postpartum pain not related to organic disease after delivery were enrolled from the Department of Obstetrics and Gynecology. Women filled out a general information questionnaire assessing age, employment, marital status, education level, parity, type of delivery, attendance at a prepartum course, and week of gestation. Their personality, as PMO, was evaluated using the Mini Questionnaire of Personal Organization (MQPO). Women with a Controller PMO perceived more pain than those with a Principle-Oriented PMO (95% CIs [-0.09, -1.98]; Wald Z=-2.28; P<0.02), slightly more than those with a Contextualized PMO (95% CIs [-0.09, -1.15]; Wald Z=-1.81, P<0.06), and more than those with a Detached PMO (95% CIs [-0.09, -2.10]; Wald Z=-1.84, P<0.06). The results suggest a role of PMO in influencing the perception of postpartum pain, and no relation with the other general information assessed; in particular, within the controller group, the experience of physical pain might be a way of representing a subjective discomfort.

  7. Rate of occurrence, gross appearance, and age relation of hyperostosis frontalis interna in females: a prospective autopsy study.

    PubMed

    Nikolić, Slobodan; Djonić, Danijela; Zivković, Vladimir; Babić, Dragan; Juković, Fehim; Djurić, Marija

    2010-09-01

    The aim of our study was to determine the rate of occurrence and appearance of hyperostosis frontalis interna (HFI) in females and the correlation of this phenomenon with ageing. The sample included 248 deceased females: 45 of them with different types of HFI, and 203 without HFI, average age 68.3 +/- 15.4 years (range, 19-93) and 58.2 +/- 20.2 years (range, 10-101), respectively. According to our results, the rate of HFI was 18.14%. The older the woman was, the higher the possibility of HFI occurring (Pearson correlation 0.211, N=248, P=0.001), but the type of HFI did not correlate with age (Pearson correlation 0.229, N=45, P=0.131). The frontal and temporal bones were significantly thicker in women with than in women without HFI (t= -10.490, DF=246, P=0.000, and t= -5.658, DF=246, P=0.000, respectively). These bones became thicker with ageing (Pearson correlation 0.178, N=248, P=0.005, and 0.303, N=248, P=0.000, respectively). The best predictors of HFI occurrence were, respectively, frontal bone thickness, temporal bone thickness, and age (Wald coeff. = 35.487, P=0.000; Wald coeff. = 3.288, P=0.070; and Wald coeff. = 2.727, P=0.099). Diagnosis of HFI depends not only on frontal bone thickness, but also on the waviness of the internal plate of the frontal bone, as well as the involvement of the inner bone surface.

  8. Localisation in a Growth Model with Interaction

    NASA Astrophysics Data System (ADS)

    Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.

    2018-05-01

    This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.

  9. Localisation in a Growth Model with Interaction

    NASA Astrophysics Data System (ADS)

    Costa, M.; Menshikov, M.; Shcherbakov, V.; Vachkovskaia, M.

    2018-06-01

    This paper concerns the long term behaviour of a growth model describing a random sequential allocation of particles on a finite cycle graph. The model can be regarded as a reinforced urn model with graph-based interaction. It is motivated by cooperative sequential adsorption, where adsorption rates at a site depend on the configuration of existing particles in the neighbourhood of that site. Our main result is that, with probability one, the growth process will eventually localise either at a single site, or at a pair of neighbouring sites.

  10. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali

    1997-01-01

    Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.

  11. A search for hep solar neutrinos at the Sudbury Neutrino Observatory

    NASA Astrophysics Data System (ADS)

    Winchester, Timothy J.

    Solar neutrinos from the fusion hep reaction (helium-3 fusing with a proton to become helium-4, releasing a positron and neutrino) have previously remained undetected due to their flux being about one one-thousandth that of boron-8 neutrinos. These neutrinos are interesting theoretically because they are less dependent on solar composition than other solar neutrinos, and therefore provide a somewhat independent test of the Standard Solar Model. In this analysis, we develop a new event fitter for existing data from the Sudbury Neutrino Observatory. We also use the fitter to remove backgrounds that previously limited the fiducial volume, which we increase by 30%. We use a modified Wald-Wolfowitz test to increase the amount of live time by 200 days (18%) and show that this data is consistent with the previously-used data. Finally, we develop a Bayesian analysis technique to make full use of the posterior distributions of energy returned by the event fitter. In the first significant detection of hep neutrinos, we find that the most-probable rate of hep events is 3.5 x 10^4 /cm^2/s, which is significantly higher than the theoretical prediction. We find that the 95% credible region extends from 1.0 to 7.2 x 10^4 /cm^2/s, and that we can therefore exclude a rate of 0 hep events at greater than 95% probability.
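The classical Wald-Wolfowitz runs test underlying the modified test mentioned above compares the observed number of runs in a binary sequence to its expectation under randomness. A minimal sketch of the unmodified test (this is the textbook statistic, not the analysis's modified version):

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test z-statistic for a binary sequence (sketch).

    Under the hypothesis of randomness, the number of runs R in a
    sequence of n1 ones and n2 zeros (n = n1 + n2) has mean
    1 + 2*n1*n2/n and variance 2*n1*n2*(2*n1*n2 - n) / (n^2 * (n - 1)).
    Large |z| indicates the sequence is unlikely to be random.
    """
    n1 = sum(1 for x in seq if x)
    n2 = len(seq) - n1
    n = n1 + n2
    # A run ends wherever adjacent elements differ
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if bool(a) != bool(b))
    mu = 1 + 2 * n1 * n2 / n
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - mu) / math.sqrt(var)
```

A strictly alternating sequence produces a large positive z (too many runs), while a fully blocked sequence produces a large negative z (too few runs); both reject randomness.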

  12. Diagnostic causal reasoning with verbal information.

    PubMed

    Meder, Björn; Mayrhofer, Ralf

    2017-08-01

    In diagnostic causal reasoning, the goal is to infer the probability of causes from one or multiple observed effects. Typically, studies investigating such tasks provide subjects with precise quantitative information regarding the strength of the relations between causes and effects or sample data from which the relevant quantities can be learned. By contrast, we sought to examine people's inferences when causal information is communicated through qualitative, rather vague verbal expressions (e.g., "X occasionally causes A"). We conducted three experiments using a sequential diagnostic inference task, where multiple pieces of evidence were obtained one after the other. Quantitative predictions of different probabilistic models were derived using the numerical equivalents of the verbal terms, taken from an unrelated study with different subjects. We present a novel Bayesian model that allows for incorporating the temporal weighting of information in sequential diagnostic reasoning, which can be used to model both primacy and recency effects. On the basis of 19,848 judgments from 292 subjects, we found a remarkably close correspondence between the diagnostic inferences made by subjects who received only verbal information and those of a matched control group to whom information was presented numerically. Whether information was conveyed through verbal terms or numerical estimates, diagnostic judgments closely resembled the posterior probabilities entailed by the causes' prior probabilities and the effects' likelihoods. We observed interindividual differences regarding the temporal weighting of evidence in sequential diagnostic reasoning. Our work provides pathways for investigating judgment and decision making with verbal information within a computational modeling framework. Copyright © 2017 Elsevier Inc. All rights reserved.
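The core computation in such a sequential diagnostic task is repeated Bayesian updating of a posterior over causes, one effect at a time. A minimal sketch, in which the numeric equivalents of the verbal terms (e.g. ~0.7 for "frequently", ~0.3 for "occasionally") and the temporal-weighting scheme are illustrative assumptions rather than the study's fitted model:

```python
def update_posterior(prior, likelihoods):
    """One Bayes step: P(cause | effect) is proportional to P(effect | cause) * P(cause)."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def sequential_diagnosis(prior, evidence, decay=1.0):
    """Update beliefs over causes as effects arrive one at a time.

    decay < 1 down-weights earlier evidence (a recency effect);
    decay == 1 reproduces standard Bayesian updating. The weighting
    scheme is a hypothetical stand-in for the paper's model.
    """
    posterior = list(prior)
    n = len(evidence)
    for i, likelihoods in enumerate(evidence):
        w = decay ** (n - 1 - i)            # older evidence gets a smaller weight
        weighted = [l ** w for l in likelihoods]
        posterior = update_posterior(posterior, weighted)
    return posterior

# Two candidate causes, two observed effects, each more likely under cause 1.
post = sequential_diagnosis(prior=[0.5, 0.5],
                            evidence=[[0.7, 0.3], [0.7, 0.3]])
print(post)
```

With decay = 1 this is plain Bayes; lowering decay shrinks the contribution of the first observation, which is one simple way to model the interindividual differences in temporal weighting described above.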

  13. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    PubMed

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  14. α '-corrected black holes in String Theory

    NASA Astrophysics Data System (ADS)

    Cano, Pablo A.; Meessen, Patrick; Ortín, Tomás; Ramírez, Pedro F.

    2018-05-01

    We consider the well-known solution of the Heterotic Superstring effective action to zeroth order in α ' that describes the intersection of a fundamental string with momentum and a solitonic 5-brane and which gives a 3-charge, static, extremal, supersymmetric black hole in 5 dimensions upon dimensional reduction on T5. We compute explicitly the first-order in α ' corrections to this solution, including SU(2) Yang-Mills fields which can be used to cancel some of these corrections and we study the main properties of this α '-corrected solution: supersymmetry, values of the near-horizon and asymptotic charges, behavior under α '-corrected T-duality, value of the entropy (using Wald formula directly in 10 dimensions), existence of small black holes etc. The value obtained for the entropy agrees, within the limits of approximation, with that obtained by microscopic methods. The α ' corrections coming from Wald's formula prove crucial for this result.

  15. Analysis of SET pulses propagation probabilities in sequential circuits

    NASA Astrophysics Data System (ADS)

    Cai, Shuo; Yu, Fei; Yang, Yiqun

    2018-05-01

    As the feature size of CMOS transistors scales down, the single event transient (SET) has become an important consideration in designing logic circuits. Much research has been done on analyzing the impact of SETs. However, it is difficult to account for the numerous contributing factors. We present a new approach for analyzing SET pulse propagation probabilities (SPPs). It considers all masking effects and uses SET pulse propagation probability matrices (SPPMs) to represent the SPPs in the current cycle. Based on matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.

  16. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  17. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  18. Does the Aristotle Score predict outcome in congenital heart surgery?

    PubMed

    Kang, Nicholas; Tsang, Victor T; Elliott, Martin J; de Leval, Marc R; Cole, Timothy J

    2006-06-01

    The Aristotle Score has been proposed as a measure of 'complexity' in congenital heart surgery, and a tool for comparing performance amongst different centres. To date, however, it remains unvalidated. We examined whether the Basic Aristotle Score was a useful predictor of mortality following open-heart surgery, and compared it to the Risk Adjustment in Congenital Heart Surgery (RACHS-1) system. We also examined the ability of the Aristotle Score to measure performance. The Basic Aristotle Score and RACHS-1 risk categories were assigned retrospectively to 1085 operations involving cardiopulmonary bypass in children less than 18 years of age. Multiple logistic regression analysis was used to determine the significance of the Aristotle Score and RACHS-1 category as independent predictors of in-hospital mortality. Operative performance was calculated using the Aristotle equation: performance = complexity x survival. Multiple logistic regression identified RACHS-1 category to be a powerful predictor of mortality (Wald 17.7, p < 0.0001), whereas Aristotle Score was only weakly associated with mortality (Wald 4.8, p = 0.03). Age at operation and bypass time were also highly significant predictors of postoperative death (Wald 13.7 and 33.8, respectively, p < 0.0001 for both). Operative performance was measured at 7.52 units. The Basic Aristotle Score was only weakly associated with postoperative mortality in this series. Operative performance appeared to be inflated by the fact that the overall complexity of cases was relatively high in this series. An alternative equation (performance = complexity/mortality) is proposed as a fairer and more logical method of risk-adjustment.

  19. The effect of rare variants on inflation of the test statistics in case-control analyses.

    PubMed

    Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P

    2015-02-20

    The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
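The under-inflation of the Wald statistic at sparse counts can be seen even in a single 2x2 carrier-by-status table. The sketch below is a toy illustration, not the paper's simulation study; the counts (10 heterozygote carriers among 2000 subjects) are assumed values chosen to mimic a rare variant:

```python
import math

def wald_chisq(a, b, c, d):
    """Wald chi-square for the log odds ratio of a 2x2 table
    (a,b = carriers/non-carriers among cases; c,d = among controls)."""
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d        # asymptotic variance of the log odds ratio
    return log_or**2 / var

def lrt_chisq(a, b, c, d):
    """Likelihood-ratio (G) statistic for the same table."""
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    obs = [a, b, c, d]
    exp = [row[0]*col[0]/n, row[0]*col[1]/n, row[1]*col[0]/n, row[1]*col[1]/n]
    return 2 * sum(o * math.log(o / e) for o, e in zip(obs, exp) if o > 0)

# A rare variant: 10 heterozygote carriers among 1000 cases and 1000 controls.
a, b = 8, 992    # cases: carriers, non-carriers
c, d = 2, 998    # controls: carriers, non-carriers
print(f"Wald: {wald_chisq(a, b, c, d):.2f}  LRT: {lrt_chisq(a, b, c, d):.2f}")
```

For these counts the Wald statistic is noticeably smaller than the likelihood-ratio statistic, consistent with the direction of the discrepancy reported above.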

  20. A Randomised Trial of empiric 14-day Triple, five-day Concomitant, and ten-day Sequential Therapies for Helicobacter pylori in Seven Latin American Sites

    PubMed Central

    Greenberg, E. Robert; Anderson, Garnet L.; Morgan, Douglas R.; Torres, Javier; Chey, William D.; Bravo, Luis Eduardo; Dominguez, Ricardo L.; Ferreccio, Catterina; Herrero, Rolando; Lazcano-Ponce, Eduardo C.; Meza-Montenegro, Mercedes María; Peña, Rodolfo; Peña, Edgar M.; Salazar-Martínez, Eduardo; Correa, Pelayo; Martínez, María Elena; Valdivieso, Manuel; Goodman, Gary E.; Crowley, John J.; Baker, Laurence H.

    2011-01-01

    Summary Background Evidence from Europe, Asia, and North America suggests that standard three-drug regimens of a proton pump inhibitor plus amoxicillin and clarithromycin are significantly less effective for eradicating Helicobacter pylori (H. pylori) infection than five-day concomitant and ten-day sequential four-drug regimens that include a nitroimidazole. These four-drug regimens also entail fewer antibiotic doses and thus may be suitable for eradication programs in low-resource settings. Studies from Latin America are limited, however, where the burden of H. pylori-associated diseases is high. Methods We randomised 1463 men and women ages 21–65 selected from general populations in Chile, Colombia, Costa Rica, Honduras, Nicaragua, and Mexico (two sites) who tested positive for H. pylori by a urea breath test (UBT) to: 14 days of lansoprazole, amoxicillin, and clarithromycin (standard therapy); five days of lansoprazole, amoxicillin, clarithromycin, and metronidazole (concomitant therapy); or five days of lansoprazole and amoxicillin followed by five days of lansoprazole, clarithromycin, and metronidazole (sequential therapy). Eradication was assessed by UBT six–eight weeks after randomisation. Findings In intention-to-treat analyses, the probability of eradication with standard therapy was 82·2%, which was 8·6% higher (95% adjusted CI: 2·6%, 14·5%) than with concomitant therapy (73·6%) and 5·6% higher (95% adjusted CI: −0·04%, 11·6%) than with sequential therapy (76·5%). In analyses limited to the 1314 participants who adhered to their assigned therapy, the probabilities of eradication were 87·1%, 78·7%, and 81·1% with standard, concomitant, and sequential therapies, respectively. Neither four-drug regimen was significantly better than standard triple therapy in any of the seven sites. Interpretation Standard 14-day triple-drug therapy is preferable to five-day concomitant or ten-day sequential four-drug regimens as empiric therapy for H. pylori among diverse Latin American populations. Funding Bill & Melinda Gates Foundation and US National Institutes of Health. PMID:21777974
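The headline comparison above is a difference of two eradication proportions. A minimal sketch of an unadjusted Wald-type confidence interval for that difference; the per-arm counts below are assumptions chosen to reproduce the reported 82.2% and 73.6% intention-to-treat rates (the paper's intervals were additionally adjusted for multiplicity, so these limits will not match exactly):

```python
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Unadjusted Wald confidence interval for p1 - p2 from two binomial arms."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Assumed arm sizes (~1463/3 per arm); responder counts chosen to match
# the reported rates of 82.2% (standard) vs 73.6% (concomitant).
diff, lo, hi = risk_difference_ci(401, 488, 359, 488)
print(f"difference {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```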

  1. Systemic fat embolism and the patent foramen ovale--a prospective autopsy study.

    PubMed

    Nikolić, Slobodan; Zivković, Vladimir; Babić, Dragan; Djonić, Danijela; Djurić, Marija

    2012-05-01

    A fat embolism is a known and common complication of blunt force injuries, especially pelvic and long bone fractures. The aim of this study was to determine the importance of a patent foramen ovale (PFO) in developing systemic fat embolism (SFE) and eventually fat embolism syndrome (FES) in patients suffering from orthopaedic blunt injuries and consequent lung fat embolism. The sample was divided into 32 subjects with a sealed foramen ovale (SFO) and 20 subjects with a PFO. In our sample, there was no difference in either the incidence of renal fat embolism in subjects with PFO compared to those with SFO (Fisher's exact test 0.228, p=0.154) or in the grade of renal fat embolism (Pearson Chi-square 2.728, p=0.435). However, there was a statistically significant correlation between the grade of lung fat embolism and the number of fractured bones for the whole sample (Spearman's rho 0.271, p=0.052), but no correlation between the grade of lung fat embolism and the ISS or NISS (Pearson correlation 0.048, p=0.736, and 0.108, p=0.445, respectively). In our study, the presence of fat emboli in the kidney, i.e. SFE, could effectively be predicted by the grade of lung fat embolism (the moderate and slight grades of lung fat embolism were better predictors than the massive one: logistic regression, Wald coeff.=11.446, p=0.003; Wald coeff.=10.553, p=0.001; and Wald coeff.=4.128, p=0.042), and less effectively by the presence of a PFO (Wald coeff.=2.850, p=0.091). This study pointed out that lung fat embolism and SFE are not purely biomechanical events, so the role of a PFO is not crucial in a lung fat embolism developing into a systemic embolism: the fat embolism is more of a biochemical and pathophysiological event than a biomechanical one. The appearance of a patent foramen ovale associated with a systemic fat embolism should be less emphasised: perhaps arteriovenous shunts and anastomoses between the functional and nutritive (i.e. systemic) circulation of the lungs play a more important role in developing an SFE than a PFO. Copyright © 2010 Elsevier Ltd. All rights reserved.

  2. On the assessment of the added value of new predictive biomarkers.

    PubMed

    Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas

    2013-07-29

    The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.

  3. Evaluating specificity of sequential extraction for chemical forms of lead in artificially-contaminated and field-contaminated soils.

    PubMed

    Tai, Yiping; McBride, Murray B; Li, Zhian

    2013-03-30

    In the present study, we evaluated a commonly employed modified Bureau Communautaire de Référence (BCR) 3-step sequential extraction procedure for its ability to distinguish forms of solid-phase Pb in soils with different sources and histories of contamination. When the modified BCR procedure was applied to mineral soils spiked with three forms of Pb (pyromorphite, hydrocerussite and nitrate salt), the added Pb was highly susceptible to dissolution in the operationally-defined "reducible" or "oxide" fraction regardless of form. When three different materials (mineral soil, organic soil and goethite) were spiked with soluble Pb nitrate, the BCR sequential extraction profiles revealed that soil organic matter was capable of retaining Pb in more stable and acid-resistant forms than silicate clay minerals or goethite. However, the BCR sequential extraction for field-collected soils with known and different sources of Pb contamination was not sufficiently discriminatory in the dissolution of soil Pb phases to allow soil Pb forms to be "fingerprinted" by this method. It is concluded that standard sequential extraction procedures are probably not very useful in predicting the lability and bioavailability of Pb in contaminated soils. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. [Sequential sampling plans for Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by the use of samples of variable size, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with each plot comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 samples for stands with low and high infestation, respectively.
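The decision lines of such a sequential plan follow Wald's construction. The sketch below uses a plain binomial SPRT on presence/absence counts with alpha = beta = 0.10; the two hypothesis proportions are illustrative assumptions bracketing the 2% economic threshold (the paper's plans were built from fitted negative binomial parameters, which change the slopes and intercepts):

```python
import math

def sprt_lines(p0, p1, alpha=0.10, beta=0.10):
    """Slope and intercepts of Wald SPRT decision lines d = s*n + h
    for the cumulative count d of infested plants after n plants."""
    g1 = math.log(p1 / p0)
    g2 = math.log((1 - p0) / (1 - p1))
    s = g2 / (g1 + g2)                                  # common slope
    h_low = math.log(beta / (1 - alpha)) / (g1 + g2)    # accept-H0 intercept
    h_high = math.log((1 - beta) / alpha) / (g1 + g2)   # accept-H1 intercept
    return s, h_low, h_high

def classify(count, n, s, h_low, h_high):
    """Sequential decision after n plants with `count` infested."""
    if count <= s * n + h_low:
        return "below threshold"     # accept H0: no control needed
    if count >= s * n + h_high:
        return "control needed"      # accept H1
    return "continue sampling"

# Hypothetical infestation proportions around the 2% economic threshold.
s, h_low, h_high = sprt_lines(p0=0.01, p1=0.03)
decision = classify(count=0, n=110, s=s, h_low=h_low, h_high=h_high)
print(decision)
```

Sampling stops as soon as the cumulative count leaves the "continue" band, which is what keeps the expected sample size below a fixed-size plan.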

  5. Sequential state discrimination and requirement of quantum dissonance

    NASA Astrophysics Data System (ADS)

    Pang, Chao-Qian; Zhang, Fu-Lin; Xu, Li-Fang; Liang, Mai-Lin; Chen, Jing-Ling

    2013-11-01

    We study the procedure for sequential unambiguous state discrimination. A qubit is prepared in one of two possible states and measured by two observers, Bob and Charlie, sequentially. A necessary condition for the state to be unambiguously discriminated by Charlie is the absence of entanglement between the principal qubit, prepared by Alice, and Bob's auxiliary system. In general, the procedure for both Bob and Charlie to distinguish between two nonorthogonal states conclusively relies on the availability of quantum discord, which is precisely the quantum dissonance when entanglement is absent. In Bob's measurement, the left discord is positively correlated with the information extracted by Bob, and the right discord enhances the information left to Charlie. When their product achieves its maximum, the probability for both Bob and Charlie to identify the state achieves its optimal value.

  6. Are patients referred to rehabilitation diagnosed accurately?

    PubMed

    Tederko, Piotr; Krasuski, Marek; Nyka, Izabella; Mycielski, Jerzy; Tarnacka, Beata

    2017-07-17

    An accurate diagnosis of the leading health condition and comorbidities is a prerequisite for safe and effective rehabilitation. The problem of diagnostic errors in physical and rehabilitation medicine (PRM) has not been addressed sufficiently. The responsibility of a referring physician is to determine indications and contraindications for rehabilitation. To assess the rate of and risk factors for inaccurate referral diagnoses (RD) in patients referred to a rehabilitation facility. We hypothesized that inaccurate RD would be more common in patients 1) referred by non-PRM physicians; 2) waiting longer for admission; 3) of older age. Retrospective observational study. 1000 randomly selected patients admitted between 2012 and 2016 to a day-rehabilitation center (DRC). University DRC specialized in musculoskeletal diseases. On admission all cases underwent clinical verification of RD. Inaccuracies regarding primary diagnoses and comorbidities were noted. The influence of several factors on the probability of inaccurate RD was analyzed with a multiple binary regression model applied to 6 categories of diseases. The rate of inaccurate RD was 25.2%. A higher frequency of inaccurate RD was noted among patients referred by non-PRM specialists (30.3% vs 17.3% in cases referred by PRM specialists). Application of logit regression showed a highly significant influence of the specialty of the referring physician on the odds of inaccurate RD (joint Wald test chi2(6)=38.98, p-value=0.000), controlling for the influence of other variables. This may reflect a suboptimal knowledge of the rehabilitation process and a tendency to neglect comorbidities among non-PRM specialists. The rate of inaccurate RD did not correlate with time between referral and admission (joint Wald test of all odds ratios equal to 1, chi2(6)=5.62, p-value=0.467); however, mean and median waiting times were relatively short (35.7 and 25 days, respectively). A high risk of overlooked multimorbidity was revealed in elderly patients (all odds ratios for the variable age significantly higher than 1). Hypotheses 1 and 3 were confirmed. Over 25% of patients referred to the DRC had inaccurate RD. Risk factors for inaccurate RD include referral by a non-PRM specialist and advanced age. Verification of RD should be routinely introduced into PRM practice.

  7. Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Soyer, Emre

    2011-01-01

    Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…

  8. The utility of Bayesian predictive probabilities for interim monitoring of clinical trials

    PubMed Central

    Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn

    2014-01-01

    Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
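For a single-arm binary-endpoint trial, the interim predictive probability described above can be computed in closed form from the beta-binomial posterior predictive distribution. A minimal sketch; the prior, interim data, and success rule (at least r_min total responders at the final analysis) are all hypothetical values for illustration:

```python
import math

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def predictive_probability(x, n, n_max, r_min, a=1.0, b=1.0):
    """Probability that the trial, continued to n_max subjects, ends with at
    least r_min responders, given x responders in n so far (Beta(a, b) prior).

    Future responses follow the beta-binomial posterior predictive distribution.
    """
    m = n_max - n                       # subjects still to be enrolled
    a_post, b_post = a + x, b + n - x   # Beta posterior parameters
    pp = 0.0
    for j in range(m + 1):              # j = number of future responders
        if x + j >= r_min:
            log_pmf = (math.lgamma(m + 1) - math.lgamma(j + 1) - math.lgamma(m - j + 1)
                       + log_beta(a_post + j, b_post + m - j) - log_beta(a_post, b_post))
            pp += math.exp(log_pmf)
    return pp

# Hypothetical interim look: 12/20 responders, final N of 40,
# success pre-specified as at least 24 total responders.
pp = predictive_probability(x=12, n=20, n_max=40, r_min=24)
print(f"predictive probability of success: {pp:.3f}")
```

A futility rule might stop the trial when this quantity falls below a pre-specified floor; unlike conditional power at a fixed alternative, the calculation averages over the posterior for the response rate.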

  9. Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation.

    PubMed

    Kong, Danyang; Asplund, Christopher L; Ling, Aiqing; Chee, Michael W L

    2015-08-01

    Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Sleep laboratory. There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. © 2015 Associated Professional Sleep Societies, LLC.
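The "conditional probability of temporal events" that participants track is the hazard function of the foreperiod distribution: the probability that the stimulus occurs now, given that it has not occurred yet. A small sketch showing why uniform foreperiods produce a rising hazard (and hence a foreperiod effect) while a geometric, exponential-like distribution keeps it flat:

```python
def hazard(probabilities):
    """Conditional probability that the target occurs at step i,
    given that it has not occurred at any earlier step."""
    remaining = 1.0
    rates = []
    for p in probabilities:
        rates.append(p / remaining)
        remaining -= p
    return rates

# Uniform foreperiod distribution over four onset times: the hazard rises,
# so preparation should grow as time elapses without a stimulus.
uniform_hazard = hazard([0.25, 0.25, 0.25, 0.25])

# A geometric (discretized exponential) distribution is memoryless,
# so the hazard stays constant across onset times.
p = 0.5
geometric_hazard = hazard([p * (1 - p) ** k for k in range(4)])
print(uniform_hazard, geometric_hazard)
```

The four onset times and probabilities here are illustrative, not the study's actual foreperiod values.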

  10. Serum dehydroepiandrosterone sulfate and incident depression in the elderly: the Pro.V.A. study.

    PubMed

    Veronese, Nicola; De Rui, Marina; Bolzetta, Francesco; Zambon, Sabina; Corti, Maria Chiara; Baggio, Giovannella; Toffanello, Elena Debora; Crepaldi, Gaetano; Perissinotto, Egle; Manzato, Enzo; Sergi, Giuseppe

    2015-08-01

    Dehydroepiandrosterone sulfate (DHEAS) appears to have a protective effect against depression, but contrasting findings have been reported. Therefore, we investigated whether high serum DHEAS levels were associated with any protective effect on incident depression and incident severe depression in a representative group of elderly men and women. In a population-based cohort longitudinal study in the general community, 789 older participants without depression and cognitive impairment at the baseline were included, among 3,099 screened subjects. Serum DHEAS levels were determined based on blood samples; incident depression and severe depression were diagnosed by means of the Geriatric Depression Scale (GDS) and confirmed by geriatricians skilled in psychogeriatric medicine. No baseline differences were found in GDS across age- and gender-specific tertiles of serum DHEAS. Over 4.4 years of follow-up, 137 new cases of depression were recorded. Of them, 35 among men and 64 among women were cases of incident severe depression. Cox's regression analysis, adjusted for potential confounders, revealed that higher DHEAS levels were associated with a reduced risk of incident depression irrespective of gender (men: HR: 0.30; 95% CI: 0.09-0.96; Wald χ2 = 4.09; df = 1; p = 0.04; women: HR: 0.31; 95% CI: 0.14-0.69; Wald χ2 = 8.37; df = 1; p = 0.004) and of severe incident depression only in men (HR: 0.25; 95% CI: 0.06-0.99; Wald χ2 = 4.05; df = 1; p = 0.04). Higher serum DHEAS levels were found to be significantly protective against the onset of depression irrespective of gender, whereas only in men was this association also found for incident severe depression. Copyright © 2015 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. Sequential dynamics in visual short-term memory.

    PubMed

    Kool, Wouter; Conway, Andrew R A; Turk-Browne, Nicholas B

    2014-10-01

    Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects.

  12. The Sequential Probability Ratio Test: An efficient alternative to exact binomial testing for Clean Water Act 303(d) evaluation.

    PubMed

    Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry

    2017-05-01

    The United States's Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to have comparable Type I and Type II error rates as the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as SPRT to current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
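
    As a hedged sketch of the test the abstract describes, Wald's SPRT on binary impairment indicators can be written in a few lines. The hypothesized exceedance rates `p0` and `p1` and the error rates `alpha` and `beta` below are illustrative placeholders, not California's regulatory values:

```python
import math

def sprt_binomial(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald SPRT for H0: p = p0 vs H1: p = p1 on a stream of 0/1 samples.

    Returns ("accept H0" | "accept H1" | "continue", number of samples used)."""
    # Wald's approximate decision boundaries on the log-likelihood ratio
    a = math.log(beta / (1 - alpha))       # lower boundary -> accept H0
    b = math.log((1 - beta) / alpha)       # upper boundary -> accept H1
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "continue", len(samples)
```

    Because the boundaries are checked after every sample, the test typically stops well before a fixed-sample binomial test would have collected all its data, which is the efficiency gain the abstract reports.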

  13. Sequential dynamics in visual short-term memory

    PubMed Central

    Conway, Andrew R. A.; Turk-Browne, Nicholas B.

    2014-01-01

    Visual short-term memory (VSTM) is thought to help bridge across changes in visual input, and yet many studies of VSTM employ static displays. Here we investigate how VSTM copes with sequential input. In particular, we characterize the temporal dynamics of several different components of VSTM performance, including: storage probability, precision, variability in precision, guessing, and swapping. We used a variant of the continuous-report VSTM task developed for static displays, quantifying the contribution of each component with statistical likelihood estimation, as a function of serial position and set size. In Experiments 1 and 2, storage probability did not vary by serial position for small set sizes, but showed a small primacy effect and a robust recency effect for larger set sizes; precision did not vary by serial position or set size. In Experiment 3, the recency effect was shown to reflect an increased likelihood of swapping out items from earlier serial positions and swapping in later items, rather than an increased rate of guessing for earlier items. Indeed, a model that incorporated responding to non-targets provided a better fit to these data than alternative models that did not allow for swapping or that tried to account for variable precision. These findings suggest that VSTM is updated in a first-in-first-out manner, and they bring VSTM research into closer alignment with classical working memory research that focuses on sequential behavior and interference effects. PMID:25228092

  14. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on a sequential hypothesis testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs a 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquired multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared with a system that relies on static measurements.

  15. Differential Response of Immunohistochemically Defined Breast Cancer Subtypes to Anthracycline-Based Adjuvant Chemotherapy with or without Paclitaxel

    PubMed Central

    Fountzilas, George; Dafni, Urania; Bobos, Mattheos; Batistatou, Anna; Kotoula, Vassiliki; Trihia, Helen; Malamou-Mitsi, Vassiliki; Miliaras, Spyros; Chrisafi, Sofia; Papadopoulos, Savvas; Sotiropoulou, Maria; Filippidis, Theodoros; Gogas, Helen; Koletsa, Triantafyllia; Bafaloukos, Dimitrios; Televantou, Despina; Kalogeras, Konstantine T.; Pectasides, Dimitrios; Skarlos, Dimosthenis V.; Koutras, Angelos; Dimopoulos, Meletios A.

    2012-01-01

    Background The aim of the present study was to investigate the efficacy of adjuvant dose-dense sequential chemotherapy with epirubicin, paclitaxel, and CMF in subgroups of patients with high-risk operable breast cancer, according to tumor subtypes defined by immunohistochemistry (IHC). Materials and Methods Formalin-fixed paraffin-embedded (FFPE) tumor tissue samples from 1,039 patients participating in two adjuvant dose-dense sequential chemotherapy phase III trials were centrally assessed in tissue microarrays by IHC for 6 biological markers, that is, estrogen receptor (ER), progesterone receptor (PgR), HER2, Ki67, cytokeratin 5 (CK5), and EGFR. The majority of the cases were further evaluated for HER2 amplification by FISH. Patients were classified as: luminal A (ER/PgR-positive, HER2-negative, Ki67low); luminal B (ER/PgR-positive, HER2-negative, Ki67high); luminal-HER2 (ER/PgR-positive, HER2-positive); HER2-enriched (ER-negative, PgR-negative, HER2-positive); triple-negative (TNBC) (ER-negative, PgR-negative, HER2-negative); and basal core phenotype (BCP) (TNBC, CK5-positive and/or EGFR-positive). Results After a median follow-up time of 105.4 months the 5-year disease-free survival (DFS) and overall survival (OS) rates were 73.1% and 86.1%, respectively. Among patients with HER2-enriched tumors there was a significant benefit in both DFS and OS (log-rank test; p = 0.021 and p = 0.006, respectively) for those treated with paclitaxel. The subtype classification was found to be of both predictive and prognostic value. Setting luminal A as the referent category, the HR for relapse for patients with TNBC, adjusted for prognostic factors, was 1.91 (95% CI: 1.31–2.80, Wald's p = 0.001), and the adjusted HR for death was 2.53 (95% CI: 1.62–3.60, p < 0.001). Site of and time to first relapse differed according to subtype. Locoregional relapses and brain metastases were more frequent in patients with TNBC, while liver metastases were more often seen in patients with HER2-enriched tumors.
Conclusions Triple-negative phenotype is of adverse prognostic value for DFS and OS in patients treated with adjuvant dose-dense sequential chemotherapy. In the pre-trastuzumab era, the HER2-enriched subtype predicts favorable outcome following paclitaxel-containing treatment. PMID:22679488

  16. [Fractional exhaled nitric oxide in monitoring and therapeutic management of asthma].

    PubMed

    Melo, Bruno; Costa, Patrício; Afonso, Ariana; Machado, Vânia; Moreira, Carla; Gonçalves, Augusta; Gonçalves, Jean-Pierre

    2014-01-01

    Asthma is a chronic respiratory disease characterized by bronchial hyper-responsiveness and inflammation. The bronchial inflammation in these patients can be monitored by measuring fractional exhaled nitric oxide. This study aims to determine the association of fractional exhaled nitric oxide with peak expiratory flow and with asthma control as defined by the Global Initiative for Asthma. Observational, analytical and cross-sectional study of children with asthma, 6-12 years old, followed at the Respiratory Pathology Outpatient Clinic of Braga Hospital. Sociodemographic and clinical information was collected through a questionnaire. Fractional exhaled nitric oxide and peak expiratory flow were determined with a portable Niox Mino® analyzer and a flow meter, respectively. The sample consisted of 101 asthmatic children, 63 (62.4%) males and 38 (37.6%) females. The mean age of participants was 9.18 (1.99) years. Logistic regression performed with the cutoff value obtained from the ROC curve revealed that fractional exhaled nitric oxide (b(FENO classes) = 0.85; χ²Wald(1) = 8.71; OR = 2.33; p = 0.003) has a statistically significant effect on the probability of changing level of asthma control. The odds ratio of going from "controlled" to "partly controlled/uncontrolled" is 2.33 per each level of fractional exhaled nitric oxide. That is, the odds that an asthmatic child moves from "controlled" to "partly controlled/uncontrolled", given a one-level change in fractional exhaled nitric oxide, increase by 133%.
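
    The link between the reported logistic coefficient, the odds ratio, and the "133% increase" can be checked in two lines. Note that the abstract reports the coefficient rounded to 0.85, so exp(0.85) ≈ 2.34 rather than the published OR of 2.33, which was computed from the unrounded coefficient:

```python
import math

# Coefficient for FENO class as reported (rounded) in the abstract.
b_feno = 0.85

odds_ratio = math.exp(b_feno)          # OR per one-level increase in FENO
pct_increase = (odds_ratio - 1) * 100  # percent increase in the odds

print(round(odds_ratio, 2), round(pct_increase))  # 2.34 134
```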

  17. Mining sequential patterns for protein fold recognition.

    PubMed

    Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I

    2008-02-01

    Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequences. A classifier uses the extracted sequential patterns to classify proteins into the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.

  18. Black holes in vector-tensor theories and their thermodynamics

    NASA Astrophysics Data System (ADS)

    Fan, Zhong-Ying

    2018-01-01

    In this paper, we study Einstein gravity either minimally or non-minimally coupled to a vector field which breaks the gauge symmetry explicitly, in general dimensions. We first consider a minimal theory which is simply the Einstein-Proca theory extended with a quartic self-interaction term for the vector field. We obtain its general static maximally symmetric black hole solution and study the thermodynamics using the Wald formalism. The aspects of the solution are much like those of a Reissner-Nordström black hole, in spite of the fact that a global charge cannot be defined for the vector. For non-minimal theories, we obtain a number of exact black hole solutions, depending on the parameters of the theories. In particular, many of the solutions are general static and have maximal symmetry. However, there are some subtleties and ambiguities in the derivation of the first laws because the existence of an algebraic degree of freedom of the vector in general invalidates the Wald entropy formula. The thermodynamics of these solutions deserves further study.

  19. From Wald to Savage: homo economicus becomes a Bayesian statistician.

    PubMed

    Giocoli, Nicola

    2013-01-01

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. An economic agent is deemed rational when she maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. The latter's acknowledged fiasco to achieve a reinterpretation of traditional inference techniques along subjectivist and behaviorist lines raises the puzzle of how a failed project in statistics could turn into such a big success in economics. Possible answers call into play the emphasis on consistency requirements in neoclassical theory and the impact of the postwar transformation of U.S. business schools. © 2012 Wiley Periodicals, Inc.

  20. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.

  1. Multilevel sequential Monte Carlo samplers

    DOE PAGES

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...

    2016-08-24

    Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias, with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels ∞ > h_0 > h_1 > ... > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
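
    The telescoping identity behind MLMC can be illustrated with a toy, non-PDE example: estimate E[X²] for X ~ U(0, 1), where level l evaluates the integrand on a grid of step h_l = 2^(-l), and each correction term uses coupled samples at two consecutive levels. The quantity of interest, grid rule, and sample counts below are invented for illustration and are not from the paper:

```python
import random

def g(x, level):
    # toy quantity of interest: x**2 evaluated after snapping x to a grid
    # of step h_l = 2**-level, so finer levels are more accurate
    h = 2.0 ** (-level)
    return (round(x / h) * h) ** 2

def mlmc_estimate(L, n_per_level, seed=0):
    """Telescoping MLMC estimator of E[g_L(X)] for X ~ U(0, 1)."""
    rng = random.Random(seed)
    # coarsest level, estimated directly
    est = sum(g(rng.random(), 0) for _ in range(n_per_level[0])) / n_per_level[0]
    # corrections E[g_l - g_{l-1}], each using coupled samples at both levels
    for l in range(1, L + 1):
        n = n_per_level[l]
        diff = 0.0
        for _ in range(n):
            x = rng.random()                 # the same draw feeds both levels
            diff += g(x, l) - g(x, l - 1)
        est += diff / n
    return est
```

    Because the coupled differences have small variance, the correction terms need comparatively few samples, which is the source of the cost reduction; with enough levels the estimate approaches E[X²] = 1/3.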

  2. Cell wall invertase as a regulator in determining sequential development of endosperm and embryo through glucose signaling early in seed development.

    PubMed

    Wang, Lu; Liao, Shengjin; Ruan, Yong-Ling

    2013-01-01

    Seed development depends on coordination among embryo, endosperm and seed coat. Endosperm undergoes nuclear division soon after fertilization, whereas embryo remains quiescent for a while. Such a developmental sequence is of great importance for proper seed development. However, the underlying mechanism remains unclear. Recent results on the cellular domain- and stage-specific expression of invertase genes in cotton and Arabidopsis revealed that cell wall invertase may positively and specifically regulate nuclear division of endosperm after fertilization, thereby playing a role in determining the sequential development of endosperm and embryo, probably through glucose signaling.

  3. The Relationship between the Emotional Intelligence of Secondary Public School Principals and School Performance

    ERIC Educational Resources Information Center

    Ashworth, Stephanie R.

    2013-01-01

    The study examined the relationship between secondary public school principals' emotional intelligence and school performance. The correlational study employed an explanatory sequential mixed methods model. The non-probability sample consisted of 105 secondary public school principals in Texas. The emotional intelligence characteristics of the…

  4. Mutual Information Item Selection in Adaptive Classification Testing

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2007-01-01

    A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local-…

  5. Transportation Efficiency Act to End Oil Addiction: Securing America’s Future

    DTIC Science & Technology

    2011-03-17

    250 to $300 billion.20 These numbers represent an uneasy dependence that the nation maintains on foreign suppliers. General Charles Wald, USAF (RET... Lisa A, The Geopolitics of Energy: Emerging Trends, Changing Landscapes, Uncertain Times (Washington, DC: Center for Strategic and International

  6. The polymorphism of the ATP-binding cassette transporter 1 gene modulates Alzheimer disease risk in Chinese Han ethnic population.

    PubMed

    Sun, Yi-Min; Li, Hong-Lei; Guo, Qi-Hao; Wu, Ping; Hong, Zhen; Lu, Chuan-Zhen; Wu, Zhi-Ying

    2012-07-01

    Recent studies highlight a potential role of cholesterol metabolic disturbance in the pathophysiology of Alzheimer disease (AD). The adenosine triphosphate (ATP)-binding cassette transporter 1 (ABCA1) gene resides in proximity to linkage peaks on chromosome 9q that influence AD, and it plays a key role in cellular cholesterol efflux in the brain. We studied the role of the R219K and V825I polymorphisms of ABCA1 in modulating the risk of AD in 321 AD patients and 349 controls of Chinese Han ethnicity. Genotyping of R219K and V825I was performed by PCR-restriction fragment length polymorphism analysis. The genotype distribution of R219K differed, with more RK in the total AD group (χ² = 8.705, df = 2, p = 0.013), the late-onset AD (LOAD) group (χ² = 10.636, df = 2, p = 0.005), the APOE non-ε4ε4 group (χ² = 9.900, df = 2, p = 0.007), and the female AD group (χ² = 8.369, df = 2, p = 0.015). Logistic regression showed that the risk of AD increased in RK carriers in the total AD group (Wald = 6.102, df = 1, p = 0.014, odds ratio [OR]: 1.546, 95% confidence interval [95% CI]: 1.094-2.185), the LOAD group (Wald = 7.746, df = 1, p = 0.005, OR: 1.921, 95% CI: 1.213-3.041), and the APOE non-ε4ε4 group (Wald = 6.399, df = 1, p = 0.011, OR: 1.586, 95% CI: 1.109-2.266). The K allele (RK + KK) also increased the risk of AD compared with the RR genotype in the LOAD group (Wald = 4.750, df = 1, p = 0.029, OR: 1.619, 95% CI: 1.050-2.497). However, no such difference was found for V825I. For R219K, age at onset (AAO) was significantly lower, by 4.9 years on average, in patients of KK genotype than in those of RK genotype in the APOE ε4-carrying group, and higher by 5.5 years in patients of KK genotype than in those of RR genotype in the APOE ε4 non-carrying group. For V825I, AAO was decreased by 4.3 years in the II genotype compared with the VV genotype in the APOE ε4 non-carrying group and by 3.4 years in the APOE ε4ε4 non-carrying group. The results indicate that the RK genotype or K allele (RK + KK) of R219K may be related to the development of AD in eastern China.

  7. [The use of the sequential mathematical analysis for the determination of the driver's seat position inside the car passenger compartment from the injuries to the extremities in the case of a traffic accident].

    PubMed

    Habova, K; Smirenin, Eksp; Fetisov, D; Tamberg, Eksp

    2015-01-01

    The objective of the present study was to determine the diagnostic coefficients (DC) for the injuries to the upper and lower extremities inflicted on vehicle drivers inside the passenger compartment in the case of a traffic accident. We analysed archival expert documents collected from 45 regional bureaus of forensic medical expertise during the period from 1995 to 2014, containing the results of examination of 200 corpses and 300 survivors who had suffered injuries in traffic accidents. Statistical and mathematical treatment of these materials with the use of sequential mathematical analysis based on the Bayes and Wald formulas yielded diagnostic coefficients that make it possible to identify the most informative features characterizing the driver of a vehicle. In the case of a lethal outcome, the most significant injuries include bleeding in the posterior left elbow region (DC +7.6), skin scratches on the palmar surface of the right wrist (DC +7.6), bleeding in the posterior region of the left lower leg (DC +7.6), wounds on the dorsal surface of the left wrist (DC +6.3), bruises on the anterior surface of the left knee (DC +6.3), etc. The most informative features in survivors of traffic accidents are bone fractures (DC +7.0), ligament sprains and dislocation of the right talocrural joint (DC +6.5), fractures of the left kneecap and left tibial epiphysis (DC +5.4), haemorrhage and bruises in the anterior right knee region (DC +5.4 each), and skin scratches in the right posterior carpal region (DC +5.1). It is concluded that the use of the diagnostic coefficients makes it possible to draw the experts' attention to the above features and to objectively determine the driver's seat position inside the car passenger compartment in the case of a traffic accident. Moreover, such an approach contributes to improving the quality of expert conclusions and the results of forensic medical expertise of the circumstances of traffic accidents.
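
    The sequential accumulation of diagnostic coefficients described above can be sketched as follows. The decibel form of the coefficients and the ±13 decision thresholds (corresponding to roughly 5% error rates in a Wald–Kullback sequential procedure) are assumptions for illustration, not values taken from the study:

```python
import math

def dc(p_driver, p_passenger):
    # diagnostic coefficient, in decibels, for one feature
    # (log-likelihood ratio scaled by 10, Wald/Kullback style)
    return 10 * math.log10(p_driver / p_passenger)

def classify(coefficients, threshold=13.0):
    """Sum coefficients for the observed features in sequence and stop as
    soon as the running total crosses either decision boundary."""
    total = 0.0
    for c in coefficients:
        total += c
        if total >= threshold:
            return "driver"
        if total <= -threshold:
            return "passenger"
    return "undecided"
```

    Under this scheme, observing both a DC +7.6 and a DC +6.3 feature (total +13.9) would already suffice to classify the person as the driver, whereas a single feature would leave the question open.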

  8. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
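
    The idea of drawing parameters from a posterior with a sequential Monte Carlo algorithm can be sketched on a deliberately simple target. The example below tempers from the prior to the posterior of a one-dimensional Gaussian mean; the model, schedule, and proposal scale are illustrative assumptions, not the estimators of the article (whose partition function is intractable):

```python
import math
import random

def smc_gaussian_mean(y, n=3000, steps=20, seed=1):
    """Tempered SMC sketch for the posterior of a Gaussian mean.

    Prior: x ~ N(0, 1); likelihood: y ~ N(x, 1). The exact posterior is
    N(y/2, 1/2), so the particle mean should land near y/2."""
    rng = random.Random(seed)
    loglik = lambda x: -0.5 * (y - x) ** 2
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]       # start from the prior
    for k in range(1, steps + 1):
        beta = k / steps                               # inverse-temperature schedule
        # incremental importance weights for raising the likelihood power
        ws = [math.exp((1.0 / steps) * loglik(x)) for x in xs]
        xs = rng.choices(xs, weights=ws, k=n)          # multinomial resampling
        # one Metropolis move per particle, targeting prior * likelihood**beta
        logp = lambda x: -0.5 * x * x + beta * loglik(x)
        for i in range(n):
            prop = xs[i] + rng.gauss(0.0, 0.5)
            if math.log(rng.random()) < logp(prop) - logp(xs[i]):
                xs[i] = prop
    return xs
```

    Reweighting, resampling, and moving the particles at each temperature is the generic SMC pattern; for inverse statistical mechanics the likelihood term would involve the estimated partition function rather than a closed form.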

  9. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  10. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    ERIC Educational Resources Information Center

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…

  11. International Defense Acquisition Management and the Fifth-Generation Fighter Quandary

    DTIC Science & Technology

    2012-12-17

    Wald, USAF Ret) stated “We may be on verge of building our last manned fighter” (Mitchell, 2012). So, while reasonable people can disagree, this...New York Times, 22 June, Section 3, Page 1. Daniel, Lisa (2011), Plan Improves Navy, Marine Corps Air Capabilities, American Forces Press Service

  12. Dosimetric effects of patient rotational setup errors on prostate IMRT treatments

    NASA Astrophysics Data System (ADS)

    Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.

    2006-10-01

    The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.

  13. Factors, Practices, and Policies Influencing Students' Upward Transfer to Baccalaureate-Degree Programs and Institutions: A Mixed Methods Analysis

    ERIC Educational Resources Information Center

    LaSota, Robin Rae

    2013-01-01

    My dissertation utilizes an explanatory, sequential mixed-methods research design to assess factors influencing community college students' transfer probability to baccalaureate-granting institutions and to present promising practices in colleges and states directed at improving upward transfer, particularly for low-income and first-generation…

  14. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  15. Upregulation of transmitter release probability improves a conversion of synaptic analogue signals into neuronal digital spikes

    PubMed Central

    2012-01-01

    Action potentials in neurons and graded signals at synapses are the primary codes in the brain. In terms of their functional interaction, previous studies focused on the influence of presynaptic spike patterns on synaptic activity. How synapse dynamics quantitatively regulate the encoding of postsynaptic digital spikes remains unclear. We investigated this question at unitary glutamatergic synapses on cortical GABAergic neurons, especially the quantitative influence of release probability on synapse dynamics and neuronal encoding. Glutamate release probability and synaptic strength are proportionally upregulated by presynaptic sequential spikes. The upregulation of release probability and the efficiency of probability-driven synaptic facilitation are strengthened by elevating presynaptic spike frequency and Ca2+. The upregulation of release probability improves spike capacity and timing precision in the postsynaptic neuron. These results suggest that the upregulation of presynaptic glutamate release facilitates the conversion of synaptic analogue signals into digital spikes in postsynaptic neurons, i.e., a functional compatibility between presynaptic and postsynaptic partners. PMID:22852823

  16. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
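The probability-perturbation idea above can be illustrated on a toy decision node (hypothetical payoffs, probabilities, and perturbation rule; the paper's framework and software are not reproduced here):

```python
def expected_value(lottery):
    """Expected payoff of a chance node given (probability, payoff) pairs."""
    return sum(p * v for p, v in lottery)

def best_action(actions):
    """The expected-value-maximizing strategy at a decision node."""
    return max(actions, key=lambda a: expected_value(actions[a]))

def pessimistic(lottery, shift):
    """Move probability mass `shift` from the best outcome to the worst:
    a crude pessimistic perturbation of the stated probabilities."""
    lottery = sorted(lottery, key=lambda pv: pv[1])  # worst payoff first
    if len(lottery) == 1:
        return lottery
    (pw, vw), *mid, (pb, vb) = lottery
    moved = min(shift, pb)
    return [(pw + moved, vw)] + mid + [(pb - moved, vb)]

# Toy decision node: launch a product (risky) vs. license it (safe).
actions = {
    "launch":  [(0.6, 100.0), (0.4, -50.0)],  # (probability, payoff)
    "license": [(1.0, 30.0)],
}
nominal = best_action(actions)  # "launch": EV 40 beats 30
perturbed = {a: pessimistic(lot, 0.1) for a, lot in actions.items()}
robust = best_action(perturbed) == nominal  # False: optimum flips to "license"
```

Shifting just 0.1 of probability mass toward the worst outcomes makes the launch lottery's EV drop to 25, so the EV-maximizing strategy is optimal but not robust — the kind of stability question the framework formalizes.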

  17. Models based on value and probability in health improve shared decision making.

    PubMed

    Ortendahl, Monica

    2008-10-01

    Diagnostic reasoning and treatment decisions are a key competence of doctors. A model based on values and probability provides a conceptual framework for clinical judgments and decisions, and also facilitates the integration of clinical and biomedical knowledge into a diagnostic decision. Both value and probability are usually estimated values in clinical decision making. Therefore, model assumptions and parameter estimates should be continually assessed against data, and models should be revised accordingly. Introducing parameter estimates for both value and probability, which usually pertain in clinical work, gives the model labelled subjective expected utility. Estimated values and probabilities are involved sequentially for every step in the decision-making process. Introducing decision-analytic modelling gives a more complete picture of variables that influence the decisions carried out by the doctor and the patient. A model revised for perceived values and probabilities by both the doctor and the patient could be used as a tool for engaging in a mutual and shared decision-making process in clinical work.

  18. Language experience changes subsequent learning

    PubMed Central

    Onnis, Luca; Thiessen, Erik

    2013-01-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510

  19. Sequential measurement of conjugate variables as an alternative quantum state tomography.

    PubMed

    Di Lorenzo, Antonio

    2013-01-04

    It is shown how the initial state of a one-dimensional system can be reconstructed by sequentially measuring two conjugate variables. The procedure relies on the quasicharacteristic function, the Fourier transform of the Wigner quasiprobability. The proper characteristic function obtained by Fourier transforming the experimentally accessible joint probability of observing "position" then "momentum" (or vice versa) can be expressed as a product of the quasicharacteristic functions of the two detectors and the unknown one of the quantum system. This allows state reconstruction through the sequence (1) data collection, (2) Fourier transform, (3) algebraic operation, and (4) inverse Fourier transform. The strength of the measurement should be intermediate for the procedure to work.

  20. Power Enhancement in High Dimensional Cross-Sectional Tests

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Yao, Jiawei

    2016-01-01

    We propose a novel technique to boost the power of testing a high-dimensional vector H0: θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
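A minimal sketch of the screening idea (with a hypothetical threshold choice; this is not the paper's exact statistic, scaling, or regularity conditions):

```python
import math

def power_enhancement(theta_hat, se, p):
    """Sum of squared standardized components surviving a conservative
    screening threshold. Under the null every standardized component stays
    below the threshold with high probability, so the term is 0; under a
    sparse alternative the few violated components make it large."""
    delta = math.sqrt(2.0 * math.log(p))  # hypothetical threshold choice
    return sum((t / s) ** 2 for t, s in zip(theta_hat, se)
               if abs(t / s) > delta)

# Null-like estimates contribute nothing; one large component dominates.
null_like = power_enhancement([0.1] * 100, [1.0] * 100, 100)        # 0
sparse = power_enhancement([0.1] * 99 + [5.0], [1.0] * 100, 100)    # 25.0
```

Adding such a component to a pivotal quadratic-form statistic leaves the null distribution (asymptotically) untouched while boosting power against sparse alternatives.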

  1. Risk-sensitive reinforcement learning.

    PubMed

    Shen, Yun; Tobia, Michael J; Sommer, Tobias; Obermayer, Klaus

    2014-07-01

    We derive a family of risk-sensitive reinforcement learning methods for agents who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subject's responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition, we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
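The core mechanism — a utility function warping the TD error before the Q-update — can be sketched on a toy one-state task (the task, rewards, and piecewise-linear utility here are illustrative assumptions, not the paper's model or experiment):

```python
import random

def utility(td, k_neg):
    """Piecewise-linear utility on the TD error: losses weighted by k_neg.
    k_neg > 1 gives risk-averse (prospect-theory-like) updates."""
    return td if td >= 0 else k_neg * td

def q_learn(k_neg, steps=8000, alpha=0.02, gamma=0.9, eps=0.2, seed=1):
    rng = random.Random(seed)
    Q = [0.0, 0.0]  # action 0: safe +1; action 1: risky +8 or -5 (mean +1.5)
    for _ in range(steps):
        greedy = 0 if Q[0] >= Q[1] else 1
        a = rng.randrange(2) if rng.random() < eps else greedy
        r = 1.0 if a == 0 else (8.0 if rng.random() < 0.5 else -5.0)
        td = r + gamma * max(Q) - Q[a]       # one-state TD error
        Q[a] += alpha * utility(td, k_neg)   # utility warps the update
    return Q

q_neutral = q_learn(k_neg=1.0)  # risk-neutral: risky action valued higher
q_averse = q_learn(k_neg=3.0)   # risk-averse: safe action valued higher
```

With k_neg = 1 this is ordinary Q-learning and the higher-mean risky action wins; overweighting negative TD errors depresses the risky action's value, reproducing the gain/loss asymmetry the paper models.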

  2. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudonoise (PN) spread spectrum systems require a very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm comprising a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell-time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to the optimum detection algorithm and to the fixed dwell-time system.
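The Wald test underlying the variable dwell-time algorithm can be sketched for a stream of power measurements (an illustrative Gaussian measurement model with hypothetical parameters, not the paper's detector statistics):

```python
import math

def sprt_gaussian(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT deciding between H0: N(mu0, sigma^2) (wrong code phase)
    and H1: N(mu1, sigma^2) (correct code phase).

    Returns (decision, samples consumed); decision is "H0", "H1", or
    "undecided" if the data run out before a boundary is crossed."""
    upper = math.log((1 - beta) / alpha)   # crossing above accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing below accepts H0
    llr, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment log f1(x)/f0(x) per observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n
```

The dwell time is variable precisely because the sample count at which the cumulative log-likelihood ratio crosses a Wald boundary depends on the data; clearly wrong phases are rejected quickly.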

  3. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
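The RSA ingredient (ii) can be sketched in its simplest one-dimensional form, the Rényi car-parking process (illustrative parameters; the paper's coupling to bulk diffusion, which converts RSA steps to physical time, is omitted):

```python
import random

def rsa_parking(length, attempts, rod=1.0, seed=0):
    """1D random sequential adsorption: rods of fixed length land at
    uniformly random positions and stick only if they do not overlap
    any previously adsorbed rod. Returns the covered fraction."""
    rng = random.Random(seed)
    placed = []  # left endpoints of adsorbed rods
    for _ in range(attempts):
        x = rng.uniform(0.0, length - rod)
        if all(x + rod <= p or p + rod <= x for p in placed):
            placed.append(x)
    return len(placed) * rod / length

# After many attempts the coverage approaches the 1D jamming limit
# (Renyi's parking constant, roughly 0.7476).
coverage = rsa_parking(100.0, 10000)
```

Late in the process most attempts are rejected, which is exactly the geometric blocking effect the model combines with reaction kinetics.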

  4. Sequential bearings-only-tracking initiation with particle filtering method.

    PubMed

    Liu, Bin; Hao, Chengpeng

    2013-01-01

    The tracking initiation problem is examined in the context of autonomous bearings-only-tracking (BOT) of a single appearing/disappearing target in the presence of clutter measurements. In general, this problem suffers from a combinatorial explosion in the number of potential tracks resulting from the uncertainty in the linkage between the target and the measurement (a.k.a. the data association problem). In addition, the nonlinear measurements lead to a non-Gaussian posterior probability density function (pdf) in the optimal Bayesian sequential estimation framework. The consequence of this nonlinear/non-Gaussian context is the absence of a closed-form solution. This paper models the linkage uncertainty and the nonlinear/non-Gaussian estimation problem jointly with solid Bayesian formalism. A particle filtering (PF) algorithm is derived for estimating the model's parameters in a sequential manner. Numerical results show that the proposed solution provides a significant benefit over the most commonly used methods, IPDA and IMMPDA. The posterior Cramér-Rao bounds are also used for performance evaluation.
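The sequential-estimation machinery can be sketched with a generic bootstrap particle filter on a 1D random-walk state observed in Gaussian noise (an illustrative stand-in; the paper's bearings-only measurement model and data-association variables are not reproduced):

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2) observed
    as y_t = x_t + N(0, r^2). Returns posterior-mean estimates."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    estimates = []
    for y in observations:
        # propagate each particle through the motion model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight by the observation likelihood (Gaussian, up to a constant)
        weights = [math.exp(-(y - x) ** 2 / (2.0 * r * r)) for x in particles]
        # multinomial resampling concentrates particles in likely regions
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

Because the posterior is represented by weighted samples rather than a closed form, the same propagate/weight/resample loop carries over to nonlinear bearings measurements and non-Gaussian posteriors.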

  5. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  6. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  7. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
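For a single parameter of interest, the three tests the note distinguishes reduce to familiar one-degree-of-freedom forms (generic textbook definitions, not the paper's notation):

```python
def wald_stat(beta_hat, se):
    """Wald statistic for H0: beta = 0, referred to chi-square(1)."""
    return (beta_hat / se) ** 2

def lr_stat(loglik_full, loglik_null):
    """Likelihood-ratio statistic, 2*(l1 - l0), also chi-square(1)
    for a single constraint."""
    return 2.0 * (loglik_full - loglik_null)

def score_stat(score_at_null, info_at_null):
    """Score (Lagrange multiplier) statistic: U(0)^2 / I(0), evaluated
    at the null fit only."""
    return score_at_null ** 2 / info_at_null
```

All three are asymptotically equivalent under the null but can disagree in finite samples, which is why DIF studies need to say which one they used.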

  8. Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Zeileis, Achim

    2013-01-01

    The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…

  9. Physical Disabilities: Education and Related Services, Fall 2001-Spring 2002.

    ERIC Educational Resources Information Center

    Kulik, Barbara J., Ed.

    2001-01-01

    These journal articles, which address the education of students with physical disabilities, include the following: (1) an interview with Jim Silcock, a Joan Wald Bacon Award Recipient for 2000; (2) Students with Orthopedic Impairments in the General Education Classroom: A Survey of Teacher Roles and Responsibilities (Alison M. Stafford and…

  10. Radiation-reaction force on a small charged body to second order

    NASA Astrophysics Data System (ADS)

    Moxon, Jordan; Flanagan, Éanna

    2018-05-01

    In classical electrodynamics, an accelerating charged body emits radiation and experiences a corresponding radiation-reaction force, or self-force. We extend to higher order in the total charge a previous rigorous derivation of the electromagnetic self-force in flat spacetime by Gralla, Harte, and Wald. The method introduced by Gralla, Harte, and Wald computes the self-force from the Maxwell field equations and conservation of stress-energy in a limit where the charge, size, and mass of the body go to zero, and it does not require regularization of a singular self-field. For our higher-order computation, an adjustment of the definition of the mass of the body is necessary to avoid including self-energy from the electromagnetic field sourced by the body in the distant past. We derive the evolution equations for the mass, spin, and center-of-mass position of the body through second order. We derive, for the first time, the second-order acceleration dependence of the evolution of the spin (self-torque), as well as a mixing between the extended body effects and the acceleration-dependent effects on the overall body motion.

  11. Faith-Based Hospitals and Variation in Psychiatric Inpatient Length of Stay in California, 2002-2011.

    PubMed

    Banta, Jim E; McKinney, Ogbochi

    2016-06-01

    We examined current treatment patterns at faith-based hospitals. Psychiatric discharges from all community-based hospitals in California were obtained for 2002-2011, and a Behavioral Model of Health Services Utilization approach was used to study hospital religious affiliation and length of stay (LOS). During 10 years there were 1,976,893 psychiatric inpatient discharges, of which 14.3% were from faith-based nonprofit hospitals (eighteen Catholic, seven Seventh-day Adventist, and one Jewish hospital). Modest differences in patient characteristics and shorter LOS (7.5 vs. 8.3 days) were observed between faith-based and other hospitals. Multivariable negative binomial regression found shorter LOS at faith-based nonprofit hospitals (coefficient = -0.1169, p < 0.001, Wald χ2 = 55) and greater LOS at all nonprofits (coefficient = 1.5909, p < 0.001, Wald χ2 = 2755) as compared to local government-controlled hospitals. Faith-based hospitals provide a substantial and consistent amount of psychiatric care in California and may have slightly lower LOS after adjusting for patient and other hospital characteristics.

  12. Increased Automaticity and Altered Temporal Preparation Following Sleep Deprivation

    PubMed Central

    Kong, Danyang; Asplund, Christopher L.; Ling, Aiqing; Chee, Michael W.L.

    2015-01-01

    Study Objectives: Temporal expectation enables us to focus limited processing resources, thereby optimizing perceptual and motor processing for critical upcoming events. We investigated the effects of total sleep deprivation (TSD) on temporal expectation by evaluating the foreperiod and sequential effects during a psychomotor vigilance task (PVT). We also examined how these two measures were modulated by vulnerability to TSD. Design: Three 10-min visual PVT sessions using uniformly distributed foreperiods were conducted in the wake-maintenance zone the evening before sleep deprivation (ESD) and three more in the morning following approximately 22 h of TSD. TSD vulnerable and nonvulnerable groups were determined by a tertile split of participants based on the change in the number of behavioral lapses recorded during ESD and TSD. A subset of participants performed six additional 10-min modified auditory PVTs with exponentially distributed foreperiods during rested wakefulness (RW) and TSD to test the effect of temporal distribution on foreperiod and sequential effects. Setting: Sleep laboratory. Participants: There were 172 young healthy participants (90 males) with regular sleep patterns. Nineteen of these participants performed the modified auditory PVT. Measurements and Results: Despite behavioral lapses and slower response times, sleep deprived participants could still perceive the conditional probability of temporal events and modify their level of preparation accordingly. Both foreperiod and sequential effects were magnified following sleep deprivation in vulnerable individuals. Only the foreperiod effect increased in nonvulnerable individuals. Conclusions: The preservation of foreperiod and sequential effects suggests that implicit time perception and temporal preparedness are intact during total sleep deprivation. Individuals appear to reallocate their depleted preparatory resources to more probable event timings in ongoing trials, whereas vulnerable participants also rely more on automatic processes. Citation: Kong D, Asplund CL, Ling A, Chee MWL. Increased automaticity and altered temporal preparation following sleep deprivation. SLEEP 2015;38(8):1219–1227. PMID:25845689

  13. Sequential Analysis of Mastery Behavior in 6- and 12-Month-Old Infants.

    ERIC Educational Resources Information Center

    MacTurk, Robert H.; And Others

    1987-01-01

    Sequences of mastery behavior were analyzed in a sample of 67 infants 6 to 12 months old. Authors computed (a) frequencies of six categories of mastery behavior, transitional probabilities, and z scores for each behavior change, and (b) transitions from a mastery behavior to positive affect. Changes in frequencies and similarity in organization…

  14. Comparing and Combining Dichotomous and Polytomous Items with SPRT Procedure in Computerized Classification Testing.

    ERIC Educational Resources Information Center

    Lau, C. Allen; Wang, Tianyou

    The purposes of this study were to: (1) extend the sequential probability ratio testing (SPRT) procedure to polytomous item response theory (IRT) models in computerized classification testing (CCT); (2) compare polytomous items with dichotomous items using the SPRT procedure for their accuracy and efficiency; (3) study a direct approach in…
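A sketch of the dichotomous-item SPRT that this study extends (illustrative mastery/non-mastery response probabilities; the polytomous extension itself is not reproduced):

```python
import math

def sprt_classify(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Wald SPRT classifying an examinee as master vs. non-master from a
    stream of scored dichotomous responses (1 = correct). p0 and p1 are
    the assumed correct-response probabilities for non-masters and
    masters; alpha and beta bound the two misclassification rates."""
    upper = math.log((1 - beta) / alpha)   # classify as master above this
    lower = math.log(beta / (1 - alpha))   # classify as non-master below
    llr, n = 0.0, 0
    for n, x in enumerate(responses, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master", n
        if llr <= lower:
            return "non-master", n
    return "undecided", n
```

Testing stops as soon as a boundary is crossed, which is what makes SPRT-based classification tests shorter than estimation-based CATs; the polytomous case replaces the two-outcome likelihood increment with one over the item's score categories.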

  15. Importance and Effectiveness of Student Health Services at a South Texas University

    ERIC Educational Resources Information Center

    McCaig, Marilyn M.

    2013-01-01

    The study examined the health needs of students at a south Texas university and documented the utility of the student health center. The descriptive study employed a mixed methods explanatory sequential design (ESD). The non-probability sample consisted of 140 students who utilized the university's health center during the period of March 23-30,…

  16. EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.

    ERIC Educational Resources Information Center

    Frick, Theodore W.; And Others

    Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…

  17. Putting the C Back into the ABCs: A Multi-Year, Multi-Region Investigation of Condom Use by Ugandan Youths 2003–2010

    PubMed Central

    Davis, Rosemary; Ouma, Joseph; Lwanga, Stephen K.; Moxon, Sarah

    2014-01-01

    A major strategy for preventing transmission of HIV and other STIs is the consistent use of condoms during sexual intercourse. Condom use among youths is particularly important to reduce the number of new cases and the national prevalence. Condom use has often been promoted by the Uganda National AIDS Commission. Although a number of studies have established an association between condom use at one’s sexual debut and future condom use, few studies have explored this association over time, and whether the results are generalizable across multiple locations. This multi-time-point, multi-district study assesses the relationship between sexual debut and condom use and consistent use of condoms thereafter. Uganda has used Lot Quality Assurance Sampling surveys since 2003 to monitor district level HIV programs and improve access to HIV health services. This study includes 4518 sexually active youths interviewed at five time points (2003–2010) in up to 23 districts located across Uganda. Using logistic regression, we measured the association of condom use at first sexual intercourse on recent condom usage, controlling for several factors including: age, sex, education, marital status, age at first intercourse, geographical location, and survey year. The odds of condom use at last intercourse, using a condom at last intercourse with a non-regular partner, and consistently using a condom are, respectively, 9.63 (95% Wald CI = 8.03–11.56), 3.48 (95% Wald CI = 2.27–5.33), and 11.12 (95% Wald CI = 8.95–13.81) times more likely for those individuals using condoms during their sexual debut. These values did not decrease by more than 20% when controlling for potential confounders. The results suggest that HIV prevention programs should encourage condom use among youth during sexual debut. Success with this outcome may have a lasting influence on preventing HIV and other STIs later in life. PMID:24705381

  18. Putting the C back into the ABCs: a multi-year, multi-region investigation of condom use by Ugandan youths 2003-2010.

    PubMed

    Valadez, Joseph J; Jeffery, Caroline; Davis, Rosemary; Ouma, Joseph; Lwanga, Stephen K; Moxon, Sarah

    2014-01-01

    A major strategy for preventing transmission of HIV and other STIs is the consistent use of condoms during sexual intercourse. Condom use among youths is particularly important to reduce the number of new cases and the national prevalence. Condom use has often been promoted by the Uganda National AIDS Commission. Although a number of studies have established an association between condom use at one's sexual debut and future condom use, few studies have explored this association over time, and whether the results are generalizable across multiple locations. This multi-time-point, multi-district study assesses the relationship between sexual debut and condom use and consistent use of condoms thereafter. Uganda has used Lot Quality Assurance Sampling surveys since 2003 to monitor district level HIV programs and improve access to HIV health services. This study includes 4518 sexually active youths interviewed at five time points (2003-2010) in up to 23 districts located across Uganda. Using logistic regression, we measured the association of condom use at first sexual intercourse on recent condom usage, controlling for several factors including: age, sex, education, marital status, age at first intercourse, geographical location, and survey year. The odds of condom use at last intercourse, using a condom at last intercourse with a non-regular partner, and consistently using a condom are, respectively, 9.63 (95% Wald CI = 8.03-11.56), 3.48 (95% Wald CI = 2.27-5.33), and 11.12 (95% Wald CI = 8.95-13.81) times more likely for those individuals using condoms during their sexual debut. These values did not decrease by more than 20% when controlling for potential confounders. The results suggest that HIV prevention programs should encourage condom use among youth during sexual debut. Success with this outcome may have a lasting influence on preventing HIV and other STIs later in life.

  19. Randomized Controlled Trial of Family Therapy in Advanced Cancer Continued Into Bereavement.

    PubMed

    Kissane, David W; Zaider, Talia I; Li, Yuelin; Hichenberg, Shira; Schuler, Tammy; Lederberg, Marguerite; Lavelle, Lisa; Loeb, Rebecca; Del Gaudio, Francesca

    2016-06-01

    Systematic family-centered cancer care is needed. We conducted a randomized controlled trial of family therapy, delivered to families identified by screening to be at risk from dysfunctional relationships when one of their relatives has advanced cancer. Eligible patients with advanced cancer and their family members screened above the cut-off on the Family Relationships Index. After screening 1,488 patients or relatives at Memorial Sloan Kettering Cancer Center or three related community hospice programs, 620 patients (42%) were recruited, which represented 170 families. Families were stratified by three levels of family dysfunction (low communicating, low involvement, and high conflict) and randomly assigned to one of three arms: standard care or 6 or 10 sessions of a manualized family intervention. Primary outcomes were the Complicated Grief Inventory-Abbreviated (CGI) and Beck Depression Inventory-II (BDI-II). Generalized estimating equations allowed for clustered data in an intention-to-treat analysis. On the CGI, a significant treatment effect (Wald χ2 = 6.88; df = 2; P = .032) and treatment by family-type interaction was found (Wald χ2 = 20.64; df = 4; P < .001), and better outcomes resulted from 10 sessions compared with standard care for low-communicating and high-conflict groups compared with low-involvement families. Low-communicating families improved by 6 months of bereavement. In the standard care arm, 15.5% of the bereaved developed a prolonged grief disorder at 13 months of bereavement compared with 3.3% of those who received 10 sessions of intervention (Wald χ2 = 8.31; df = 2; P = .048). No significant treatment effects were found on the BDI-II. Family-focused therapy delivered to high-risk families during palliative care and continued into bereavement reduced the severity of complicated grief and the development of prolonged grief disorder. © 2016 by American Society of Clinical Oncology.

  20. Randomized Controlled Trial of Family Therapy in Advanced Cancer Continued Into Bereavement

    PubMed Central

    Zaider, Talia I.; Li, Yuelin; Hichenberg, Shira; Schuler, Tammy; Lederberg, Marguerite; Lavelle, Lisa; Loeb, Rebecca; Del Gaudio, Francesca

    2016-01-01

    Purpose Systematic family-centered cancer care is needed. We conducted a randomized controlled trial of family therapy, delivered to families identified by screening to be at risk from dysfunctional relationships when one of their relatives has advanced cancer. Patients and Methods Eligible patients with advanced cancer and their family members screened above the cut-off on the Family Relationships Index. After screening 1,488 patients or relatives at Memorial Sloan Kettering Cancer Center or three related community hospice programs, 620 patients (42%) were recruited, which represented 170 families. Families were stratified by three levels of family dysfunction (low communicating, low involvement, and high conflict) and randomly assigned to one of three arms: standard care or 6 or 10 sessions of a manualized family intervention. Primary outcomes were the Complicated Grief Inventory-Abbreviated (CGI) and Beck Depression Inventory-II (BDI-II). Generalized estimating equations allowed for clustered data in an intention-to-treat analysis. Results On the CGI, a significant treatment effect (Wald χ2 = 6.88; df = 2; P = .032) and treatment by family-type interaction was found (Wald χ2 = 20.64; df = 4; P < .001), and better outcomes resulted from 10 sessions compared with standard care for low-communicating and high-conflict groups compared with low-involvement families. Low-communicating families improved by 6 months of bereavement. In the standard care arm, 15.5% of the bereaved developed a prolonged grief disorder at 13 months of bereavement compared with 3.3% of those who received 10 sessions of intervention (Wald χ2 = 8.31; df = 2; P = .048). No significant treatment effects were found on the BDI-II. Conclusion Family-focused therapy delivered to high-risk families during palliative care and continued into bereavement reduced the severity of complicated grief and the development of prolonged grief disorder. PMID:27069071

  1. Variables Measured during Cardiopulmonary Exercise Testing as Predictors of Mortality in Chronic Systolic Heart Failure

    PubMed Central

    Keteyian, Steven J.; Patel, Mahesh; Kraus, William E.; Brawner, Clinton A.; McConnell, Timothy R.; Piña, Ileana L.; Leifer, Eric S.; Fleg, Jerome L.; Blackburn, Gordon; Fonarow, Gregg C.; Chase, Paul J.; Piner, Lucy; Vest, Marianne; O’Connor, Christopher M.; Ehrman, Jonathan K.; Walsh, Mary N.; Ewald, Gregory; Bensimhon, Dan; Russell, Stuart D.

    2015-01-01

    BACKGROUND Data from a cardiopulmonary exercise (CPX) test are used to determine prognosis in patients with chronic heart failure (HF). However, few published studies have simultaneously compared the relative prognostic strength of multiple CPX variables. OBJECTIVES We sought to describe the strength of the association among variables measured during a CPX test and all-cause mortality in patients with HF with reduced ejection fraction (HFrEF), including the influence of sex and patient effort, as measured by respiratory exchange ratio (RER). METHODS Among patients (n = 2,100, 29% women) enrolled in the HF-ACTION (Heart Failure: A Controlled Trial Investigating Outcomes of Exercise Training) trial, 10 CPX test variables measured at baseline (e.g., peak oxygen uptake [VO2], exercise duration, percent predicted peak VO2 [%ppVO2], ventilatory efficiency) were examined. RESULTS Over a median follow-up of 32 months, there were 357 deaths. All CPX variables, except RER, were related to all-cause mortality (all p < 0.0001). Both %ppVO2 and exercise duration were equally able to predict (Wald χ2: ~141) and discriminate (c-index: 0.69) mortality. Peak VO2 (mL·kg−1·min−1) was the strongest predictor of mortality among men (Wald χ2: 129) and exercise duration among women (Wald χ2: 41). Multivariable analyses showed that %ppVO2, exercise duration, and peak VO2 (mL·kg−1·min−1) were similarly able to predict and discriminate mortality. In men, a 10% 1-year mortality rate corresponded to a peak VO2 of 10.9 mL·kg−1·min−1 versus 5.3 mL·kg−1·min−1 in women. CONCLUSIONS Peak VO2, exercise duration, and %ppVO2 carried the strongest ability to predict and discriminate the likelihood of death in patients with HFrEF. The prognosis associated with a given peak VO2 differed by sex. PMID:26892413

  2. The Impact of Micronutrient Supplementation in Alcohol-Exposed Pregnancies on Information Processing Skills in Ukrainian Infants

    PubMed Central

    Kable, J. A.; Coles, C. D.; Keen, C. L.; Uriu-Adams, J. Y.; Jones, K. L.; Yevtushok, L.; Kulikovsky, Y.; Wertelecki, W.; Pedersen, T. L.; Chambers, C. D.

    2015-01-01

    Objectives The potential of micronutrients to ameliorate the impact of prenatal alcohol exposure was explored in a clinical trial conducted in Ukraine. Cardiac orienting responses during a habituation/dishabituation learning paradigm were obtained from 6–12-month-olds to assess neurophysiological encoding and memory of environmental events. Materials and methods Women who differed in prenatal alcohol use were recruited during pregnancy and assigned to a group (no study-provided supplements, multivitamin/mineral supplement, or multivitamin/mineral supplement plus choline supplement). An infant habituation/dishabituation paradigm was used to assess outcomes in the offspring. Ten trials were used for the habituation and five for the dishabituation condition. Heart rate was collected for 30 sec prior to stimulus onset and then 12 sec post-stimulus onset. Difference values (ΔHR) were computed for the first three trials of each condition and aggregated for analysis. Gestational blood samples were collected to assess maternal nutritional status and changes as a function of the intervention. Results Choline supplementation resulted in a greater ΔHR on the visual habituation (Wald Chi-Square (1, 149) = 10.9, p < .001, eta-squared = .043) trials for all infants and for the infants with no prenatal alcohol exposure on the dishabituation (Wald Chi-Square (1, 139) = 6.1, p < .013, eta-squared = .065) trials. The latency of the response was reduced in both conditions (Habituation: Wald Chi-Square (1, 150) = 9.0, p < .003, eta-squared = .056; Dishabituation: Wald Chi-Square (1, 137) = 4.9, p < .027, eta-squared = .032) for all infants whose mothers received choline supplementation. Change in gestational choline level was positively related (r = .19) to ΔHR during habituation trials, and levels of one choline metabolite, dimethylglycine (DMG), predicted ΔHR during habituation trials (r = .23) and latency of responses (r = −.20). 
A trend was found between DMG and ΔHR on the dishabituation trials (r = .19) and latency of the response (r = −.18). Multivitamin/mineral or multivitamin/mineral plus choline supplementation did not significantly affect cardiac orienting responses to the auditory stimuli. Conclusion Choline supplementation when administered together with routinely recommended multivitamin/mineral prenatal supplements during pregnancy may provide a beneficial impact on basic learning mechanisms involved in encoding and memory of environmental events in alcohol-exposed pregnancies as well as non- or low alcohol-exposed pregnancies. Changes in nutrient status of the mother suggested that this process may be mediated by the breakdown of choline to betaine and then to DMG. One mechanism by which choline supplementation may positively affect brain development is through prevention of fetal alcohol-related depletion of DMG, a metabolic nutrient that can protect against overproduction of glycine, during critical periods of neurogenesis. PMID:26493109

  3. The moderating role of parental smoking on their children's attitudes toward smoking among a predominantly minority sample: a cross-sectional analysis.

    PubMed

    Wilkinson, Anna V; Shete, Sanjay; Prokhorov, Alexander V

    2008-07-14

    In general, having a parent who smokes or smoked is a strong and consistent predictor of smoking initiation among their children, while authoritative parenting style, open communication that demonstrates mutual respect between child and parent, and parental expectations not to smoke are protective. It has been hypothesized that parental smoking affects their children's smoking initiation through both imitation of the behavior and effects on attitudes toward smoking. The goals of the current analysis were to examine these two potential mechanisms. In 2003, 1,417 high school students in Houston, Texas, completed a cross-sectional survey as part of the evaluation of an interactive smoking prevention and cessation program delivered via CD-ROM. To assess the relationship between the number of parents who currently smoke and children's smoking status, we completed an unconditional logistic regression. To determine whether the attitudes that children of smokers hold toward smoking are significantly more positive than the attitudes of children of non-smokers, we examined whether the parents' smoking status moderated the relationship between children's attitudes toward smoking and their ever smoking using unconditional logistic regressions. Compared to participants whose parents did not currently smoke, participants who reported that one or both parents currently smoke had increased odds of ever smoking (OR = 1.31; 95% CI: 1.03-1.68; Wald chi2 = 4.78 (df = 1) p = 0.03 and OR = 2.16; 95% CI: 1.51-3.10; Wald chi2 = 17.80 (df = 1) p < 0.001, respectively). In addition, the relationship between attitudes and ever smoking was stronger among participants when at least one parent currently smokes (OR = 2.50; 95% CI: 1.96-3.19; Wald chi2 = 54.71 (df = 1) p < 0.001) than among participants whose parents did not smoke (OR = 1.72; 95% CI: 1.40-2.12; Wald chi2 = 26.45 (df = 1) p < 0.001). 
Children of smokers were more likely to smoke and reported more favorable attitudes toward smoking compared to children of non-smokers. One interpretation of our findings is that parental smoking not only directly influences behavior; it also moderates their children's attitudes towards smoking and thereby impacts their children's behavior. Our results demonstrate a continued need for primary prevention smoking interventions to be sensitive to the family context. They also underscore the importance of discussing parental smoking as a risk factor for smoking initiation, regardless of ethnicity, and of tailoring prevention messages to account for the influence that parental smoking status may have on the smoking attitudes and the associated normative beliefs.
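
    For readers unfamiliar with the statistics reported above: in a logistic regression, the odds ratio is exp(β) and the Wald chi-square for a coefficient is (β/SE)². The sketch below fits a one-predictor logistic model by Newton-Raphson to an invented 2×2 table (exposure = a parent smokes, outcome = child ever smoked); the counts are hypothetical, not the study's data:

```python
import math

def fit_logistic(x, y, iters=25):
    """Fit P(y=1) = 1/(1 + exp(-(b0 + b1*x))) by Newton-Raphson.
    Returns (b0, b1) and their standard errors, taken from the inverse
    of the observed information matrix."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0            # score (gradient of the log-likelihood)
        h00 = h01 = h11 = 0.0    # observed information matrix
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
        se0 = math.sqrt(h11 / det)
        se1 = math.sqrt(h00 / det)
    return (b0, b1), (se0, se1)

# Hypothetical counts: 40 exposed (20 ever smoked), 60 unexposed (18 ever smoked)
x = [1] * 40 + [0] * 60
y = [1] * 20 + [0] * 20 + [1] * 18 + [0] * 42

(b0, b1), (se0, se1) = fit_logistic(x, y)
or_ = math.exp(b1)                  # odds ratio
wald = (b1 / se1) ** 2              # Wald chi-square, df = 1
ci_lo, ci_hi = math.exp(b1 - 1.96 * se1), math.exp(b1 + 1.96 * se1)
```

    For a single binary predictor the fitted odds ratio equals the sample odds ratio, here (20×42)/(20×18) ≈ 2.33, and the 95% CI excluding 1 corresponds to a Wald chi-square above the 3.84 critical value.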

  4. The moderating role of parental smoking on their children's attitudes toward smoking among a predominantly minority sample: a cross-sectional analysis

    PubMed Central

    Wilkinson, Anna V; Shete, Sanjay; Prokhorov, Alexander V

    2008-01-01

    Background In general, having a parent who smokes or smoked is a strong and consistent predictor of smoking initiation among their children, while authoritative parenting style, open communication that demonstrates mutual respect between child and parent, and parental expectations not to smoke are protective. It has been hypothesized that parental smoking affects their children's smoking initiation through both imitation of the behavior and effects on attitudes toward smoking. The goals of the current analysis were to examine these two potential mechanisms. Methods In 2003, 1,417 high school students in Houston, Texas, completed a cross-sectional survey as part of the evaluation of an interactive smoking prevention and cessation program delivered via CD-ROM. To assess the relationship between the number of parents who currently smoke and children's smoking status, we completed an unconditional logistic regression. To determine whether the attitudes that children of smokers hold toward smoking are significantly more positive than the attitudes of children of non-smokers, we examined whether the parents' smoking status moderated the relationship between children's attitudes toward smoking and their ever smoking using unconditional logistic regressions. Results Compared to participants whose parents did not currently smoke, participants who reported that one or both parents currently smoke had increased odds of ever smoking (OR = 1.31; 95% CI: 1.03–1.68; Wald χ2 = 4.78 (df = 1) p = 0.03 and OR = 2.16; 95% CI: 1.51–3.10; Wald χ2 = 17.80 (df = 1) p < 0.001, respectively). In addition, the relationship between attitudes and ever smoking was stronger among participants when at least one parent currently smokes (OR = 2.50; 95% CI: 1.96–3.19; Wald χ2 = 54.71 (df = 1) p < 0.001) than among participants whose parents did not smoke (OR = 1.72; 95% CI: 1.40–2.12; Wald χ2 = 26.45 (df = 1) p < 0.001). 
Conclusion Children of smokers were more likely to smoke and reported more favorable attitudes toward smoking compared to children of non-smokers. One interpretation of our findings is that parental smoking not only directly influences behavior; it also moderates their children's attitudes towards smoking and thereby impacts their children's behavior. Our results demonstrate a continued need for primary prevention smoking interventions to be sensitive to the family context. They also underscore the importance of discussing parental smoking as a risk factor for smoking initiation, regardless of ethnicity, and of tailoring prevention messages to account for the influence that parental smoking status may have on the smoking attitudes and the associated normative beliefs. PMID:18625061

  5. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
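
    The decomposition described above refines the plain frequency (likelihood) estimate of the conditional probability function. The sketch below shows only that baseline estimate and the successive-iteration generation step, not the authors' memory-function expansion; the order-2 binary chain is invented for illustration:

```python
import random
from collections import Counter

def estimate_conditional(seq, m):
    """Plain frequency estimate of P(a | w) for an order-m symbolic Markov
    chain: count each length-m context w and the symbol a that follows it."""
    ctx, pair = Counter(), Counter()
    for i in range(len(seq) - m):
        w, a = tuple(seq[i:i + m]), seq[i + m]
        ctx[w] += 1
        pair[w, a] += 1
    return {(w, a): c / ctx[w] for (w, a), c in pair.items()}

def generate(cond, m, alphabet, length, seed=0):
    """Construct an artificial sequence by successive iterations: sample each
    new symbol from the estimated distribution given the last m symbols."""
    rng = random.Random(seed)
    out = list(rng.choices(alphabet, k=m))   # arbitrary initial context
    while len(out) < length:
        w = tuple(out[-m:])
        weights = [cond.get((w, a), 1.0) for a in alphabet]  # uniform fallback
        out.append(rng.choices(alphabet, weights=weights)[0])
    return out

# Toy order-2 binary chain: each symbol repeats the one two steps back w.p. 0.8
rng = random.Random(1)
seq = [0, 1]
for _ in range(5000):
    seq.append(seq[-2] if rng.random() < 0.8 else 1 - seq[-2])

cond = estimate_conditional(seq, m=2)        # cond[((0, 1), 0)] should be near 0.8
surrogate = generate(cond, m=2, alphabet=[0, 1], length=1000)
```
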

  6. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  7. Sequential ranging integration times in the presence of CW interference in the ranging channel

    NASA Technical Reports Server (NTRS)

    Mathur, Ashok; Nguyen, Tien

    1986-01-01

    The Deep Space Network (DSN), managed by the Jet Propulsion Laboratory for NASA, is used primarily for communication with interplanetary spacecraft. The high sensitivity required to achieve planetary communications makes the DSN very susceptible to radio-frequency interference (RFI). In this paper, an analytical model is presented of the performance degradation of the DSN sequential ranging subsystem in the presence of downlink CW interference in the ranging channel. A trade-off between the ranging component integration times and the ranging signal-to-noise ratio to achieve a desired level of range measurement accuracy and the probability of error in the code components is also presented. Numerical results presented illustrate the required trade-offs under various interference conditions.

  8. Combined Parameter and State Estimation Problem in a Complex Domain: RF Hyperthermia Treatment Using Nanoparticles

    NASA Astrophysics Data System (ADS)

    Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.

    2016-09-01

    Particle filter methods have been widely used to solve inverse problems by sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, the measurements, and the parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs, and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate excellent agreement between estimated and exact values.
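
    A minimal sketch of the underlying idea, combined state and parameter estimation with a bootstrap particle filter, is given below for a scalar linear-Gaussian toy model (not the RF hyperthermia model). The unknown parameter is appended to the state and kept diverse with a small artificial jitter, a common heuristic:

```python
import math
import random

rng = random.Random(42)

def simulate(a_true=0.9, q=0.3, r=0.5, steps=200):
    """Generate data from x_t = a*x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2)."""
    x, ys = 0.0, []
    for _ in range(steps):
        x = a_true * x + rng.gauss(0.0, q)
        ys.append(x + rng.gauss(0.0, r))
    return ys

def particle_filter(ys, n=2000, q=0.3, r=0.5, jitter=0.005):
    """Bootstrap particle filter with the unknown parameter a appended to
    the state vector; the jitter keeps parameter particles from collapsing."""
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]     # state particles
    as_ = [rng.uniform(0.0, 1.0) for _ in range(n)]  # parameter particles
    for y in ys:
        # predict: propagate states with current parameter, jitter parameters
        xs = [a * x + rng.gauss(0.0, q) for x, a in zip(xs, as_)]
        as_ = [a + rng.gauss(0.0, jitter) for a in as_]
        # update: weight particles by the Gaussian measurement likelihood
        ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in xs]
        total = sum(ws)
        # multinomial resampling
        idx = rng.choices(range(n), weights=[w / total for w in ws], k=n)
        xs = [xs[i] for i in idx]
        as_ = [as_[i] for i in idx]
    return sum(xs) / n, sum(as_) / n

ys = simulate()
x_est, a_est = particle_filter(ys)   # posterior means: final state and parameter a
```
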

  9. Advanced Degrees of Debt: Analyzing the Patterns and Determinants of Graduate Student Borrowing

    ERIC Educational Resources Information Center

    Belasco, Andrew S.; Trivette, Michael J.; Webber, Karen L.

    2014-01-01

    Despite record student debt and the growing importance of graduate education, little is known about what drives graduate student borrowing. In response to that research gap, this study draws on several national data sources to analyze the patterns and predictors of education-related debt among graduate students specifically. Adjusted Wald tests…

  10. Transitioning from Elementary School to Middle School: The Ecology of Black Males' Behavior

    ERIC Educational Resources Information Center

    Mundy, Alma Christienne

    2014-01-01

    The purpose of this mixed method study is to explain the ecology Black males experience as they transition from elementary school to middle school in terms of behavior. The Black male graduation rate is well below 50% nationally (Orfield, Losen, Wald, & Swanson, 2004; Schott Foundation for Public Education, 2010). Graduating from high school…

  11. Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models

    ERIC Educational Resources Information Center

    Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles

    2012-01-01

    This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
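
    As an illustration of the kind of power computation the article derives, the sketch below approximates the power of a one-sided Wald test under Warner's randomized response model; the prevalences, design probability p, and sample size are invented for illustration:

```python
from statistics import NormalDist

def warner_power(pi0, pi1, p, n, alpha=0.05):
    """Approximate power of a one-sided Wald test of H0: pi = pi0 against
    pi1 > pi0 under Warner's randomized response design, in which each
    respondent answers the sensitive question with probability p and its
    complement otherwise (requires p != 0.5)."""
    norm = NormalDist()

    def se(pi):
        lam = p * pi + (1.0 - p) * (1.0 - pi)   # P(respondent answers "yes")
        return ((lam * (1.0 - lam) / n) ** 0.5) / abs(2.0 * p - 1.0)

    z_alpha = norm.inv_cdf(1.0 - alpha)
    return norm.cdf(((pi1 - pi0) - z_alpha * se(pi0)) / se(pi1))

# Power to detect a 10% doping prevalence against H0: 2%, with p = 0.7, n = 500
print(round(warner_power(0.02, 0.10, 0.7, 500), 3))
```

    The randomization term (2p − 1)⁻¹ inflates the standard error, which is why randomized response designs need far larger samples than direct questioning for the same power.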

  12. Assessing the Item Response Theory with Covariate (IRT-C) Procedure for Ascertaining Differential Item Functioning

    ERIC Educational Resources Information Center

    Tay, Louis; Vermunt, Jeroen K.; Wang, Chun

    2013-01-01

    We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…

  13. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  14. How Do High School Students Solve Probability Problems? A Mixed Methods Study on Probabilistic Reasoning

    ERIC Educational Resources Information Center

    Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick

    2018-01-01

    When studying a complex research phenomenon, a mixed methods design allows to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…

  15. Is a Basketball Free-Throw Sequence Nonrandom? A Group Exercise for Undergraduate Statistics Students

    ERIC Educational Resources Information Center

    Adolph, Stephen C.

    2007-01-01

    I describe a group exercise that I give to my undergraduate biostatistics class. The exercise involves analyzing a series of 200 consecutive basketball free-throw attempts to determine whether there is any evidence for sequential dependence in the probability of making a free-throw. The students are given the exercise before they have learned the…
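
    One standard way to attack this exercise is the Wald-Wolfowitz runs test: under independence, the number of runs in a binary sequence has a known mean and variance, so too few runs signals streakiness. A sketch with an invented, strongly streaky shot record:

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test for a binary sequence: compares the observed
    number of runs to its expectation under independence; returns the z
    statistic and a two-sided p-value from the normal approximation."""
    n1 = sum(seq)
    n2 = len(seq) - n1
    n = n1 + n2
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1.0))
    z = (runs - mean) / math.sqrt(var)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Invented, strongly streaky record: long runs of makes (1) and misses (0)
shots = [1] * 20 + [0] * 10 + [1] * 15 + [0] * 5
z, p = runs_test(shots)   # far fewer runs than independence predicts
```
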

  16. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of the inverse problem, especially estimation of the unknown model parameters needed to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool to combine observed data with prior knowledge to gain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing the atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of the model best fitted to the observable data should be found.
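
    The simplest member of the ABC family, plain rejection ABC, already shows the structure that S-ABC accelerates by sequentially tightening the tolerance over generations. The sketch below localizes a source in a one-dimensional toy dispersion model; the forward model, priors, and tolerance are invented for illustration:

```python
import math
import random

rng = random.Random(7)

def simulate_sensors(q, x_src, sensors, noise=0.05):
    """Toy forward model: concentration falls off with squared distance
    from the source, plus Gaussian sensor noise."""
    return [q / (1.0 + (x - x_src) ** 2) + rng.gauss(0.0, noise) for x in sensors]

def abc_rejection(observed, sensors, n_draws=20000, eps=0.3):
    """Plain ABC rejection: draw parameters from the prior, run the forward
    model, and keep draws whose simulated readings land within eps of the
    observations. The kept draws approximate the posterior."""
    accepted = []
    for _ in range(n_draws):
        q = rng.uniform(0.5, 2.0)    # prior on release rate
        x = rng.uniform(-5.0, 5.0)   # prior on source location
        sim = simulate_sensors(q, x, sensors)
        dist = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed)))
        if dist < eps:
            accepted.append((q, x))
    return accepted

sensors = [-3.0, -1.0, 0.0, 1.0, 3.0]
observed = simulate_sensors(1.2, 0.5, sensors)   # "ground truth": q = 1.2, x = 0.5
posterior = abc_rejection(observed, sensors)
q_mean = sum(q for q, _ in posterior) / len(posterior)
x_mean = sum(x for _, x in posterior) / len(posterior)
```
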

  17. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
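
    A concrete instance of such a rule is a biased-coin up-and-down design in the style of Durham and Flournoy: stepping down after each toxicity and stepping up with probability b = Γ/(1 − Γ) after each non-toxicity centers assignments around the dose whose toxicity probability is the target quantile Γ. A sketch with a hypothetical dose-toxicity curve:

```python
import random

rng = random.Random(3)

def biased_coin_updown(tox_probs, target=1.0 / 3.0, n_patients=300):
    """Sketch of a biased-coin up-and-down rule: step down one dose after a
    toxicity; after a non-toxicity, step up one dose with probability
    b = target/(1 - target) (requires target < 0.5), otherwise repeat the
    dose. Returns how often each dose level was assigned."""
    b = target / (1.0 - target)
    level = 0
    visits = [0] * len(tox_probs)
    for _ in range(n_patients):
        visits[level] += 1
        if rng.random() < tox_probs[level]:        # toxicity observed
            level = max(level - 1, 0)
        elif rng.random() < b:                     # biased coin says "up"
            level = min(level + 1, len(tox_probs) - 1)
    return visits

# Hypothetical dose-toxicity curve; dose index 2 is nearest the target quantile
tox = [0.05, 0.15, 0.30, 0.55, 0.80]
visits = biased_coin_updown(tox)   # assignments should cluster near dose 2
```
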

  18. Language experience changes subsequent learning.

    PubMed

    Onnis, Luca; Thiessen, Erik

    2013-02-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.
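
    The forward and backward probabilities between adjacent elements mentioned above are just the two directions of conditioning on an adjacent pair. A sketch with an invented syllable stream:

```python
from collections import Counter

def transitional_probs(stream):
    """Forward TP(x -> y) = P(y | x) = count(xy) / count(x in first position);
    backward TP(x -> y) = P(x | y) = count(xy) / count(y in second position)."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    seconds = Counter(stream[1:])
    forward = {(x, y): c / firsts[x] for (x, y), c in pairs.items()}
    backward = {(x, y): c / seconds[y] for (x, y), c in pairs.items()}
    return forward, backward

# Invented syllable stream built from two "words", ba-bi and ku-po
stream = ["ba", "bi", "ku", "po", "ba", "bi", "ba", "bi", "ku", "po", "ba", "bi"]
fwd, bwd = transitional_probs(stream)
# the within-word transition ba -> bi is perfectly predictive going forward
```

    A learner sensitive to forward probabilities would segment after "bi"; one sensitive to backward probabilities can prefer a different parse of the same stream, which is how the two orthogonal parses in the study were constructed.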

  19. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
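
    The core mechanics that ABSIS builds on, simulating with biased propensities and correcting each step by the likelihood ratio of true to biased dynamics, can be sketched for a simple birth-death process. The rates, bias factor, and threshold below are invented, and this is generic weighted SSA, not the ABSIS bias-selection scheme itself:

```python
import math
import random

rng = random.Random(11)

def rare_event_prob(bias, n_runs=20000, k_birth=1.0, k_death=0.15,
                    threshold=15, t_max=10.0):
    """Importance-sampled SSA estimate of P(population reaches `threshold`
    before t_max) in a birth-death process (birth rate k_birth, per-capita
    death rate k_death). Simulation inflates the birth propensity by `bias`;
    each step multiplies the path weight by the likelihood ratio of the true
    dynamics to the biased ones, keeping the estimator unbiased."""
    total = 0.0
    for _ in range(n_runs):
        x, t, w = 0, 0.0, 1.0
        while t < t_max and x < threshold:
            a_b, a_d = k_birth, k_death * x         # true propensities
            b_b, b_d = bias * k_birth, k_death * x  # biased propensities
            a_tot, b_tot = a_b + a_d, b_b + b_d
            tau = rng.expovariate(b_tot)            # waiting time under bias
            if t + tau > t_max:
                break                               # such paths contribute 0
            birth = rng.random() < b_b / b_tot      # reaction choice under bias
            a_j, b_j = (a_b, b_b) if birth else (a_d, b_d)
            # step likelihood ratio: (a_j e^{-a_tot*tau}) / (b_j e^{-b_tot*tau})
            w *= (a_j / b_j) * math.exp(-(a_tot - b_tot) * tau)
            t += tau
            x += 1 if birth else -1
        if x >= threshold:
            total += w
    return total / n_runs

estimate = rare_event_prob(bias=2.0)
```

    With bias = 1.0 this reduces to plain SSA with unit weights; the point of the bias is that far more runs reach the threshold, each down-weighted by its likelihood ratio.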

  20. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  1. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  2. The Direct and Indirect Effects of Word Reading and Vocabulary on Adolescents' Reading Comprehension: Comparing Struggling and Adequate Comprehenders

    ERIC Educational Resources Information Center

    Oslund, Eric L.; Clemens, Nathan H.; Simmons, Deborah C.; Simmons, Leslie E.

    2018-01-01

    The current study examined statistically significant differences between struggling and adequate readers using a multicomponent model of reading comprehension in 796 sixth through eighth graders, with a primary focus on word reading and vocabulary. Path analyses and Wald tests were used to investigate the direct and indirect relations of word…

  3. Perception of Parental Bonds and Suicide Intent Among Egyptian Adolescents.

    PubMed

    Sharaf, Amira Y; Thompson, Elaine A; Abd El-Salam, Hoda F

    2016-04-01

    Suicidal adolescents, compared to their nonsuicidal peers, tend to perceive their parents as less "caring" and more "controlling"-which characterizes the "affectionless control" parenting style. Research findings are inconsistent regarding the distinct influence of mother versus father parenting on youth suicide intent; moreover, the influence of parents' joint parenting styles on suicide intent has not been investigated. Using a cross-sectional design and large sample (N = 150 youth, 13-21 years old), currently hospitalized in a treatment center in Egypt for a recent suicide attempt, data were collected using the Suicide Intent Scale, Parental Bonding Instrument, and Center for Epidemiologic Studies Depression Scale. Seventy percent of youth reported high suicide intent. Mother and father parenting styles, assessed independently, were not associated with adolescent suicide intent. The joint effect of both parents' parenting style, however, was positively associated with suicide intent (Wald χ2 = 8.79, p = .03). Suicide intent was stronger among adolescents who experienced neglectful compared with optimal parenting style (B = 1.93, Wald χ2 = 4.28, p = .04). The findings have direct implications for mental health nursing interventions, signaling the critical need to engage both parents in family-based interventions to address youth suicidal behavior. © 2016 Wiley Periodicals, Inc.

  4. Collisional super-Penrose process and Wald inequalities

    NASA Astrophysics Data System (ADS)

    Tanatarov, Igor V.; Zaslavskii, Oleg B.

    2017-09-01

    We consider collision of two massive particles in the equatorial plane of an axially symmetric stationary spacetime that produces two massless particles afterwards. It is implied that the horizon is absent but there is a naked singularity or another potential barrier that makes possible the head-on collision. The relationship between the energy in the center of mass frame E_{c.m.} and the Killing energy E measured at infinity is analyzed. It follows immediately from the Wald inequalities that unbounded E is possible for unbounded E_{c.m.} only. This can be realized if the spacetime is close to the threshold of the horizon formation. Different types of spacetimes (black holes, naked singularities, wormholes) correspond to different possible relations between E_{c.m.} and E. We develop a general approach that enables us to describe the collision process in the frames of the stationary observer and zero angular momentum observer. The escape cone and escape fraction are derived. A simple explanation of the existence of the bright spot is given. For the particular case of the Kerr metric, our results agree with the previous ones found in Patil et al. (Phys Rev D 93:104015, 2016).

  5. Predictive factors of clinical response in steroid-refractory ulcerative colitis treated with granulocyte-monocyte apheresis

    PubMed Central

    D'Ovidio, Valeria; Meo, Donatella; Viscido, Angelo; Bresci, Giampaolo; Vernia, Piero; Caprilli, Renzo

    2011-01-01

    AIM: To identify factors predicting the clinical response of ulcerative colitis patients to granulocyte-monocyte apheresis (GMA). METHODS: Sixty-nine ulcerative colitis patients (39 F, 30 M) dependent upon/refractory to steroids were treated with GMA. Steroid dependency, clinical activity index (CAI), C reactive protein (CRP) level, erythrocyte sedimentation rate (ESR), values at baseline, use of immunosuppressant, duration of disease, and age and extent of disease were considered for statistical analysis as predictive factors of clinical response. Univariate and multivariate logistic regression models were used. RESULTS: In the univariate analysis, CAI (P = 0.039) and ESR (P = 0.017) levels at baseline were singled out as predictive of clinical remission. In the multivariate analysis steroid dependency [Odds ratio (OR) = 0.390, 95% Confidence interval (CI): 0.176-0.865, Wald 5.361, P = 0.0160] and low CAI levels at baseline (4 < CAI < 7) (OR = 0.770, 95% CI: 0.425-1.394, Wald 3.747, P = 0.028) proved to be effective as factors predicting clinical response. CONCLUSION: GMA may be a valid therapeutic option for steroid-dependent ulcerative colitis patients with mild-moderate disease and its clinical efficacy seems to persist for 12 mo. PMID:21528055

  6. Adverse Clinical Outcome Associated With Mutations That Typify African American Colorectal Cancers.

    PubMed

    Wang, Zhenghe; Li, Li; Guda, Kishore; Chen, Zhengyi; Barnholtz-Sloan, Jill; Park, Young Soo; Markowitz, Sanford D; Willis, Joseph

    2016-12-01

African Americans have the highest incidence and mortality from colorectal cancer (CRC) of any US racial group. We recently described a panel of 15 genes that are statistically significantly more likely to be mutated in CRCs from African Americans than in Caucasians (AA-CRC genes). The current study investigated the outcomes associated with these mutations in African American CRCs (AA-CRCs). In a cohort of 66 patients with stage I-III CRCs, eight of 27 CRCs with AA-CRC gene mutations (Mut+) developed metastatic disease vs only four of 39 mutation-negative (Mut-) cases (P = .03, Cox regression model with two-sided Wald test). Moreover, among stage III cases (n = 33), Mut+ cancers were nearly three times more likely to relapse than Mut- cases (7 of 15 Mut+ vs 3 of 18 Mut-; P = .03, Cox regression model with two-sided Wald test). AA-CRC mutations may thus define a high-risk subset of CRCs that contributes to the overall disparity in CRC outcomes observed in African Americans. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Ultrasensitive surveillance of sensors and processes

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    2001-01-01

A method and apparatus for monitoring a source of data to determine the operating state of a working system. The method includes determining a sensor (or data source) arrangement associated with monitoring the system; activating a first method that performs a sequential probability ratio test if the arrangement includes a single data (sensor) source; activating a second method that performs a regression sequential probability ratio test if the arrangement includes a pair of sensors (data sources) whose signals are linearly or non-linearly related; activating a third method that performs a bounded angle ratio test if the arrangement includes multiple sensors; and utilizing at least one of the first, second, and third methods to accumulate sensor signals and determine the operating state of the system.
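The first branch of this patent applies a classic Wald sequential probability ratio test to a single sensor signal. The patent text does not spell out the computation; as a hedged sketch under the common assumption of i.i.d. Gaussian observations with known variance and a hypothesized mean shift, a minimal SPRT might look like:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT deciding between H0: mean = mu0 and H1: mean = mu1
    for i.i.d. Gaussian observations with known standard deviation sigma.

    Returns (decision, samples consumed), where decision is "H0", "H1",
    or "undecided" if the data run out before a threshold is crossed.
    """
    upper = math.log((1 - beta) / alpha)   # crossing above accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing below accepts H0
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        # Gaussian log-likelihood-ratio increment for one observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n
```

With alpha = beta = 0.01 the thresholds are ±log(99) ≈ ±4.6, so strongly shifted data terminate the test after only a handful of samples; this is only a sketch of the generic test, not the patented implementation.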

  8. Ultrasensitive surveillance of sensors and processes

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    1999-01-01

A method and apparatus for monitoring a source of data to determine the operating state of a working system. The method includes determining a sensor (or data source) arrangement associated with monitoring the system; activating a first method that performs a sequential probability ratio test if the arrangement includes a single data (sensor) source; activating a second method that performs a regression sequential probability ratio test if the arrangement includes a pair of sensors (data sources) whose signals are linearly or non-linearly related; activating a third method that performs a bounded angle ratio test if the arrangement includes multiple sensors; and utilizing at least one of the first, second, and third methods to accumulate sensor signals and determine the operating state of the system.

  9. Hybrid and concatenated coding applications.

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Odenwalder, J. P.

    1972-01-01

Results are presented of a study evaluating the performance and implementation complexity of concatenated and hybrid coding systems for moderate-speed deep-space applications. It is shown that, with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length 8, rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.

  10. Convolutional coding at 50 Mbps for the Shuttle Ku-band return link

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.

    1976-01-01

Error-correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding was chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs considered in selecting the multiplexed Viterbi decoder approach for this application.

  11. The Impact of Optional Flexible Year Program on Texas Assessment of Knowledge and Skills Test Scores of Fifth Grade Students

    ERIC Educational Resources Information Center

    Longbotham, Pamela J.

    2012-01-01

    The study examined the impact of participation in an optional flexible year program (OFYP) on academic achievement. The ex post facto study employed an explanatory sequential mixed methods design. The non-probability sample consisted of 163 fifth grade students in an OFYP district and 137 5th graders in a 180-day instructional year school…

  12. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem in which an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting in either a reward or a penalty. The reward probabilities of the arms are unknown, and one must therefore balance exploiting existing knowledge about the arms against obtaining new information.
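The abstract does not reproduce the automaton's update rule; the Bayesian Learning Automaton is closely related to Thompson sampling with Beta posteriors, so a sketch under that assumption (function and parameter names are illustrative) is:

```python
import random

def thompson_tabb(reward_probs, pulls, seed=0):
    """Thompson sampling on a two-armed Bernoulli bandit (TABB).

    Each arm keeps a Beta(wins + 1, losses + 1) posterior; on every
    round the arm with the larger posterior sample is pulled.
    """
    rng = random.Random(seed)
    wins, losses = [0, 0], [0, 0]
    total_reward = 0
    for _ in range(pulls):
        # One posterior draw per arm; sampling handles the
        # exploration/exploitation balance automatically.
        draws = [rng.betavariate(wins[a] + 1, losses[a] + 1) for a in (0, 1)]
        arm = 0 if draws[0] >= draws[1] else 1
        if rng.random() < reward_probs[arm]:
            wins[arm] += 1
            total_reward += 1
        else:
            losses[arm] += 1
    return wins, losses, total_reward
```

With a wide gap between the two reward probabilities, play concentrates on the better arm after only a few exploratory pulls of the worse one.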

  13. Utilization of medical services in the public health system in the Southern Brazil.

    PubMed

    Bastos, Gisele Alsina Nader; Duca, Giovâni Firpo Del; Hallal, Pedro Curi; Santos, Iná S

    2011-06-01

To estimate the prevalence and analyze factors associated with the utilization of medical services in the public health system. Cross-sectional population-based study with 2,706 individuals aged 20-69 years carried out in Pelotas, Southern Brazil, in 2008. A systematic sampling with probability proportional to the number of households in each sector was adopted. The outcome was defined by combining questions on medical consultation in the previous three months and the place of consultation. The exposure variables were: sex, age, marital status, level of schooling, family income, self-reported hospital admission in the previous year, having a regular physician, self-perception of health, and the main reason for the last consultation. Descriptive analysis was stratified by sex, and the analytical statistics included Wald tests for trend and heterogeneity in the crude analysis and Poisson regression with robust variance in the adjusted analysis, taking the cluster sampling into consideration. The prevalence of utilization of medical services in the previous three months was 60.6%, almost half of these (42.0%, 95%CI: 36.6;47.5) in public services. The most utilized public services were the primary care units (49.5%). In the adjusted analysis stratified by sex, men of advanced age and young women had a higher probability of using medical services in the public system. In both sexes, low level of schooling, low per capita family income, not having a regular physician, and hospital admission in the previous year were associated with the outcome. Despite the expressive reduction in the utilization of medical services in the public system in the last 15 years, the public services are now reaching a previously unassisted portion of the population (individuals with low income and schooling).

  14. Doubly Robust and Efficient Estimation of Marginal Structural Models for the Hazard Function

    PubMed Central

    Zheng, Wenjing; Petersen, Maya; van der Laan, Mark

    2016-01-01

    In social and health sciences, many research questions involve understanding the causal effect of a longitudinal treatment on mortality (or time-to-event outcomes in general). Often, treatment status may change in response to past covariates that are risk factors for mortality, and in turn, treatment status may also affect such subsequent covariates. In these situations, Marginal Structural Models (MSMs), introduced by Robins (1997), are well-established and widely used tools to account for time-varying confounding. In particular, a MSM can be used to specify the intervention-specific counterfactual hazard function, i.e. the hazard for the outcome of a subject in an ideal experiment where he/she was assigned to follow a given intervention on their treatment variables. The parameters of this hazard MSM are traditionally estimated using the Inverse Probability Weighted estimation (IPTW, van der Laan and Petersen (2007), Robins et al. (2000b), Robins (1999), Robins et al. (2008)). This estimator is easy to implement and admits Wald-type confidence intervals. However, its consistency hinges on the correct specification of the treatment allocation probabilities, and the estimates are generally sensitive to large treatment weights (especially in the presence of strong confounding), which are difficult to stabilize for dynamic treatment regimes. In this paper, we present a pooled targeted maximum likelihood estimator (TMLE, van der Laan and Rubin (2006)) for MSM for the hazard function under longitudinal dynamic treatment regimes. The proposed estimator is semiparametric efficient and doubly robust, hence offers bias reduction and efficiency gain over the incumbent IPTW estimator. Moreover, the substitution principle rooted in the TMLE potentially mitigates the sensitivity to large treatment weights in IPTW. We compare the performance of the proposed estimator with the IPTW and a non-targeted substitution estimator in a simulation study. PMID:27227723

  15. Sequential quantum secret sharing in a noisy environment aided with weak measurements

    NASA Astrophysics Data System (ADS)

    Ray, Maharshi; Chatterjee, Sourav; Chakrabarty, Indranil

    2016-05-01

In this work we give an (n,n)-threshold protocol for sequential secret sharing of quantum information for the first time. By sequential secret sharing we refer to a situation where the dealer does not hold all the secrets at the beginning of the protocol; if the dealer wishes to share secrets in subsequent phases, she/he can do so with the help of our protocol. We first present the protocol for three parties and later generalize it to situations with more (n > 3) parties. Interestingly, we show that our sequential secret sharing protocol requires fewer quantum and classical resources than repeatedly running existing protocols. Further, in a more realistic setting, we consider the sharing of qubits through two kinds of noisy channels, namely the phase damping channel (PDC) and the amplitude damping channel (ADC). When we carry out sequential secret sharing in the presence of noise, we observe that the fidelity of secret sharing at the kth iteration is independent of the effect of noise at the (k - 1)th iteration. In the case of the ADC we find that the average fidelity of secret sharing drops to 1/2, which is equivalent to a random guess of the quantum secret. Interestingly, we find that applying weak measurements can enhance the average fidelity; this increase comes with a trade-off against the success probability of the weak measurements.

  16. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

Developing subject-specific classifiers that recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy of 83.4% and an average decision time of 2.77 s for the proposed method, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful for explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
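In Wald's classical approximations (the baseline this paper modifies, not its balanced-threshold variant), the two SPRT thresholds depend only on the tolerated error rates, and the expected stopping time follows from them. A sketch, taking the per-sample mean log-likelihood-ratio increment under H1 as an assumed input:

```python
import math

def sprt_thresholds(alpha, beta):
    """Wald's approximate log-likelihood-ratio decision thresholds.

    alpha: tolerated false-positive rate; beta: tolerated false-negative rate.
    """
    upper = math.log((1 - beta) / alpha)  # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))  # cross below -> accept H0
    return lower, upper

def expected_samples_h1(alpha, beta, mean_llr_increment_h1):
    """Wald's approximation to the expected stopping time under H1."""
    lower, upper = sprt_thresholds(alpha, beta)
    return ((1 - beta) * upper + beta * lower) / mean_llr_increment_h1
```

Tightening alpha and beta pushes the thresholds apart and lengthens the expected decision time, which is the explicit stopping-time/threshold/error relationship the abstract refers to.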

  17. Impact of a Sequential Intervention on Albumin Utilization in Critical Care.

    PubMed

    Lyu, Peter F; Hockenberry, Jason M; Gaydos, Laura M; Howard, David H; Buchman, Timothy G; Murphy, David J

    2016-07-01

    Literature generally finds no advantages in mortality risk for albumin over cheaper alternatives in many settings. Few studies have combined financial and nonfinancial strategies to reduce albumin overuse. We evaluated the effect of a sequential multifaceted intervention on decreasing albumin use in ICU and explore the effects of different strategies. Prospective prepost cohort study. Eight ICUs at two hospitals in an academic healthcare system. Adult patients admitted to study ICUs from September 2011 to August 2014 (n = 22,004). Over 2 years, providers in study ICUs participated in an intervention to reduce albumin use involving monthly feedback and explicit financial incentives in the first year and internal guidelines and order process changes in the second year. Outcomes measured were albumin orders per ICU admission, direct albumin costs, and mortality. Mean (SD) utilization decreased 37% from 2.7 orders (6.8) per admission during the baseline to 1.7 orders (4.6) during the intervention (p < 0.001). Regression analysis revealed that the intervention was independently associated with 0.9 fewer orders per admission, a 42% relative decrease. This adjusted effect consisted of an 18% reduction in the probability of using any albumin (p < 0.001) and a 29% reduction in the number of orders per admission among patients receiving any (p < 0.001). Secondary analysis revealed that probability reductions were concurrent with internal guidelines and order process modification while reductions in quantity occurred largely during the financial incentives and feedback period. Estimated cost savings totaled $2.5M during the 2-year intervention. There was no significant difference in ICU or hospital mortality between baseline and intervention. 
A sequential intervention achieved significant reductions in ICU albumin use and cost savings without changes in patient outcomes, supporting the combination of financial and nonfinancial strategies to align providers with evidence-based practices.

  18. Predicting Sasang Constitution Using Body-Shape Information

    PubMed Central

    Jang, Eunsu; Do, Jun-Hyeong; Jin, HeeJeong; Park, KiHyun; Ku, Boncho; Lee, Siwoo; Kim, Jong Yeol

    2012-01-01

    Objectives. Body measurement plays a pivotal role not only in the diagnosis of disease but also in the classification of typology. Sasang constitutional medicine, which is one of the forms of Traditional Korean Medicine, is considered to be strongly associated with body shape. We attempted to determine whether a Sasang constitutional analytic tool based on body shape information (SCAT-B) could predict Sasang constitution (SC). Methods. After surveying 23 Oriental medical clinics, 2,677 subjects were recruited and body shape information was collected. The SCAT-Bs for males and females were developed using multinomial logistic regression. Stepwise forward-variable selection was applied using the score statistic and Wald's test. Results. The predictive rates of the SCAT-B for Tae-eumin (TE), Soeumin (SE), and Soyangin (SY) types in males and females were 80.2%, 56.9%, and 37.7% (males) and 69.3%, 38.9%, and 50.0% (females) in the training set and were 74%, 70.1%, and 35% (males), and 67.4%, 66.3%, and 53.7% (females) in the test set, respectively. Higher constitutional probability scores showed a trend for association with higher predictability. Conclusions. This study shows that the Sasang constitutional analytic tool, which is based on body shape information, may be relatively highly predictive of TE type but may be less predictive when used for SY type. PMID:22792124

  19. The future of metabolic syndrome and cardiovascular disease prevention: polyhype or polyhope? Tales from the polyera.

    PubMed

    Franco, O H; Karnik, K; Bonneux, L

    2007-09-01

Recently society has been witnessing the rise of a new era in the prevention and treatment of the metabolic syndrome and cardiovascular disease: the Polyera. This new era started when a promising concept - the Polypill - was introduced by Wald et al. in 2003. The Polypill is a theoretical combination of six pharmacological compounds (a statin, three different antihypertensives, aspirin, and folic acid) that in combination could reduce cardiovascular disease by more than 80%. Although the Polypill could theoretically be a highly effective intervention, it is not yet available on the market and its effectiveness remains unproven. In the population at large, cheap prices may come at prohibitive costs. With frail elderly patients and population prevalences of co-morbidity far higher than in drug trials, rare adverse effects may be frequent. In December 2004, a more natural, safer, and probably tastier alternative to the Polypill - the Polymeal - was introduced. Contrary to the Polypill, the Polymeal combines six different foods (fruits and vegetables, almonds, chocolate, wine, fish, and garlic) that, taken together on a regular basis, could cut cardiovascular disease risk by over 75%. Polyproducts from the polyera in true populations might hide unexpected polyinteractions. In the polyera, polytrials will need to establish benefits, harms, and costs.

  20. Dysfunction of bulbar central pattern generator in ALS patients with dysphagia during sequential deglutition.

    PubMed

    Aydogdu, Ibrahim; Tanriverdi, Zeynep; Ertekin, Cumhur

    2011-06-01

The aim of this study is to investigate a probable dysfunction of the central pattern generator (CPG) in dysphagic patients with ALS. We investigated 58 patients with ALS, 23 patients with PD, and 33 normal subjects. The laryngeal movements and EMG of the submental muscles were recorded during sequential water swallowing (SWS) of 100 ml of water. The coordination of SWS and respiration was also studied in some normal cases and ALS patients. Normal subjects could complete the SWS optimally within 10 s using 7 swallows, while in dysphagic ALS patients, the total duration and the number of swallows were significantly increased. The novel finding was that the regularity and rhythmicity of the swallowing pattern during SWS was disorganized into an irregular and arrhythmic pattern in 43% of the ALS patients. The duration and speed of swallowing were the most sensitive parameters for the disturbed oropharyngeal motility during SWS. The corticobulbar control of swallowing is insufficient in ALS, and the swallowing CPG cannot work well enough to produce segmental muscle activation and sequential swallowing. CPG dysfunction can result in irregular and arrhythmical sequential swallowing in ALS patients with bulbar plus pseudobulbar types. The arrhythmical SWS pattern can be considered a kind of dysfunction of the CPG in human ALS cases with dysphagia. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Dry minor mergers and size evolution of high-z compact massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Oogi, Taira; Habe, Asao

    2012-09-01

Recent observations show evidence that high-z (z ~ 2 - 3) early-type galaxies (ETGs) are considerably more compact than galaxies of comparable mass at z ~ 0. The dry merger scenario is one of the most probable explanations for such size evolution. However, previous studies based on this scenario have not succeeded in explaining the properties of both high-z compact massive ETGs and local ETGs consistently. We investigate the effects of sequential, multiple dry minor (stellar mass ratio M2/M1 < 1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that sequential minor mergers of compact satellite galaxies are the most efficient at growing the size and decreasing the velocity dispersion of compact massive ETGs. The change of stellar size and density of the merger remnant is consistent with recent observations. Furthermore, we construct the merger histories of candidate high-z compact massive ETGs using the Millennium Simulation Database, and estimate the size growth of the galaxies through dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained in the case of the sequential minor mergers in our simulations.

  2. Women in American History: A Series. Book Four, Women in the Progressive Era 1890-1920.

    ERIC Educational Resources Information Center

    Sanders, Beverly

    The document, one in a series of four on women in American history, discusses the role of women in the Progressive Era (1890-1920). Designed to supplement high school U.S. history textbooks, the book is comprised of five chapters. Chapter I describes reformers and radicals including Jane Addams and Lillian Wald who began the settlement house…

  3. An Examination of Reading and Discipline Data for Elementary and Secondary African American Students: Implications for Special Education

    ERIC Educational Resources Information Center

    Bowman-Perrott, Lisa; Lewis, Chance W.

    2008-01-01

    The achievement gap between African American students and their Caucasian peers has been of concern for quite some time in the field of education (Orfield, Losen, Wald, & Swanson, 2004). As a result, this article examined high stakes reading test scores for 4,135 African American students in grades 3 through 10 in a Midwestern school district.…

  4. Does the decision in a validation process of a surrogate endpoint change with level of significance of treatment effect? A proposal on validation of surrogate endpoints.

    PubMed

    Sertdemir, Y; Burgut, R

    2009-01-01

In recent years the use of surrogate endpoints (S) has become an interesting issue. In clinical trials, it is important to obtain treatment outcomes as early as possible. For this reason there is a need for surrogate endpoints (S), which are measured earlier than the true endpoint (T). However, before a surrogate endpoint can be used it must be validated. For a candidate surrogate endpoint, for example time to recurrence, the validation result may change dramatically between clinical trials. The aim of this study is to show how the validation criterion (R(2)(trial)) proposed by Buyse et al. is influenced by the magnitude of the treatment effect, with an application using real data. The criterion R(2)(trial) proposed by Buyse et al. (2000) is applied to four data sets from colon cancer clinical trials (C-01, C-02, C-03 and C-04). Each clinical trial is analyzed separately for the treatment effect on survival (true endpoint) and recurrence-free survival (surrogate endpoint), and this analysis is also done for each center in each trial. The results are used for a standard validation analysis, with the centers grouped by the Wald statistic into 3 equal groups. The validation criterion R(2)(trial) was 0.641, 95% CI (0.432-0.782); 0.223, 95% CI (0.008-0.503); 0.761, 95% CI (0.550-0.872); and 0.560, 95% CI (0.404-0.687) for C-01, C-02, C-03 and C-04, respectively. The R(2)(trial) criterion changed with the Wald statistics observed for the centers used in the validation process: the higher the Wald statistic group, the higher the observed R(2)(trial) values. Recurrence-free survival is thus not a good surrogate for overall survival in clinical trials with nonsignificant treatment effects, and only a moderate surrogate when treatment effects are significant. This shows that the level of significance of the treatment effect should be taken into account in the validation process of surrogate endpoints.

  5. Prognostic utility of the Seattle Heart Failure Score and amino terminal pro B-type natriuretic peptide in varying stages of systolic heart failure.

    PubMed

    Adlbrecht, Christopher; Hülsmann, Martin; Neuhold, Stephanie; Strunk, Guido; Pacher, Richard

    2013-05-01

Cardiac transplantation represents the best procedure to improve long-term clinical outcome in advanced chronic heart failure (CHF), provided pre-selection criteria ensure that the risk of the failing heart outweighs the risk of transplantation. Although the cornerstone of success, risk assessment in heart transplant candidates is still under-investigated. Amino terminal pro B-type natriuretic peptide (NT-proBNP) is regarded as the best predictor of outcome in CHF, and the Seattle Heart Failure Score (SHFS), comprising clinical markers, is widely used if NT-proBNP is unavailable. The present study assessed the predictive value for all-cause death of the SHFS in CHF patients and compared it with NT-proBNP in a multivariate model including established baseline parameters known to predict survival. A total of 429 patients receiving stable HF-specific pharmacotherapy were included and monitored for 53.4 ± 20.6 months. Of these, 133 patients (31%) died during follow-up. Several established predictors of death proved significant on univariate analysis for the total study cohort. Systolic pulmonary arterial pressure (hazard ratio [HR], 1.03; 95% confidence interval [CI], 1.02-1.05; p < 0.001, Wald 15.1), logNT-proBNP (HR, 1.51; 95% CI, 1.22-1.86; p < 0.001, Wald 14.9), and the SHFS (HR, 0.99; 95% CI, 0.99-1.00; p < 0.001, Wald 12.6) remained within the stepwise multivariate Cox regression model as independent predictors of all-cause death. Receiver operating characteristic curve analysis revealed an area under the curve of 0.802 for logNT-proBNP and 0.762 for the SHFS. NT-proBNP is the more potent marker for identifying patients at the highest risk. If the NT-proBNP measurement is unavailable, the SHFS may serve as an adequate clinical surrogate to predict all-cause death. Copyright © 2013 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

  6. Aortic Stiffness, Ambulatory Blood Pressure, and Predictors of Response to Antihypertensive Therapy in Hemodialysis.

    PubMed

    Georgianos, Panagiotis I; Agarwal, Rajiv

    2015-08-01

    Arterial stiffness is associated with elevated blood pressure (BP), but it is unclear whether it also makes hypertension more resistant to treatment. Among hypertensive dialysis patients, this study investigated whether aortic stiffness determines ambulatory BP and predicts its improvement with therapy. Post hoc analysis of the Hypertension in Hemodialysis Patients Treated With Atenolol or Lisinopril (HDPAL) trial. 179 hypertensive hemodialysis patients with echocardiographic left ventricular hypertrophy. Baseline aortic pulse wave velocity (PWV). Baseline and treatment-induced change in 44-hour ambulatory BP at 3, 6, and 12 months. Aortic PWV was assessed with an echocardiographic-Doppler technique (ACUSON Cypress, Siemens Medical), and 44-hour interdialytic ambulatory BP monitoring was performed with a Spacelabs 90207 monitor. Mean baseline aortic PWV was 7.6±2.7 (SD) m/s. Overall treatment-induced changes in ambulatory systolic BP (SBP) were -15.6±20.4, -18.9±22.5, and -20.0±19.7 mmHg at 3, 6, and 12 months. Changes in SBP were no different among tertiles of baseline PWV. Aortic PWV was associated directly with baseline ambulatory SBP and pulse pressure (PP) and inversely with diastolic BP (DBP). After adjustment for several cardiovascular risk factors, each 1-m/s higher PWV was associated with 1.34-mm Hg higher baseline SBP (β=1.34±0.46; P=0.004) and 1.02-mm Hg higher PP (β=1.02±0.33; P=0.002), whereas the association with DBP was no longer significant. Baseline PWV did not predict treatment-induced changes in SBP (Wald test, P=0.3) and DBP (Wald test, P=0.7), but was a predictor of an overall improvement in PP during follow-up (Wald test, P=0.03). Observational design; predominantly black patients were studied. Because aortic PWV is not a predictor of treatment-induced change in ambulatory BP among hypertensive dialysis patients, it indicates that among these patients, hypertension can be controlled successfully regardless of aortic stiffness. 
Published by Elsevier Inc.

  7. Sequential Revision of Belief, Trust Type, and the Order Effect.

    PubMed

    Entin, Elliot E; Serfaty, Daniel

    2017-05-01

    Objective To investigate how people's sequential adjustments to their position are affected by the source of the information. Background There is an extensive body of research on how the order in which new information is received affects people's final views and decisions, as well as research on how they adjust their views in light of new information. Method Seventy college-aged students, 60% of whom were women, completed one of eight randomly distributed booklets corresponding to the eight between-subjects treatment conditions created by crossing the two levels of information source with the four levels of the order condition. Based on the information provided, participants estimated the probability of an attack, the dependent measure. Results Confirming information from an expert intelligence officer increased the attack probability from the initial position significantly more than confirming information from a longtime friend. Conversely, disconfirming information from a longtime friend decreased the attack probability significantly more than the same information from an intelligence officer. Conclusion Confirming and disconfirming evidence were weighted differently depending on the information source, either an expert or a close friend. The difference appears to reflect two kinds of trust: cognition-based trust imbued to an expert and affect-based trust imbued to a close friend. Application Purveyors of information need to understand that it is not only the content of a message that counts; other forces are at work, such as the order in which information is received and the characteristics of the information source.

  8. Optimal nonlinear filtering using the finite-volume method

    NASA Astrophysics Data System (ADS)

    Fox, Colin; Morrison, Malcolm E. K.; Norton, Richard A.; Molteno, Timothy C. A.

    2018-01-01

    Optimal sequential inference, or filtering, for the state of a deterministic dynamical system requires simulation of the Frobenius-Perron operator, which can be formulated as the solution of a continuity equation. For low-dimensional, smooth systems, the finite-volume numerical method provides a solution that conserves probability and gives estimates that converge to the optimal continuous-time values, while a Courant-Friedrichs-Lewy-type condition assures that intermediate discretized solutions remain positive density functions. This method is demonstrated in an example of nonlinear filtering for the state of a simple pendulum, with comparison to results using the unscented Kalman filter, and for a case where rank-deficient observations lead to multimodal probability distributions.
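    As an illustration of the conservation and positivity properties described above, the following sketch advances a discrete density one step under a 1D continuity equation with a first-order upwind finite-volume scheme on a periodic grid. The grid size, velocity field, and time step are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    def fv_step(p, v, dx, dt):
        """One upwind finite-volume step for the 1D continuity equation
        dp/dt + d(v*p)/dx = 0 on a periodic grid."""
        f = v * p
        # Upwind interface flux: take the cell on the side the flow comes from
        flux = np.where(v > 0, f, np.roll(f, -1))
        # Conservative update: what leaves one cell enters its neighbour
        return p - dt / dx * (flux - np.roll(flux, 1))

    n, dx = 200, 0.01
    x = np.arange(n) * dx
    p = np.exp(-((x - 1.0) ** 2) / 0.01)   # unnormalized Gaussian bump
    p /= p.sum() * dx                      # normalize to a probability density
    v = np.full(n, 1.0)
    dt = 0.5 * dx / np.abs(v).max()        # CFL condition keeps p nonnegative
    p1 = fv_step(p, v, dx, dt)
    ```

    Total probability p.sum() * dx is preserved exactly because the interface fluxes telescope around the periodic grid, and the CFL restriction dt <= dx / |v| guarantees the updated values stay nonnegative, mirroring the two properties highlighted in the abstract.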

  9. Health Monitoring of a Satellite System

    NASA Technical Reports Server (NTRS)

    Chen, Robert H.; Ng, Hok K.; Speyer, Jason L.; Guntur, Lokeshkumar S.; Carpenter, Russell

    2004-01-01

    A health monitoring system based on analytical redundancy is developed for satellites on elliptical orbits. First, the dynamics of the satellite including orbital mechanics and attitude dynamics is modelled as a periodic system. Then, periodic fault detection filters are designed to detect and identify the satellite's actuator and sensor faults. In addition, parity equations are constructed using the algebraic redundant relationship among the actuators and sensors. Furthermore, a residual processor is designed to generate the probability of each of the actuator and sensor faults by using a sequential probability test. Finally, the health monitoring system, consisting of periodic fault detection filters, parity equations and residual processor, is evaluated in the simulation in the presence of disturbances and uncertainty.

  10. Optimal decision making on the basis of evidence represented in spike trains.

    PubMed

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values drawn from Gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.

  11. Sequential CFAR detectors using a dead-zone limiter

    NASA Astrophysics Data System (ADS)

    Tantaratana, Sawasd

    1990-09-01

    The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, the output of which is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs. The test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set of data. Both constant and linear boundaries are considered. Numerical results show a significant reduction of the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
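    A minimal sketch of the dead-zone statistic and a sequential test with constant boundaries follows; the threshold c and the boundary values are illustrative choices, not figures from the paper.

    ```python
    import numpy as np

    def dead_zone_statistic(x, c):
        """Sum of dead-zone limiter outputs: each sample maps to -1, 0, or +1
        according to whether it lies below -c, inside [-c, c], or above c."""
        x = np.asarray(x, dtype=float)
        return int(np.sum(np.sign(x) * (np.abs(x) > c)))

    def sequential_dz_test(samples, c, lower, upper):
        """Accumulate limiter outputs one sample at a time, stopping as soon
        as the running sum crosses a constant boundary."""
        t = 0
        for n, x in enumerate(samples, 1):
            if abs(x) > c:
                t += 1 if x > 0 else -1
            if t >= upper:
                return "target", n      # detection
            if t <= lower:
                return "noise", n       # dismissal
        return "undecided", len(samples)
    ```

    For example, dead_zone_statistic([1.0, -0.2, -2.0], c=0.5) adds +1, 0 and -1 to give 0, and a run of strongly positive samples crosses the upper boundary after only a handful of informative observations, which is the reduction in average sample number the abstract reports relative to a fixed-sample-size detector.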

  12. Structured filtering

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Wiebe, Nathan

    2017-08-01

    A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.

  13. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
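    The three-way decision rule described above is Wald's classic SPRT; a minimal sketch, using generic log-likelihood-ratio increments and illustrative error rates rather than the genetics-specific likelihoods of the paper, looks like this:

    ```python
    import math

    def sprt(llr_increments, alpha=0.05, beta=0.05):
        """Wald's sequential probability ratio test: accumulate log-likelihood
        ratios and stop when a boundary is crossed; otherwise keep sampling."""
        upper = math.log((1 - beta) / alpha)   # cross upward: accept H1
        lower = math.log(beta / (1 - alpha))   # cross downward: accept H0
        llr = 0.0
        for n, inc in enumerate(llr_increments, 1):
            llr += inc
            if llr >= upper:
                return "associated", n
            if llr <= lower:
                return "not associated", n
        return "keep sampling", len(llr_increments)
    ```

    Unlike a fixed-sample test, the third outcome ("keep sampling") makes explicit that the evidence so far is inconclusive, which is exactly the extra SNP group the abstract describes.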

  14. Gleason-Busch theorem for sequential measurements

    NASA Astrophysics Data System (ADS)

    Flatt, Kieran; Barnett, Stephen M.; Croke, Sarah

    2017-12-01

    Gleason's theorem is a statement that, given some reasonable assumptions, the Born rule used to calculate probabilities in quantum mechanics is essentially unique [A. M. Gleason, Indiana Univ. Math. J. 6, 885 (1957), 10.1512/iumj.1957.6.56050]. We show that Gleason's theorem contains within it also the structure of sequential measurements, and along with this the state update rule. We give a small set of axioms, which are physically motivated and analogous to those in Busch's proof of Gleason's theorem [P. Busch, Phys. Rev. Lett. 91, 120403 (2003), 10.1103/PhysRevLett.91.120403], from which the familiar Kraus operator form follows. An axiomatic approach has practical relevance as well as fundamental interest, in making clear those assumptions which underlie the security of quantum communication protocols. Interestingly, the two-time formalism is seen to arise naturally in this approach.

  15. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

    We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization, for instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than with independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.

  17. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  18. Patient satisfaction with ambulatory care in Germany: effects of patient- and medical practice-related factors.

    PubMed

    Auras, Silke; Ostermann, Thomas; de Cruppé, Werner; Bitzer, Eva-Maria; Diel, Franziska; Geraedts, Max

    2016-12-01

    The study aimed to illustrate the effects of patients' sex, age, self-rated health and medical practice specialization on patient satisfaction. Secondary analysis of patient survey data using multilevel analysis (generalized linear mixed model, with medical practice as a random effect) and a sequential modelling strategy. We examined the effects of the patients' sex, age, self-rated health and medical practice specialization on four patient satisfaction dimensions: medical practice organization, information, interaction and professional competence. The study was performed in 92 German medical practices providing ambulatory care in general medicine, internal medicine or gynaecology. In total, 9888 adult patients participated in a patient survey using the validated 'questionnaire on satisfaction with ambulatory care-quality from the patient perspective [ZAP]'. We calculated four models for each satisfaction dimension, revealing regression coefficients with 95% confidence intervals (CIs) for all independent variables, and used the Wald chi-square statistic for each modelling step (model validity) and likelihood ratio tests to compare the models of each step with those of the previous step. The patients' sex and age had a weak effect (maximum regression coefficient 1.09, CI 0.39; 1.80), and the patients' self-rated health had the strongest positive effect (maximum regression coefficient 7.66, CI 6.69; 8.63) on satisfaction ratings. The effect of medical practice specialization was heterogeneous. All factors studied, specifically the patients' self-rated health, affected patient satisfaction. Adjustment should always be considered because it improves the comparability of patient satisfaction in medical practices with atypically varying patient populations and increases the acceptance of comparisons. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved.

  19. A Looping-Based Model for Quenching Repression

    PubMed Central

    Pollak, Yaroslav; Goldberg, Sarah; Amit, Roee

    2017-01-01

    We model the regulatory role of proteins bound to looped DNA using a simulation in which dsDNA is represented as a self-avoiding chain, and proteins as spherical protrusions. We simulate long self-avoiding chains using a sequential importance sampling Monte-Carlo algorithm, and compute the probabilities for chain looping with and without a protrusion. We find that a protrusion near one of the chain’s termini reduces the probability of looping, even for chains much longer than the protrusion–chain-terminus distance. This effect increases with protrusion size, and decreases with protrusion-terminus distance. The reduced probability of looping can be explained via an eclipse-like model, which provides a novel inhibitory mechanism. We test the eclipse model on two possible transcription-factor occupancy states of the D. melanogaster eve 3/7 enhancer, and show that it provides a possible explanation for the experimentally-observed eve stripe 3 and 7 expression patterns. PMID:28085884

  20. Probability matching and strategy availability.

    PubMed

    Koehler, Derek J; James, Greta

    2010-09-01

    Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
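    The accuracy gap between the two strategies is easy to quantify; the sketch below, using an arbitrary base rate of 0.75 for illustration, compares the expected hit rates of maximizing and matching:

    ```python
    def expected_accuracy(p):
        """Expected hit rate for a binary outcome occurring with probability p.
        Maximizing always predicts the more likely outcome; matching predicts
        each outcome at its own base rate."""
        maximizing = max(p, 1 - p)
        matching = p * p + (1 - p) * (1 - p)
        return maximizing, matching

    maximizing, matching = expected_accuracy(0.75)
    # With p = 0.75: maximizing achieves 0.75, matching only 0.625
    ```

    Matching is strictly worse whenever p differs from 0.5, which is why failing to bring the maximizing strategy to mind carries a real cost.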

  1. Near real-time adverse drug reaction surveillance within population-based health networks: methodology considerations for data accrual.

    PubMed

    Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S

    2013-05-01

    This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with 1.5 month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with 1.5 month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Dizocilpine (MK-801) impairs learning in the active place avoidance task but has no effect on the performance during task/context alternation.

    PubMed

    Vojtechova, Iveta; Petrasek, Tomas; Hatalova, Hana; Pistikova, Adela; Vales, Karel; Stuchlik, Ales

    2016-05-15

    The prevention of engram interference, pattern separation, flexibility, cognitive coordination and spatial navigation are usually studied separately at the behavioral level. Impairment in executive functions is often observed in patients suffering from schizophrenia. We have designed a protocol for assessing these functions all together as behavioral separation. This protocol is based on alternated or sequential training in two tasks testing different hippocampal functions (the Morris water maze and active place avoidance), and alternated or sequential training in two similar environments of the active place avoidance task. In Experiment 1, we tested, in adult rats, whether the performance in two different spatial tasks was affected by their order in sequential learning, or by their day-to-day alternation. In Experiment 2, rats learned to solve the active place avoidance task in two environments either alternately or sequentially. We found that rats are able to acquire both tasks and to discriminate both similar contexts without obvious problems regardless of the order or the alternation. We used two groups of rats, controls and a rat model of psychosis induced by a subchronic intraperitoneal application of 0.08 mg/kg of dizocilpine (MK-801), a non-competitive antagonist of NMDA receptors. Dizocilpine had no selective effect on parallel/sequential learning of tasks/contexts. However, it caused hyperlocomotion and a significant deficit in learning in the active place avoidance task regardless of the task alternation. Cognitive coordination tested by this task is probably more sensitive to dizocilpine than spatial orientation because no hyperactivity or learning impairment was observed in the Morris water maze. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Sequential stages and distribution patterns of aging-related tau astrogliopathy (ARTAG) in the human brain.

    PubMed

    Kovacs, Gabor G; Xie, Sharon X; Robinson, John L; Lee, Edward B; Smith, Douglas H; Schuck, Theresa; Lee, Virginia M-Y; Trojanowski, John Q

    2018-06-11

    Aging-related tau astrogliopathy (ARTAG) describes tau pathology in astrocytes in different locations and anatomical regions. In the present study we addressed the question of whether sequential distribution patterns can be recognized for ARTAG or astroglial tau pathologies in both primary FTLD-tauopathies and non-FTLD-tauopathy cases. By evaluating 687 postmortem brains with diverse disorders we identified ARTAG in 455. We evaluated frequencies and hierarchical clustering of anatomical involvement and used conditional probability and logistic regression to model the sequential distribution of ARTAG and astroglial tau pathologies across different brain regions. For subpial and white matter ARTAG we recognize three and two patterns, respectively, each with three stages initiated or ending in the amygdala. Subependymal ARTAG does not show a clear sequential pattern. For grey matter (GM) ARTAG we recognize four stages including a striatal pathway of spreading towards the cortex and/or amygdala, and the brainstem, and an amygdala pathway, which precedes the involvement of the striatum and/or cortex and proceeds towards the brainstem. GM ARTAG and astrocytic plaque pathology in corticobasal degeneration follows a predominantly frontal-parietal cortical to temporal-occipital cortical, to subcortical, to brainstem pathway (four stages). GM ARTAG and tufted astrocyte pathology in progressive supranuclear palsy shows a striatum to frontal-parietal cortical to temporal to occipital, to amygdala, and to brainstem sequence (four stages). In Pick's disease cases with astroglial tau pathology an overlapping pattern with PSP can be appreciated. We conclude that tau-astrogliopathy type-specific sequential patterns cannot be simplified as neuron-based staging systems. The proposed cytopathological and hierarchical stages provide a conceptual approach to identify the initial steps of the pathogenesis of tau pathologies in ARTAG and primary FTLD-tauopathies.

  4. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to address the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.

  5. Decay modes of the Hoyle state in 12C

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Bonasera, A.; Huang, M.; Zhang, S.

    2018-04-01

    Recent experimental results place an upper limit of 0.043% (95% C.L.) on the direct decay of the Hoyle state into 3α relative to the sequential decay into 8Be + α. We performed one- and two-dimensional tunneling calculations to estimate this ratio and found it to be more than one order of magnitude smaller than the experimental limit, depending on the range of the nuclear force. This is within high-statistics experimental capabilities. Our results can also be tested by measuring the decay modes of high-excitation-energy states of 12C, where the ratio of direct to sequential decay might reach 10% at E*(12C) = 10.3 MeV. The link between a Bose-Einstein condensate (BEC) and the direct decay of the Hoyle state is also addressed. We discuss a hypothetical 'Efimov state' at E*(12C) = 7.458 MeV, which would mainly decay sequentially with 3α of equal energies: a counterintuitive result of tunneling. Such a state, if it exists, is at least 8 orders of magnitude less probable than the Hoyle state, thus below the sensitivity of recent and past experiments.

  6. The Evolution of NATO with Four Plausible Threat Scenarios

    DTIC Science & Technology

    1987-08-01

    were political changes as well. The evolution of British political institutions, and especially the widening of the franchise by electoral reform, had...countries. King Edward, in particular, excited the French imagination. In his contact with French statesmen, Edward made clear his desire for an...such as the Mittelland Kanal, British lines of communication and the Teutoburger Wald before making contact. With Soviet air forces alert against

  7. Entropy of nonrotating isolated horizons in Lovelock theory from loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Bo; Huang, Chao-Guang; Li, Lin

    2016-08-01

    In this paper, the BF theory method is applied to the nonrotating isolated horizons in Lovelock theory. The final entropy matches the Wald entropy formula for this theory. We also confirm the conclusion obtained by Bodendorfer et al. that the entropy is related to the flux operator rather than the area operator in general diffeomorphic-invariant theory. Supported by National Natural Science Foundation of China (11275207)

  8. Spruce bark beetle in Sumava NP: A precedent case of EU Wilderness Protection, the role of NGOs and the public in wilderness protection

    Treesearch

    Jaromir Blaha; Vojtech Kotecky

    2015-01-01

    Sumava National Park, in the Czech Republic, is, along with the adjacent Bayerischer Wald NP in Germany, one of the largest wilderness areas in Western and Central Europe. Mountain spruce forests here have been heavily influenced by natural disturbances. Following years of debate about conservation management in the national park, logging operations on the Czech side...

  9. Testing for independence in J×K contingency tables with complex sample survey data.

    PubMed

    Lipsitz, Stuart R; Fitzmaurice, Garrett M; Sinha, Debajyoti; Hevelone, Nathanael; Giovannucci, Edward; Hu, Jim C

    2015-09-01

    The test of independence of row and column variables in a (J×K) contingency table is a widely used statistical test in many areas of application. For complex survey samples, use of the standard Pearson chi-squared test is inappropriate due to correlation among units within the same cluster. Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) proposed an approach in which the standard Pearson chi-squared statistic is multiplied by a design effect to adjust for the complex survey design. Unfortunately, this test fails to exist when one of the observed cell counts equals zero. Even with the large samples typical of many complex surveys, zero cell counts can occur for rare events, small domains, or contingency tables with a large number of cells. Here, we propose Wald and score test statistics for independence based on weighted least squares estimating equations. In contrast to the Rao-Scott test statistic, the proposed Wald and score test statistics always exist. In simulations, the score test is found to perform best with respect to type I error. The proposed method is motivated by, and applied to, post surgical complications data from the United States' Nationwide Inpatient Sample (NIS) complex survey of hospitals in 2008. © 2015, The International Biometric Society.
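    For context, the sketch below shows a first-order Rao-Scott-style correction of the kind the proposed Wald and score tests are designed to improve upon: the standard Pearson statistic is computed and deflated by an average design effect. The table and design effect are invented for illustration.

    ```python
    import numpy as np

    def rao_scott_adjusted_x2(observed, deff):
        """Pearson chi-squared for a J x K table, divided by an average
        design effect to account for within-cluster correlation (sketch).
        Refer the result to chi-squared with (J-1)(K-1) df as usual."""
        o = np.asarray(observed, dtype=float)
        expected = np.outer(o.sum(axis=1), o.sum(axis=0)) / o.sum()
        x2 = ((o - expected) ** 2 / expected).sum()
        df = (o.shape[0] - 1) * (o.shape[1] - 1)
        return x2 / deff, df

    x2_adj, df = rao_scott_adjusted_x2([[30, 10], [10, 30]], deff=2.0)
    # Unadjusted Pearson X2 is 20.0; deflating by deff = 2 gives 10.0 on 1 df
    ```

    The estimated design effect itself breaks down when an observed cell count is zero, which is the failure mode motivating the weighted-least-squares Wald and score tests of the abstract.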

  10. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics, where we need to test for similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, where σ_R is the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test on the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
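    A naive plug-in version of such a test, sketched below with invented numbers, treats the estimated margin ±f × S_R as if it were a fixed constant; this is precisely the practice that can inflate the error rates the exact-based method is designed to control.

    ```python
    def plug_in_equivalence(mean_t, mean_r, se_diff, s_r, f=1.5, crit=1.96):
        """Naive equivalence test: declare the means equivalent when the
        confidence interval for their difference lies entirely inside the
        plug-in margin +/- f * S_R (treated, incorrectly, as known)."""
        margin = f * s_r
        lo = (mean_t - mean_r) - crit * se_diff
        hi = (mean_t - mean_r) + crit * se_diff
        return -margin < lo and hi < margin

    # Small difference, tight interval: declared equivalent
    print(plug_in_equivalence(10.2, 10.0, se_diff=0.1, s_r=1.0))   # True
    # Large difference: interval spills outside the margin
    print(plug_in_equivalence(12.0, 10.0, se_diff=0.5, s_r=1.0))   # False
    ```

    Because S_R is itself a random quantity, conditioning on it as if it were σ_R understates the overall uncertainty, which is why the exact-based construction is needed.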

  11. Antioxidant and anxiolytic activities of Crataegus nigra Wald. et Kit. berries.

    PubMed

    Popovic-Milenkovic, Marija T; Tomovic, Marina T; Brankovic, Snezana R; Ljujic, Biljana T; Jankovic, Slobodan M

    2014-01-01

    Hawthorn has long been present in traditional medicine as an antihypertensive, hypolipidemic, anti-inflammatory, gastroprotective and antimicrobial agent. Hawthorn has also been used to treat stress and nervousness, but there is no published work on the actions of Crataegus nigra Wald. et Kit. fruits. The present study was carried out to test the free-radical-scavenging and anxiolytic activity of C. nigra fruits. The DPPH (alpha,alpha-diphenyl-beta-picrylhydrazyl) assay was used to measure antioxidant activity, with BHT, BHA, PG, quercetin and rutin as standards. The total amounts of phenolic compounds, procyanidins and flavonoids in the extracts were determined spectrophotometrically, and results were expressed as gallic acid, cyanidin chloride and quercetin equivalents, respectively. LC-MS/MS was used for identification and quantification of the phenolic composition. The anxiolytic effect, expressed as the difference in time spent in the open and closed arms, was measured and compared between groups. The phenolic compound content of C. nigra fruits was 72.7 mg/g, the yield of total flavonoid aglycones was 0.115 mg/g, and procyanidins were 5.6 mg/g. The DPPH radical-scavenging capacity of the extracts showed linear concentration dependence, with an IC50 value of 27.33 microg/mL. An anxiolytic effect was observed. The hydroalcoholic extract of C. nigra fruits thus showed both antioxidant and anxiolytic activity.

  12. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates, maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem is collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools is lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available.
This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). 
The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
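    The explicit flight-by-flight Monte Carlo baseline described above can be caricatured in a few lines. Everything below — the lognormal initial crack sizes, the per-flight growth factor, the logistic probability-of-detection curve, and the critical crack size — is an illustrative assumption, not the dissertation's model.

```python
import numpy as np

def simulate_fleet(n_aircraft=5000, n_flights=2000, inspect_every=500, seed=0):
    """Toy flight-by-flight MC: returns (failure probability, repairs per
    aircraft) over the simulated life.  All distributions are made up."""
    rng = np.random.default_rng(seed)
    a = rng.lognormal(-4.0, 0.5, n_aircraft)   # initial crack sizes
    a_crit = 2.0                               # hypothetical failure threshold
    failed = np.zeros(n_aircraft, dtype=bool)
    repairs = 0
    for flight in range(1, n_flights + 1):
        growth = rng.lognormal(-6.0, 0.3, n_aircraft)   # per-flight growth
        a = np.where(failed, a, a * np.exp(growth))
        failed |= a >= a_crit
        if flight % inspect_every == 0:        # scheduled inspection
            pod = 1 / (1 + np.exp(-(np.log(a) + 2.0)))  # detection prob. vs size
            found = (~failed) & (rng.random(n_aircraft) < pod)
            repairs += int(found.sum())
            a[found] = rng.lognormal(-4.0, 0.5, int(found.sum()))  # repair resets
    return failed.mean(), repairs / n_aircraft

p_fail, repair_rate = simulate_fleet()
```

    This kind of routine is "explicit (and expensive)" in exactly the abstract's sense: it provides defensible baseline estimates, while the hidden-Markov / sequential-importance-sampling formulation reaches the same quantities far more efficiently and supports Bayesian updating.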

  13. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Dai, Zhenxue; Gong, Huili

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.

  14. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
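    The continuous-lag transition probability model this record relies on can be sketched from its building blocks: a transition-rate matrix R assembled from facies mean lengths and volumetric proportions, with lag-h transition probabilities given by the matrix exponential T(h) = exp(Rh). The three facies and their parameters below are hypothetical, not the Chaobai River values, and the off-diagonal allocation rule is one common simplifying choice rather than the paper's fitted model.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-facies system (e.g. clay, sand, gravel); proportions
# and mean lengths are illustrative only.
props = np.array([0.5, 0.3, 0.2])      # volumetric proportions
mean_len = np.array([8.0, 5.0, 3.0])   # mean lengths along the lag direction, m

# Transition-rate matrix R: r_kk = -1 / L_k, with off-diagonal rates split
# in proportion to the other facies' volumes.
R = np.zeros((3, 3))
for k in range(3):
    R[k, k] = -1.0 / mean_len[k]
    others = [j for j in range(3) if j != k]
    w = props[others] / props[others].sum()
    R[k, others] = w / mean_len[k]

def transiogram(h):
    """T(h) = expm(R h): entry (j, k) is P(facies k at lag h | facies j at 0)."""
    return expm(R * h)
```

    Each row of T(h) sums to one for every lag, T(0) is the identity, and the rows converge to a common stationary distribution as h grows; fitting such a model separately in each sediment zone is the multi-zone step described above.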

  15. Situation models and memory: the effects of temporal and causal information on recall sequence.

    PubMed

    Brownstein, Aaron L; Read, Stephen J

    2007-10-01

    Participants watched an episode of the television show Cheers on video and then reported free recall. Recall sequence followed the sequence of events in the story; if one concept was observed immediately after another, it was recalled immediately after it. We also made a causal network of the show's story and found that recall sequence followed causal links; effects were recalled immediately after their causes. Recall sequence was more likely to follow causal links than temporal sequence, and most likely to follow causal links that were temporally sequential. Results were similar at 10-minute and 1-week delayed recall. This is the most direct and detailed evidence reported on sequential effects in recall. The causal network also predicted probability of recall; concepts with more links and concepts on the main causal chain were most likely to be recalled. This extends the causal network model to more complex materials than previous research.

  16. Children's sequential information search is sensitive to environmental probabilities.

    PubMed

    Nelson, Jonathan D; Divjak, Bojana; Gudmundsdottir, Gudny; Martignon, Laura F; Meder, Björn

    2014-01-01

    We investigated 4th-grade children's search strategies on sequential search tasks in which the goal is to identify an unknown target object by asking yes-no questions about its features. We used exhaustive search to identify the most efficient question strategies and evaluated the usefulness of children's questions accordingly. Results show that children have good intuitions regarding questions' usefulness and search adaptively, relative to the statistical structure of the task environment. Search was especially efficient in a task environment that was representative of real-world experiences. This suggests that children may use their knowledge of real-world environmental statistics to guide their search behavior. We also compared different related search tasks. We found positive transfer effects from first doing a number search task on a later person search task. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
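    The "usefulness of a question" yardstick in studies like this is commonly expected information gain. A minimal sketch (not the authors' code): in a uniform environment the split-half question is optimal, while in a skewed environment asking directly about the single likely object can be better — the environment sensitivity the abstract describes.

```python
import math

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def question_value(priors, yes_set):
    """Expected information gain (bits) of a yes-no question whose 'yes'
    answer picks out the objects in `yes_set`."""
    p_yes = sum(priors[i] for i in yes_set)
    if p_yes in (0.0, 1.0):
        return 0.0
    post_yes = [priors[i] / p_yes for i in yes_set]
    no_set = [i for i in range(len(priors)) if i not in yes_set]
    post_no = [priors[i] / (1.0 - p_yes) for i in no_set]
    return entropy(priors) - (p_yes * entropy(post_yes)
                              + (1.0 - p_yes) * entropy(post_no))

# Uniform environment over four objects vs a skewed one.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
```

    With `uniform`, `question_value(uniform, {0, 1})` gives 1.0 bit versus about 0.81 for `question_value(uniform, {0})`; with `skewed`, the single-object question (about 0.88 bit) beats the split-half question (about 0.72 bit).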

  17. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been a significant and challenging task, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
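    For context on what GSS accelerates, here is the crude Monte Carlo estimate of a failure probability P(g(X) ≤ 0) — the baseline that subset-simulation methods improve upon for rare events. The limit state below is a standard toy example, not one of the article's problems.

```python
import numpy as np

def mc_failure_probability(limit_state, sampler, n=200000, seed=0):
    """Crude Monte Carlo estimate of P(g(X) <= 0).  Accurate only when n
    is large relative to 1/P_f, which is why rare-event problems need
    subset simulation instead."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    return float(np.mean(limit_state(x) <= 0.0))

# Toy limit state g(x) = 4 - x1 - x2 with independent standard normals:
# failure means x1 + x2 >= 4, so the exact P_f = P(Z >= 4/sqrt(2)) ~ 2.3e-3.
pf = mc_failure_probability(
    lambda x: 4.0 - x[:, 0] - x[:, 1],
    lambda rng, n: rng.standard_normal((n, 2)))
```

    For a target probability of 1e-6 this estimator would need on the order of 1e8 samples; subset simulation reaches it through a short sequence of conditional levels instead.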

  18. Risk Assessment of Pollution Emergencies in Water Source Areas of the Hanjiang-to-Weihe River Diversion Project

    NASA Astrophysics Data System (ADS)

    Liu, Luyao; Feng, Minquan

    2018-03-01

    [Objective] This study quantitatively evaluated risk probabilities of sudden water pollution accidents under the influence of risk sources, thus providing an important guarantee for risk source identification during water diversion from the Hanjiang River to the Weihe River. [Methods] The research used Bayesian networks to represent the correlation between accidental risk sources. It also adopted the sequential Monte Carlo algorithm to combine water quality simulation with state simulation of risk sources, thereby determining standard-exceeding probabilities of sudden water pollution accidents. [Results] When the upstream inflow was 138.15 m3/s and the average accident duration was 48 h, the probabilities were 0.0416 and 0.0056, respectively. When the upstream inflow was 55.29 m3/s and the average accident duration was 48 h, the probabilities were 0.0225 and 0.0028, respectively. [Conclusions] The research conducted a risk assessment of sudden water pollution accidents, thereby providing an important guarantee for the smooth implementation, operation, and water quality of the Hanjiang-to-Weihe River Diversion Project.

  19. Age, period, and cohort analysis of regular dental care behavior and edentulism: A marginal approach

    PubMed Central

    2011-01-01

    Background: To analyze the regular dental care behavior and prevalence of edentulism in adult Danes, reported in sequential cross-sectional oral health surveys, by the application of a marginal approach to account for the possible clustering effect of birth cohorts. Methods: Data from four sequential cross-sectional surveys of non-institutionalized Danes conducted from 1975-2005, comprising 4330 respondents aged 15+ years in 9 birth cohorts, were analyzed. The key study variables were seeking dental care on an annual basis (ADC) and edentulism. For the analysis of ADC, survey year, age, gender, socio-economic status (SES) group, denture-wearing, and school dental care (SDC) during childhood were considered. For the analysis of edentulism, only respondents aged 35+ years were included; survey year, age, gender, SES group, ADC, and SDC during childhood were considered as the independent factors. To take into account the clustering effect of birth cohorts, marginal logistic regressions with an independent correlation structure in generalized estimating equations (GEE) were carried out with PROC GENMOD in SAS software. Results: The overall proportion of people seeking ADC increased from 58.8% in 1975 to 86.7% in 2005, while the overall prevalence of edentulism among respondents aged 35+ years decreased from 36.4% in 1975 to 5.0% in 2005. Females, respondents in the higher SES group, in more recent survey years, with no denture, and receiving SDC in all grades during childhood had a higher probability of seeking ADC regularly (P < 0.05). The interaction of SDC and age was significant (P < 0.0001); the probability of seeking ADC was even higher among subjects with SDC in all grades and aged 45 years or older. Females, the older age group, respondents in earlier survey years, not seeking ADC, in the lower SES group, and not receiving SDC in all grades had a higher probability of being edentulous (P < 0.05). 
Conclusions: With the use of GEE, the potential clustering effect of birth cohorts in sequential cross-sectional oral health survey data could be appropriately considered. The success of Danish dental health policy was demonstrated by a continued increase in regular dental visiting habits and tooth retention in adults, because school dental care was provided to Danes in their childhood. PMID:21410991

  20. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and don’t fit a Poisson process model. 
For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
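    The "simple, conjugate Bayesian models for counting processes" can be sketched with a Gamma–Poisson pair: the posterior predictive for a new count is negative binomial, and a small upper-tail probability flags the count as anomalous. The prior and the example counts below are made up for illustration.

```python
import math

class GammaPoissonMonitor:
    """Gamma(a, b) prior on a Poisson rate; the posterior predictive for a
    new count is negative binomial with r = a and p = b / (b + 1)."""

    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b

    def update(self, count):
        """Conjugate update after observing one period's count."""
        self.a += count
        self.b += 1.0

    def surprise(self, k):
        """Upper-tail probability P(K >= k) under the posterior predictive;
        small values flag k as anomalously large."""
        r, p = self.a, self.b / (self.b + 1.0)
        def pmf(j):
            return math.exp(math.lgamma(j + r) - math.lgamma(r)
                            - math.lgamma(j + 1)
                            + r * math.log(p) + j * math.log(1.0 - p))
        return 1.0 - sum(pmf(j) for j in range(k))

monitor = GammaPoissonMonitor()
for c in [3, 4, 2, 5, 3, 4]:   # typical per-period counts on one edge
    monitor.update(c)
```

    After these updates the predictive mean is about 3.1 counts per period, so `monitor.surprise(20)` is tiny while `monitor.surprise(3)` is not; thresholding such tail probabilities is the first-stage screen, with the reduced set of flagged nodes and edges passed to the second-stage analytics.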

  1. Higher derivative theories for interacting massless gravitons in Minkowski spacetime

    NASA Astrophysics Data System (ADS)

    Bai, Dong; Xing, Yu-Hang

    2018-07-01

    We study a novel class of higher derivative theories for interacting massless gravitons in Minkowski spacetime. These theories were first discussed by Wald decades ago, and are characterized by scattering amplitudes essentially different from general relativity and many of its modifications. We discuss various aspects of these higher derivative theories, including the Lagrangian construction, violation of asymptotic causality, scattering amplitudes, non-renormalization, and possible implications in emergent gravitons from condensed matter systems.

  2. Making Better Use of Bandwidth: Data Compression and Network Management Technologies

    DTIC Science & Technology

    2005-01-01

    data, the compression would not be a success. A key feature of the Lempel-Ziv family of algorithms is that the... citeseer.nj.nec.com/yu02motion.html. Ziv, J., and A. Lempel, “A Universal Algorithm for Sequential Data Compression,” IEEE Transactions on Information Theory, Vol. 23, 1977, pp. 337–342. ...probability models – Lempel-Ziv – Prediction by partial matching. The central component of a lossless compression algorithm

  3. Attractors in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Alexandre A. P.

    2017-10-01

    In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).

  4. Attractors in complex networks.

    PubMed

    Rodrigues, Alexandre A P

    2017-10-01

    In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).

  5. External versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.

    DTIC Science & Technology

    1983-06-01

    tions. Linda is a teacher in elementary school. Linda works in a bookstore and takes Yoga classes. Linda is active in the feminist movement. (F) Linda... sophisticated group consisted of PhD students in the decision science program of the Stanford Business School, all with several advanced courses in... mind by seemingly inconsequential cues. There is a contrast worthy of note between the effectiveness of extensional cues in the health-survey

  6. Photocatalytic Conversion of Nitrobenzene to Aniline through Sequential Proton-Coupled One-Electron Transfers from a Cadmium Sulfide Quantum Dot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Stephen C.; Bettis Homan, Stephanie; Weiss, Emily A.

    2016-01-28

    This paper describes the use of cadmium sulfide quantum dots (CdS QDs) as visible-light photocatalysts for the reduction of nitrobenzene to aniline through six sequential photoinduced, proton-coupled electron transfers. At pH 3.6–4.3, the internal quantum yield of photons-to-reducing electrons is 37.1% over 54 h of illumination, with no apparent decrease in catalyst activity. Monitoring of the QD exciton by transient absorption reveals that, for each step in the catalytic cycle, the sacrificial reductant, 3-mercaptopropionic acid, scavenges the excitonic hole in ~5 ps to form QD•–; electron transfer to nitrobenzene or the intermediates nitrosobenzene and phenylhydroxylamine then occurs on the nanosecond time scale. The rate constants for the single-electron transfer reactions are correlated with the driving forces for the corresponding proton-coupled electron transfers. This result suggests, but does not prove, that electron transfer, not proton transfer, is rate-limiting for these reactions. Nuclear magnetic resonance analysis of the QD–molecule systems shows that the photoproduct aniline, left unprotonated, serves as a poison for the QD catalyst by adsorbing to its surface. Performing the reaction at an acidic pH not only encourages aniline to desorb but also increases the probability of protonated intermediates; the latter effect probably ensures that recruitment of protons is not rate-limiting.

  7. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, a method explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Robert Engle [6] is proposed for testing a nonlinear hypothesis using the iterative NLLS estimator. An alternative method, based on nonlinear studentized residuals, is also proposed, and an innovative method of testing a nonlinear hypothesis using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses, and this paper uses the asymptotic properties of the nonlinear least squares estimator given by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, studying the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
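    The generic (unmodified) Wald statistic for a nonlinear restriction g(θ) = 0 — the starting point for the modified statistics the paper discusses — is W = g(θ̂)ᵀ[G V Gᵀ]⁻¹ g(θ̂), asymptotically chi-square with dim(g) degrees of freedom, where G is the Jacobian of g and V the estimator's covariance. A sketch with a hypothetical restriction and an assumed covariance matrix (not an example from the paper):

```python
import numpy as np
from scipy import stats

def wald_test(theta_hat, cov, g, G):
    """Wald statistic W = g' (G V G')^{-1} g for H0: g(theta) = 0, with G
    the Jacobian of g evaluated at theta_hat.  Returns (W, p-value)."""
    gv = np.atleast_1d(g(theta_hat))
    Gm = np.atleast_2d(G(theta_hat))
    V = Gm @ cov @ Gm.T
    W = float(gv @ np.linalg.solve(V, gv))
    return W, float(stats.chi2.sf(W, df=len(gv)))

# Hypothetical nonlinear restriction H0: theta1 * theta2 = 1 for an NLLS
# fit with estimates (2.0, 0.5) and an assumed covariance matrix.
theta = np.array([2.0, 0.5])
cov = np.array([[0.04, 0.0], [0.0, 0.01]])
W, pval = wald_test(theta, cov,
                    g=lambda t: t[0] * t[1] - 1.0,
                    G=lambda t: np.array([t[1], t[0]]))
```

    Here the restriction holds exactly at the estimates, so W = 0 and the p-value is 1; moving the estimates off the restriction drives W up toward rejection.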

  8. The cancer care experiences of gay, lesbian and bisexual patients: A secondary analysis of data from the UK Cancer Patient Experience Survey.

    PubMed

    Hulbert-Williams, N J; Plumpton, C O; Flowers, P; McHugh, R; Neal, R D; Semlyen, J; Storey, L

    2017-07-01

    Understanding the effects of population diversity on cancer-related experiences is a priority in oncology care. Previous research demonstrates inequalities arising from variation in age, gender and ethnicity. Inequalities and sexual orientation remain underexplored. Here, we report, for the first time in the UK, a quantitative secondary analysis of the 2013 UK National Cancer Patient Experience Survey which contains 70 questions on specific aspects of care, and six on overall care experiences. 68,737 individuals responded, of whom 0.8% identified as lesbian, gay or bisexual. Controlling for age, gender and concurrent mental health comorbidity, logistic regression models applying post-estimate probability Wald tests explored response differences between heterosexual, bisexual and lesbian/gay respondents. Significant differences were found for 16 questions relating to: (1) a lack of patient-centred care and involvement in decision-making, (2) a need for health professional training and revision of information resources to negate the effects of heteronormativity and (3) evidence of substantial social isolation through cancer. These findings suggest a pattern of inequality, with less positive cancer experiences reported by lesbian, gay and (especially) bisexual respondents. Poor patient-professional communication and heteronormativity in the healthcare setting potentially explain many of the differences found. Social isolation is problematic for this group and warrants further exploration. © 2017 John Wiley & Sons Ltd.

  9. Meat consumption in São Paulo-Brazil: trend in the last decade.

    PubMed

    de Carvalho, Aline Martins; César, Chester Luiz Galvão; Fisberg, Regina Mara; Marchioni, Dirce Maria

    2014-01-01

    To characterize trends in meat consumption, and verify the percentage of excessive red and processed meat consumption in the last decade in São Paulo, Brazil. Cross-sectional weighted data from the Health Survey for São Paulo, conducted in São Paulo, Brazil among people aged 12 years and older. Diet was assessed by two 24-hour recalls in each survey. Usual meat consumption was estimated by Multiple Source Method. Wald tests were used to compare means across survey years. Data were collected from adolescents, adults, and elderly using a representative, complex, multistage probability-based survey in 2003 and in 2008 in São Paulo, southeast of Brazil. 2631 Brazilians were studied in 2003 and 1662 in 2008. Daily mean of red and processed meat consumption was 100 g/day in 2003, and 113 g/day in 2008. Excessive red and processed meat consumption was observed in almost 75% of the subjects, especially among adolescents in both surveys. Beef represented the largest proportion of meat consumed, followed by poultry, pork and fish in both surveys. Daily red and processed meat consumption was higher in 2008 than in 2003, and almost the entire population consumed more than what is recommended by World Cancer Research Fund. Public health strategies are needed, in order to reduce red and processed meat consumption to the recommended amounts, for a healthy diet.

  10. Taking the easy way out? Increasing implementation effort reduces probability maximizing under cognitive load.

    PubMed

    Schulze, Christin; Newell, Ben R

    2016-07-01

    Cognitive load has previously been found to have a positive effect on strategy selection in repeated risky choice. Specifically, whereas inferior probability matching often prevails under single-task conditions, optimal probability maximizing sometimes dominates when a concurrent task competes for cognitive resources. We examined the extent to which this seemingly beneficial effect of increased task demands hinges on the effort required to implement each of the choice strategies. Probability maximizing typically involves a simple repeated response to a single option, whereas probability matching requires choice proportions to be tracked carefully throughout a sequential choice task. Here, we flipped this pattern by introducing a manipulation that made the implementation of maximizing more taxing and, at the same time, allowed decision makers to probability match via a simple repeated response to a single option. The results from two experiments showed that increasing the implementation effort of probability maximizing resulted in decreased adoption rates of this strategy. This was the case both when decision makers simultaneously learned about the outcome probabilities and responded to a dual task (Exp. 1) and when these two aspects were procedurally separated in two distinct stages (Exp. 2). We conclude that the effort involved in implementing a choice strategy is a key factor in shaping repeated choice under uncertainty. Moreover, highlighting the importance of implementation effort casts new light on the sometimes surprising and inconsistent effects of cognitive load that have previously been reported in the literature.
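    The sense in which matching is "inferior" is simple arithmetic: with outcome probability p, always choosing the majority option is correct with probability p, while matching the outcome proportions is correct with probability p² + (1 − p)². A worked check (the 70/30 structure is a conventional illustration, not necessarily the experiments' exact values):

```python
def expected_accuracy(choose_p, outcome_p):
    """P(correct) when option A is chosen with probability choose_p and A
    is the rewarded option with probability outcome_p."""
    return choose_p * outcome_p + (1 - choose_p) * (1 - outcome_p)

outcome_p = 0.7
maximizing = expected_accuracy(1.0, outcome_p)       # always pick A: 0.70
matching = expected_accuracy(outcome_p, outcome_p)   # 0.7*0.7 + 0.3*0.3 = 0.58
```

    Maximizing dominates matching for any p ≠ 0.5, which is why the strategy a participant settles on, and the effort its implementation demands, matters so much for observed accuracy.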

  11. Multitarget tracking in cluttered environment for a multistatic passive radar system under the DAB/DVB network

    NASA Astrophysics Data System (ADS)

    Shi, Yi Fang; Park, Seung Hyo; Song, Taek Lyul

    2017-12-01

    Target tracking using multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network with illuminators of opportunity faces two main challenges. The first is that one has to resolve the measurement-to-illuminator association ambiguity in addition to the conventional association ambiguity between measurements and targets, which introduces a significantly complex three-dimensional (3-D) data association problem among targets, measurements, and illuminators; this arises because all the illuminators transmit signals at the same carrier frequency, so signals transmitted by different illuminators but reflected via the same target become indistinguishable. The second is that only bistatic range and range-rate measurements are available, while angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm that works directly in 3-D Cartesian coordinates with track management capability, using the probability of target existence as a track quality measure. The proposed algorithm, termed sequential processing-joint integrated probabilistic data association (SP-JIPDA), applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators. SP-JIPDA sequentially operates the JIPDA tracker to update each track for each illuminator with all the measurements in the common measurement set at each time. For a fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms handle nonlinear observations using extended Kalman filtering. 
A simulation study is performed to verify the superiority of the proposed SP-JIPDA algorithm over the MJIPDA in this multistatic passive radar system.

  12. Sex differences in the effect of birth order and parents' educational status on stunting: a study on Bengalee preschool children from eastern India.

    PubMed

    Biswas, Sadaruddin; Bose, Kaushik

    2010-08-01

One of the greatest problems facing developing countries, including rural India, is undernutrition in the form of stunting among children under 5 years of age. However, there is scanty information on the prevalence of stunting among preschool children in India, and in West Bengal in particular. This study investigated the prevalence of stunting and identified its predictor(s) among 1-5-year-old Bengalee rural preschool children attending Integrated Child Development Services (ICDS) centres. This cross-sectional study was undertaken at different ICDS centres of Chapra Block, Nadia District, West Bengal, India. A total of 673 preschool children (323 boys and 350 girls), aged 1-5 years, were selected from 30 randomly selected ICDS centres to study the impact of parents' educational status and child birth order on stunting. The overall (age and sex combined) rate of stunting was 39.2%. Child birth order (BO) (chi(2)=14.10, df=1, p<0.001), father's educational status (FES) (chi(2)=21.11, p<0.001) and mother's educational status (MES) (chi(2)=14.34, df=1, p<0.001) were significantly associated with the prevalence of stunting among girls. Logistic regression analyses revealed that both FES (Wald=19.97, p<0.001) and MES (Wald=13.95, p<0.001) were strong predictors of stunting among girls, as was BO (Wald=13.71, p<0.001). Girls of third or higher birth order had a significantly higher risk of stunting (OR=2.49, CI=1.54-4.03) than girls of lower birth order, and girls whose fathers or mothers had below-secondary education were at higher risk than those whose parents were educated to secondary level or above. In conclusion, our study revealed that BO as well as parents' educational status were strong predictors of stunting among girls but not among boys. Sex discrimination is a likely cause of this sex difference in the impact of BO and parents' educational status.
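The Wald statistics quoted above (e.g., Wald=19.97 for FES) are the usual squared ratio of a logistic-regression coefficient to its standard error, referred to a chi-square distribution with one degree of freedom. A minimal sketch, using a hypothetical coefficient and standard error rather than the study's values:

```python
from scipy.stats import chi2

def wald_test(beta_hat, se):
    """Wald chi-square statistic (df=1) for H0: beta = 0."""
    w = (beta_hat / se) ** 2
    p = chi2.sf(w, df=1)
    return w, p

# Hypothetical coefficient and standard error, for illustration only:
w, p = wald_test(beta_hat=0.91, se=0.24)
print(round(w, 2), p < 0.001)  # 14.38 True
```

A statistic of this size exceeds the df=1 critical value 10.83 for p=0.001, matching the "p<0.001" reporting style in the abstract.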

  13. Sequential versus simultaneous use of chemotherapy and gonadotropin-releasing hormone agonist (GnRHa) among estrogen receptor (ER)-positive premenopausal breast cancer patients: effects on ovarian function, disease-free survival, and overall survival.

    PubMed

    Zhang, Ying; Ji, Yajie; Li, Jianwei; Lei, Li; Wu, Siyu; Zuo, Wenjia; Jia, Xiaoqing; Wang, Yujie; Mo, Miao; Zhang, Na; Shen, Zhenzhou; Wu, Jiong; Shao, Zhimin; Liu, Guangyu

    2018-04-01

To investigate ovarian function and therapeutic efficacy among estrogen receptor (ER)-positive, premenopausal breast cancer patients treated with gonadotropin-releasing hormone agonist (GnRHa) and chemotherapy simultaneously or sequentially. This study was a phase 3, open-label, parallel, randomized controlled trial (NCT01712893). Two hundred sixteen premenopausal patients (under 45 years) diagnosed with invasive ER-positive breast cancer were enrolled from July 2009 to May 2013 and randomized at a 1:1 ratio to receive (neo)adjuvant chemotherapy combined with sequential or simultaneous GnRHa treatment. All patients were advised to receive GnRHa for at least 2 years. The primary outcome was the incidence of early menopause, defined as amenorrhea lasting longer than 12 months after the last chemotherapy or GnRHa dose, with postmenopausal or unknown follicle-stimulating hormone and estradiol levels. The menstruation resumption period and survival outcomes were the secondary endpoints. The median follow-up time was 56.9 months (IQR 49.5-72.4 months). One hundred and eight patients were enrolled in each group. Among them, 92 and 78 patients had complete primary endpoint data in the sequential and simultaneous groups, respectively. The rates of early menopause were 22.8% (21/92) in the sequential group and 23.1% (18/78) in the simultaneous group [simultaneous vs. sequential: OR 1.01 (95% CI 0.50-2.08); p = 0.969; age-adjusted OR 1.13; (95% CI 0.54-2.37); p = 0.737]. The median menstruation resumption period was 12.0 (95% CI 9.3-14.7) months and 10.3 (95% CI 8.2-12.4) months for the sequential and simultaneous groups, respectively [HR 0.83 (95% CI 0.59-1.16); p = 0.274; age-adjusted HR 0.90 (95%CI 0.64-1.27); p = 0.567]. No significant differences were evident for disease-free survival (p = 0.290) or overall survival (p = 0.514) between the two groups.
For ER-positive premenopausal patients, the sequential use of GnRHa and chemotherapy showed ovarian preservation and survival outcomes that were no worse than simultaneous use. The application of GnRHa can probably be delayed until menstruation resumption after chemotherapy.

  14. Effective Online Bayesian Phylogenetics via Sequential Monte Carlo with Guided Proposals

    PubMed Central

    Fourment, Mathieu; Claywell, Brian C; Dinh, Vu; McCoy, Connor; Matsen IV, Frederick A; Darling, Aaron E

    2018-01-01

Modern infectious disease outbreak surveillance produces continuous streams of sequence data which require phylogenetic analysis as data arrives. Current software packages for Bayesian phylogenetic inference are unable to quickly incorporate new sequences as they become available, making them less useful for dynamically unfolding evolutionary stories. This limitation can be addressed by applying a class of Bayesian statistical inference algorithms called sequential Monte Carlo (SMC) to conduct online inference, wherein new data can be continuously incorporated to update the estimate of the posterior probability distribution. In this article, we describe and evaluate several different online phylogenetic sequential Monte Carlo (OPSMC) algorithms. We show that proposing new phylogenies with a density similar to the Bayesian prior suffers from poor performance, and we develop “guided” proposals that better match the proposal density to the posterior. Furthermore, we show that the simplest guided proposals can exhibit pathological behavior in some situations, leading to poor results, and that the situation can be resolved by heating the proposal density. The results demonstrate that relative to the widely used MCMC-based algorithm implemented in MrBayes, the total time required to compute a series of phylogenetic posteriors as sequences arrive can be significantly reduced by the use of OPSMC, without incurring a significant loss in accuracy. PMID:29186587
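The core SMC update the abstract builds on (propose, reweight by likelihood, resample) can be illustrated on a toy state-space model rather than phylogenetic trees. The sketch below tracks a 1-D Gaussian random walk; the model, noise scales, and data stream are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_step(particles, y, obs_sd=1.0, step_sd=0.5):
    """One online SMC update: propose, reweight by likelihood, resample."""
    particles = particles + rng.normal(0.0, step_sd, size=particles.shape)  # proposal
    logw = -0.5 * ((y - particles) / obs_sd) ** 2                           # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)              # multinomial resampling
    return particles[idx]

particles = rng.normal(0.0, 1.0, size=2000)   # prior sample
for y in [0.2, 0.4, 0.9, 1.3]:                # observations arriving online
    particles = smc_step(particles, y)
print(len(particles), 0.0 < particles.mean() < 2.0)
```

Each new observation updates the particle approximation of the posterior without revisiting earlier data, which is the property OPSMC exploits for streaming sequence data.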

  15. Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression.

    PubMed

    Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M

    2015-01-01

Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (75.2% versus the baseline's 2.7%, p<0.001). This method also showed a statistically significantly improved recall rate compared to a baseline method that used only two vessel cues (67.7% versus the baseline's 30.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.

  16. Performance Analysis of Ranging Techniques for the KPLO Mission

    NASA Astrophysics Data System (ADS)

    Park, Sungjoon; Moon, Sangman

    2018-03-01

    In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique incorporating all sources of errors such as from ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant factor of accuracy error is independent of the thermal noise of the ranging technique being used in the system. Based on the understanding of ranging accuracy, the measurement time of PN and sequential ranging are further investigated and compared, while both techniques satisfied the accuracy and acquisition requirements. We demonstrated that PN ranging performed better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.

  17. Weakly charged generalized Kerr-NUT-(A)dS spacetimes

    NASA Astrophysics Data System (ADS)

    Frolov, Valeri P.; Krtouš, Pavel; Kubizňák, David

    2017-08-01

We find an explicit solution of the source-free Maxwell equations in a generalized Kerr-NUT-(A)dS spacetime in all dimensions. This solution is obtained as a linear combination of the closed conformal Killing-Yano tensor h_ab, which is present in such a spacetime, and a derivative of the primary Killing vector associated with h_ab. For vanishing cosmological constant the obtained solution reduces to Wald's electromagnetic field generated from the primary Killing vector.

  18. First Law for fields with Internal Gauge Freedom

    NASA Astrophysics Data System (ADS)

    Prabhu, Kartik

    2016-03-01

We extend the analysis of Iyer and Wald to derive the First Law of black hole mechanics in the presence of fields charged under an `internal gauge group'. We treat diffeomorphisms and gauge transformations in a unified way by formulating the theory on a principal bundle. The first law then relates the energy and angular momentum at infinity to a potential-times-charge term at the horizon. The gravitational potential and charge give a notion of temperature and entropy, respectively.

  19. Growth rate for black hole instabilities

    NASA Astrophysics Data System (ADS)

    Prabhu, Kartik; Wald, Robert

    2015-04-01

Hollands and Wald showed that the dynamic stability of stationary, axisymmetric black holes is equivalent to the positivity of canonical energy on a space of linearised axisymmetric perturbations satisfying certain boundary and gauge conditions. Using a reflection isometry of the background, we split the energy into kinetic and potential parts. We show that the kinetic energy is positive. When the potential energy is negative, we show the existence of exponentially growing perturbations and further obtain a variational formula for the growth rate.

  20. Physical process first law and increase of horizon entropy for black holes in Einstein-Gauss-Bonnet gravity.

    PubMed

    Chatterjee, Ayan; Sarkar, Sudipta

    2012-03-02

We establish the physical process version of the first law by studying small perturbations of a stationary black hole with a regular bifurcation surface in Einstein-Gauss-Bonnet gravity. Our result shows that when the stationary black hole is perturbed by a matter stress-energy tensor and finally settles down to a new stationary state, the Wald entropy increases as long as the matter satisfies the null energy condition.

  1. Fatigue-life distributions for reaction time data.

    PubMed

    Tejo, Mauricio; Niklitschek-Soto, Sebastián; Marmolejo-Ramos, Fernando

    2018-06-01

    The family of fatigue-life distributions is introduced as an alternative model of reaction time data. This family includes the shifted Wald distribution and a shifted version of the Birnbaum-Saunders distribution. Although the former has been proposed as a way to model reaction time data, the latter has not. Hence, we provide theoretical, mathematical and practical arguments in support of the shifted Birnbaum-Saunders as a suitable model of simple reaction times and associated cognitive mechanisms.
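The shifted Wald is an inverse-Gaussian first-passage-time density displaced by a nonnegative shift t0. A sketch of its density using SciPy's invgauss parameterization; the parameter names (drift nu, threshold alpha, shift t0) and values here are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import invgauss

def shifted_wald_pdf(t, nu, alpha, t0):
    """Density of a shifted Wald (inverse Gaussian) reaction-time model.
    Mean of the unshifted part is alpha/nu; shape parameter is alpha**2."""
    mu = alpha / nu        # mean of the unshifted Wald
    lam = alpha ** 2       # shape parameter lambda
    # scipy's invgauss(mu', scale=s) has mean mu'*s and shape s,
    # so mu' = mu/lam with scale=lam gives mean mu and shape lam.
    return invgauss.pdf(t, mu / lam, loc=t0, scale=lam)

t = np.linspace(0.2, 2.0, 500)                       # reaction times in seconds
dens = shifted_wald_pdf(t, nu=3.0, alpha=1.0, t0=0.18)
print(dens.max() > 0)
```

The shift t0 plays the role of a non-decisional component (encoding plus motor time), which is why the shifted rather than plain Wald is used for simple reaction times.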

  2. California quake assessed

    NASA Astrophysics Data System (ADS)

    Wuethrich, Bernice

    On January 17, at 4:31 A.M., a 6.6 magnitude earthquake hit the Los Angeles area, crippling much of the local infrastructure and claiming 51 lives. Members of the Southern California Earthquake Network, a consortium of scientists at universities and the United States Geological Survey (USGS), entered a controlled crisis mode. Network scientists, including David Wald, Susan Hough, Kerry Sieh, and a half dozen others went into the field to gather information on the earthquake, which apparently ruptured an unmapped fault.

  3. Advanced Research Projects Agency - Energy (ARPA-E): Background, Status, and Selected Issues for Congress

    DTIC Science & Technology

    2009-04-29

    in 2007. It effectively began operation in February 2008 when its first director, Lisa Porter, began to manage the organization. IARPA is considered...47 Personal Communication with Lisa Porter, Director, IARPA, January 23, 2009. Sally Adde, “Q&A With: IARPA Director Lisa Porter,” IEEE...continued) 109-39 (Washington: GPO, 2006). 50 John M. Broder and Matthew L. Wald , “Big Science Role Is Seen in Global Warming Cure,” New

  4. Acid Extraction - Ion Exchange Recovery of Cinchona Alkaloids Process and Plant Development

    DTIC Science & Technology

    1945-06-08

    of commercial vinegar for a period of three days. The maceration was followed by successive percola- tions with the same material. A yield of...16.2 grams out of a possible 25 was effected for an efficiency of 72 percent. The extraction with commflrcial vinegar was conducted at the request...Program by Lt. Ronsone 75 The Sulfur ic Acid Extraction of Remijia Bark, by Dr. Arthur W. Walde 85 Preliminary Vinegar and Acetic Acid Extraction

  5. Grindr, Scruff, and on the Hunt: Predictors of Condomless Anal Sex, Internet Use, and Mobile Application Use Among Men Who Have Sex With Men.

    PubMed

    Whitfield, Darren L; Kattari, Shanna K; Walls, N Eugene; Al-Tayyib, Alia

    2017-05-01

In 2016, gay, bisexual, and other men who have sex with men (MSM) comprised more than half of all new HIV diagnoses in the United States, with the primary mode of infection being condomless anal sex (CAS). While studies report an association between use of Internet-based social networking sites and increased CAS, the research on the relationship between cell phone mobile applications (e.g., Grindr, Scruff, Jack'd) and CAS is much less developed. The present study examines whether the manner in which gay, bisexual, and other MSM find sexual partners predicts an increase in likelihood of engaging in CAS in an urban, noncoastal U.S. city. Conducting a secondary data analysis of the 2011 National HIV Behavioral Surveillance survey for Denver (N = 546), the authors performed binary logistic regression analyses to assess the models that predict how MSM find sexual partners, and the odds of engaging in CAS. While the results suggest that age and race are associated with the mode of finding sexual partners, using the Internet or a mobile app to find sexual partners was not predictive of CAS (Z Wald = .41, p = .52; Z Wald = .80, p = .37). In terms of HIV prevention, these findings suggest a need for intervention to address HIV prevention on multiple levels (e.g., individual, group, community).

  6. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task switch experiment based on a within-subjects experimental design with 32 cells and 33 participants.
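The naive percentile bootstrap mentioned above resamples the data with replacement and reads confidence limits directly off the quantiles of the resampled statistics. A minimal sketch on synthetic data; the sample, statistic, and sizes are illustrative, not the task-switch data:

```python
import numpy as np

rng = np.random.default_rng(42)

def percentile_bootstrap_ci(x, stat=np.mean, n_boot=5000, alpha=0.05):
    """Naive percentile bootstrap CI: resample with replacement,
    recompute the statistic, and take the alpha/2 and 1-alpha/2 quantiles."""
    boots = np.array([stat(rng.choice(x, size=len(x), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi

x = rng.normal(loc=1.0, scale=2.0, size=30)  # a small sample, as in the paper's setting
lo, hi = percentile_bootstrap_ci(x)
print(lo < x.mean() < hi)
```

The bias-corrected and accelerated (BCa) variant the paper also considers adjusts these quantiles for bias and skewness; the plain percentile version above is the simpler fallback proposed for Wald-type tests.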

  7. The Association of Serum Levels of Brain-Derived Neurotrophic Factor with the Occurrence of and Recovery from Delirium in Older Medical Inpatients.

    PubMed

    Williams, John; Finn, Karen; Melvin, Vincent; Meagher, David; McCarthy, Geraldine; Adamis, Dimitrios

    2017-01-01

    Limited studies of the association between BDNF levels and delirium have given inconclusive results. This prospective, longitudinal study examined the relationship between BDNF levels and the occurrence of and recovery from delirium. Participants were assessed twice weekly using MoCA, DRS-R98, and APACHE II scales. BDNF levels were estimated using an ELISA method. Delirium was defined with DRS-R98 (score > 16) and recovery from delirium as ≥2 consecutive assessments without delirium prior to discharge. We identified no difference in BDNF levels between those with and without delirium. Excluding those who never developed delirium ( n = 140), we examined the association of BDNF levels and other variables with delirium recovery. Of 58 who experienced delirium, 39 remained delirious while 19 recovered. Using Generalized Estimating Equations models we found that BDNF levels (Wald χ 2 = 7.155; df: 1, p = 0.007) and MoCA (Wald χ 2 = 4.933; df: 1, p = 0.026) were associated with recovery. No significant association was found for APACHE II, dementia, age, or gender. BDNF levels do not appear to be directly linked to the occurrence of delirium but recovery was less likely in those with continuously lower levels. No previous study has investigated the role of BDNF in delirium recovery and these findings warrant replication in other populations.

  8. Alcohol consumption and the risk of type 2 diabetes mellitus: effect modification by hypercholesterolemia: the Third Korea National Health and Nutrition Examination Survey (2005).

    PubMed

    Jang, Hyeongap; Jang, Won-Mo; Park, Jong-Heon; Oh, Juhwan; Oh, Mu-Kyung; Hwang, Soo-Hee; Kim, Yong-Ik; Lee, Jin-Seok

    2012-01-01

    While the protective nature of moderate alcohol consumption against diabetes mellitus is well known, inconsistent findings continue to be reported. The possibility of different mixes of effect modifiers has been raised as a reason for those inconsistent findings. Our study aim was to examine potential effect modifiers that can change the effect of alcohol consumption on type 2 diabetes. From data in the third Korea National Health and Nutrition Examination Survey, 3,982 individuals over the age of 30 years who had not been diagnosed with diabetes were selected for inclusion in the study population. Breslow and Day's test and the Wald test between hypercholesterolemia and alcohol consumption in a multiple logistic regression model were used to assess effect modification. Odds ratios for diabetes stratified by alcohol consumption strata and assessed using Breslow and Day's tests for homogeneity indicated that hypercholesterolemia was not a significant confounding factor (p=0.01). However, the Wald test for interaction terms, which is a conservative method of effect modification, was significant (p=0.03). The results indicate that moderate alcohol consumption is not necessarily protective for type 2 diabetes mellitus, if a person has hypercholesterolemia. People who have hypercholesterolemia should be aware of the risk associated with alcohol consumption, a risk that contrasts with the reported protective effect of moderate alcohol consumption on diabetes.

  9. Density profiles of the exclusive queuing process

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Schadschneider, Andreas

    2012-12-01

    The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
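The EQP dynamics can be mimicked with a loose discrete-time sketch: customers enter behind the last customer with probability α, hop toward the front under exclusion with probability p, and leave service with probability β. This is a simplification for illustration, not any of the paper's precise update rules (parallel, backward sequential, or continuous time):

```python
import random

random.seed(1)

def eqp_step(config, alpha, beta, p):
    """One sweep of a simplified exclusive queuing process.
    config[0] is the front (service) site; the queue grows at the right end."""
    # Departure: the customer at the front leaves with probability beta.
    if config and config[0] == 1 and random.random() < beta:
        config[0] = 0
    # Exclusion hopping toward the front; scanning in the direction of
    # motion ensures each particle moves at most once per sweep.
    for i in range(1, len(config)):
        if config[i] == 1 and config[i - 1] == 0 and random.random() < p:
            config[i], config[i - 1] = 0, 1
    # Arrival: a new customer joins immediately behind the current last one.
    if random.random() < alpha:
        config.append(1)
    # The system length ends at the last customer; trim trailing holes.
    while config and config[-1] == 0:
        config.pop()
    return config

config = [1]
for _ in range(2000):
    config = eqp_step(config, alpha=0.6, beta=0.3, p=0.8)
print(len(config) > 100)  # alpha > beta puts this run in the divergent phase
```

With arrivals outpacing departures the queue length grows roughly linearly in time, the divergent phase whose subphases the paper classifies by density profile.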

  10. Time-course variation of statistics embedded in music: Corpus study on implicit learning and knowledge.

    PubMed

    Daikoku, Tatsuya

    2018-01-01

Learning and knowledge of transitional probability in sequences like music, called statistical learning and knowledge, are considered implicit processes that occur without intention to learn and without awareness of what one knows. This implicit statistical knowledge can alternatively be expressed via an abstract medium such as musical melody, which suggests this knowledge is reflected in melodies written by a composer. This study investigates how statistics in music vary over a composer's lifetime. Transitional probabilities of highest-pitch sequences in Ludwig van Beethoven's Piano Sonatas were calculated based on different hierarchical Markov models. Each interval pattern was ordered based on the sonata opus number. The transitional probabilities of sequential patterns that are universal in music gradually decreased, suggesting that time-course variations of statistics in music reflect time-course variations of a composer's statistical knowledge. This study sheds new light on novel methodologies that may be able to evaluate the time-course variation of a composer's implicit knowledge using musical scores.
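Transitional probabilities of the kind computed for the sonata pitch sequences are simply conditional frequencies of one symbol following another. A first-order sketch on a toy pitch sequence (the melody below is invented, not Beethoven's):

```python
from collections import Counter, defaultdict

def transition_probabilities(seq):
    """First-order Markov transition probabilities P(next | current)."""
    pair_counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        pair_counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in pair_counts.items()}

# Toy pitch sequence (MIDI note numbers), purely illustrative:
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]
probs = transition_probabilities(melody)
print(probs[62][64])  # 62 is followed by 64 in 2 of its 3 continuations
```

The paper's higher-order hierarchical Markov models condition on longer contexts (pairs, triples, ...) in the same way, with the context tuple as the dictionary key.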

  11. Generation of intervention strategy for a genetic regulatory network represented by a family of Markov Chains.

    PubMed

    Berlow, Noah; Pal, Ranadip

    2011-01-01

Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov Chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank one perturbation.
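The steady-state distribution being targeted by such an intervention is the left eigenvector of the transition matrix for eigenvalue 1. A sketch with a toy 3-state chain (the matrices are illustrative, not inferred from data), recomputing the stationary distribution after a single-row intervention, which is exactly a rank-one perturbation of the transition matrix:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi solving pi P = pi, via the left eigenvector."""
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))   # eigenvalue closest to 1
    pi = np.real(vecs[:, k])
    return pi / pi.sum()                # normalize (also fixes the sign)

# Toy 3-state transition matrix (illustrative, not from the paper):
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = stationary_distribution(P)

# An intervention that edits one row of P is a rank-one perturbation;
# here we redirect state 2 toward state 0 and recompute pi.
P2 = P.copy()
P2[2] = [0.6, 0.2, 0.2]
pi2 = stationary_distribution(P2)
print(pi2[0] > pi[0])  # steady-state mass shifts toward the desired state
```

The paper's efficient update formulas avoid this full recomputation for each candidate policy; the brute-force version above just makes the effect of the perturbation visible.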

  12. [The possibilities for determining the passenger position inside the car passenger compartment based on the injuries to the extremities estimated with the use of the sequential mathematical analysis].

    PubMed

    Smirenin, S A; Khabova, Z S; Fetisov, V A

    2015-01-01

The objective of the present study was to determine the diagnostic coefficients (DC) of injuries to the upper and lower extremities of the passengers inside the car passenger compartment based on the analysis of 599 archival expert documents available from 45 regional state bureaus of forensic medical examination of the Russian Federation for the period from 1995 to 2014. These materials included the data obtained by the examination of 200 corpses and 300 live persons involved in traffic accidents. The statistical and mathematical treatment of these materials with the use of the sequential analysis method based on the Bayes and Wald formulas yielded the diagnostic coefficients that made it possible to identify the most important signs characterizing the risk of injuries for the passenger occupying the front seat of the vehicle. In the case of a lethal outcome, such injuries include fractures of the right femur (DC -8.9), bleeding (DC -7.1), wounds in the soft tissues of the right thigh (DC -5.0) with the injurious force applied to its anterior surface, bruises on the posterior surface of the right shoulder (DC -6.2), the right deltoid region (DC -5.9), and the posterior surface of the right forearm (DC -5.5), fractures of the right humerus (DC -5.), etc. When both the driver and the passengers survive, the most informative signs in the latter are bleeding and scratches (DC -14.5 and 11.5 respectively) in the soft tissues at the posterior surface of the right shoulder, fractures of the right humerus (DC -10.0), bruises on the anterior surface of the right thigh (DC -13.0), the posterior surface of the right forearm (DC -10.0) and the frontal region of the right lower leg (DC -10.0), bleeding in the posterior region of the right forearm (DC -9.0) and the anterior region of the left thigh (DC -8.6), fractures of the right femur (DC -8.1), etc.
It is concluded that knowledge of the diagnostic coefficients helps draw the experts' attention to the analysis of the above morphological signs for the objective determination of the passenger position inside the car passenger compartment during traffic accidents, and thereby to improve the quality of expert conclusions and the results of forensic medical examination of the injuries inflicted in car crashes.
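Diagnostic coefficients of this kind follow the Wald sequential-analysis tradition, in which DC = 10·log10 of a likelihood ratio and evidence is summed sign by sign until a decision threshold is crossed. A hedged sketch with hypothetical sign frequencies and a conventional ±13 threshold; none of these numbers are the study's values:

```python
import math

def diagnostic_coefficient(p_sign_given_a, p_sign_given_b):
    """Wald-style diagnostic coefficient: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(p_sign_given_a / p_sign_given_b)

def sequential_decision(dcs, threshold=13.0):
    """Sum DCs for the observed signs until a Wald-type threshold is crossed.
    +threshold decides for hypothesis A, -threshold for B; else undecided."""
    total = 0.0
    for dc in dcs:
        total += dc
        if total >= threshold:
            return "A", total
        if total <= -threshold:
            return "B", total
    return "undecided", total

# Hypothetical sign frequencies under the two positions, for illustration:
dc1 = diagnostic_coefficient(0.05, 0.40)   # a sign far more common under B
decision, score = sequential_decision([dc1, -5.0, -3.5])
print(decision)  # "B": the accumulated evidence crosses the lower threshold
```

Negative DCs, as in the abstract, accumulate evidence for one of the two seating positions; the sequential stopping rule is what keeps the number of signs that must be examined small.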

  13. Scaling in tournaments

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Redner, S.; Vazquez, F.

    2007-02-01

We study a stochastic process that mimics single-game elimination tournaments. In our model, the outcome of each match is stochastic: the weaker player wins with upset probability q <= 1/2, and the stronger player wins with probability 1 - q. The loser is eliminated. Extremal statistics of the initial distribution of player strengths governs the tournament outcome. For a uniform initial distribution of strengths, the rank of the winner, x*, decays algebraically with the number of players, N, as x* ~ N^(-beta). Different decay exponents are found analytically for sequential dynamics, beta_seq = 1 - 2q, and parallel dynamics, beta_par = 1 + ln(1 - q)/ln 2. The distribution of player strengths becomes self-similar in the long-time limit with an algebraic tail. Our theory successfully describes the statistics of the US college basketball national championship tournament.
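The model is easy to simulate directly: pair the players at random, let the weaker one win each match with probability q, and record the champion's rank. The sketch below uses invented parameters (N = 256, q = 0.25) and only checks qualitatively that upsets still leave the champion near the top of the field:

```python
import random

random.seed(7)

def tournament(strengths, q):
    """Single-elimination tournament: in each match the weaker player wins
    with upset probability q. Returns the champion's strength value."""
    players = strengths[:]
    random.shuffle(players)               # random bracket
    while len(players) > 1:
        nxt = []
        for a, b in zip(players[::2], players[1::2]):
            strong, weak = (a, b) if a > b else (b, a)
            nxt.append(weak if random.random() < q else strong)
        players = nxt
    return players[0]

N, q, runs = 256, 0.25, 300
ranks = []
for _ in range(runs):
    strengths = list(range(N))            # uniform strengths, larger = stronger
    w = tournament(strengths, q)
    ranks.append(N - w)                   # rank 1 means the strongest player won
print(sum(ranks) / runs < N / 4)          # upsets keep the champion near the top
```

With q = 0.25 the parallel-dynamics exponent beta_par = 1 + ln(0.75)/ln 2 is about 0.58, so the typical champion rank grows only sublinearly in N, consistent with the small mean rank the simulation produces.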

  14. More heads choose better than one: Group decision making can eliminate probability matching.

    PubMed

    Schulze, Christin; Newell, Ben R

    2016-06-01

    Probability matching is a robust and common failure to adhere to normative predictions in sequential decision making. We show that this choice anomaly is nearly eradicated by gathering individual decision makers into small groups and asking the groups to decide. The group choice advantage emerged both when participants generated responses for an entire sequence of choices without outcome feedback (Exp. 1a) and when participants made trial-by-trial predictions with outcome feedback after each decision (Exp. 1b). We show that the dramatic improvement observed in group settings stands in stark contrast to a complete lack of effective solitary deliberation. These findings suggest a crucial role of group discussion in alleviating the impact of hasty intuitive responses in tasks better suited to careful deliberation.
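The normative gap that probability matching creates is easy to quantify: if one outcome occurs with probability p, matching one's predictions to the outcome rates yields expected accuracy p^2 + (1-p)^2, while always predicting the more likely outcome yields max(p, 1-p). A minimal illustration:

```python
# Expected accuracy in a binary prediction task where one outcome
# occurs with probability p.
def matching_accuracy(p):
    """Predict each outcome at its own base rate (probability matching)."""
    return p * p + (1 - p) * (1 - p)

def maximizing_accuracy(p):
    """Always predict the more likely outcome (the normative strategy)."""
    return max(p, 1 - p)

p = 0.7
print(round(matching_accuracy(p), 2))   # 0.58
print(maximizing_accuracy(p))           # 0.7
```

The 12-point gap at p = 0.7 is the cost of the "hasty intuitive responses" that the group-discussion manipulation in the experiments largely eliminated.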

  15. Small-Scale Spatio-Temporal Distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) Using Probability Kriging.

    PubMed

    Wang, S Q; Zhang, H Y; Li, Z L

    2016-10-01

Understanding the spatio-temporal distribution of pests in orchards can provide important information for designing monitoring schemes and establishing better means of pest control. In this study, the spatial and temporal distribution of Bactrocera minax (Enderlein) (Diptera: Tephritidae) was assessed, and activity trends were evaluated, by using probability kriging. Adults of B. minax were captured during two successive occurrence periods in a small-scale citrus orchard by using food bait traps, which were placed both inside and outside the orchard. The weekly spatial distribution of B. minax within the orchard and adjacent woods was examined using semivariogram parameters. Edge concentration was observed during most weeks of adult occurrence, and the adult population aggregated with high probability within a band less than 100 m wide on both sides of the boundary between the orchard and the woods. The sequential probability kriged maps showed that the adults were estimated in the marginal zone with higher probability, especially in the early and peak stages. The feeding, ovipositing, and mating behaviors of B. minax are possible explanations for these spatio-temporal patterns. Therefore, the spatial arrangement of traps or spraying spots, and their distance to the forest edge, should be considered to enhance control of B. minax in small-scale orchards.
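Kriging starts from the empirical semivariogram: half the mean squared difference between values at pairs of locations, binned by separation distance. A sketch of the classical (Matheron) estimator on synthetic trap data; the coordinates, counts, and bin edges are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_semivariogram(coords, values, bins):
    """Classical (Matheron) estimator: gamma(h) = half the mean squared
    value difference over point pairs whose separation falls in each bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)       # each pair counted once
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (d >= lo) & (d < hi)
        gamma.append(0.5 * sq[m].mean() if m.any() else np.nan)
    return np.array(gamma)

# Synthetic trap counts on a 100 m x 100 m plot (illustrative only):
coords = rng.uniform(0, 100, size=(80, 2))
values = np.sin(coords[:, 0] / 30) + 0.1 * rng.normal(size=80)
gamma = empirical_semivariogram(coords, values, bins=np.arange(0, 60, 10))
print(np.all(gamma >= 0))
```

A variogram model fitted to these binned estimates supplies the spatial-correlation parameters that probability kriging then uses to interpolate occurrence probabilities between traps.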

  16. Multinomial Logistic Regression & Bootstrapping for Bayesian Estimation of Vertical Facies Prediction in Heterogeneous Sandstone Reservoirs

    NASA Astrophysics Data System (ADS)

    Al-Mudhafar, W. J.

    2013-12-01

Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to identify the spatial facies distribution accurately, supporting an accurate reservoir model for optimal future reservoir performance. In this paper, facies estimation was carried out through multinomial logistic regression (MLR) with respect to well log and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability. First, a robust sequential imputation algorithm was used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The observation is then added to the complete data matrix and the algorithm continues with the next observation containing missing values. MLR was chosen to maximize the likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. MLR predicts the probabilities of the different possible facies given the independent variables by constructing a linear predictor function, a set of weights combined linearly with the independent variables through a dot product. A beta distribution of facies was taken as prior knowledge, and the resulting predicted (posterior) probability was estimated from the MLR based on Bayes' theorem, which relates the posterior probability to the conditional probability and the prior knowledge.
To assess the statistical accuracy of the model, the bootstrap was carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets on which the model is re-fitted, decreasing the squared difference between the estimated and observed categorical variable (facies) and thereby reducing the degree of uncertainty.
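The bootstrap scheme described in this record (resample the training set with replacement, refit, and average the prediction error) can be sketched as follows. The data, class means, and the nearest-centroid stand-in classifier are all hypothetical illustrations, not the paper's MLR model or its well data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 3 facies classes, 2 log-derived predictors,
# 30 samples per class around separated class means.
X = rng.normal(size=(90, 2)) + np.repeat(np.eye(3, 2) * 3.0, 30, axis=0)
y = np.repeat(np.arange(3), 30)

def fit_centroids(X, y):
    """Stand-in classifier: per-class centroids (not the paper's MLR)."""
    return np.array([X[y == k].mean(axis=0) for k in np.unique(y)])

def predict(centroids, X):
    # Assign each sample to the nearest class centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def bootstrap_error(X, y, n_boot=200):
    """Estimate extra-sample misclassification error by refitting the
    model on bootstrap resamples and scoring each refit on the full data."""
    n = len(y)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # draw with replacement
        c = fit_centroids(X[idx], y[idx])  # refit on the resample
        errs.append((predict(c, X) != y).mean())
    return float(np.mean(errs))

err = bootstrap_error(X, y)
```

With well-separated classes the averaged bootstrap error stays small; in practice the refitted model would be the multinomial logistic regression and the resamples would come from the cored intervals.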

  17. Are Standard Diagnostic Test Characteristics Sufficient for the Assessment of Continual Patient Monitoring?

    DTIC Science & Technology

    2013-02-01

    leukocyte esterase in the diagnosis of urinary tract infection may be higher in patients of an underserved population, who tend to receive evaluation...Crit Care. 2001;5(4):184–8. 4. Lawless ST. Crying wolf: false alarms in a pediatric intensive care unit. Crit Care Med. 1994;22:981–5. 5. Wald A...after traumatic inju- ries: a predictor of mortality? J Emerg Med. 2003;25:175–9. 22. Lipsky AM, Gausche-Hill M, Henneman PL, et al. Prehospital

  18. Proceedings of an AAAS Symposium on January 8, 1980: How Much does the Defense Department Advance Science?

    DTIC Science & Technology

    1980-09-24

    DEFENSE OF FREEDOM 18 Edward Teller DANGERS OF USING SCIENCE FOR THE ARMS BUSINESS IN A CORPORATE STATE 21 George Wald DISCUSSION . 25...many other questions. Does the Don’s basic-research program oversupport research which is oriented to the solution of applied problems? Has DOD...long-range goal of being technologically superior in the world. We would be very short-sighted if our program was directed only to the solution of

  19. Traumatic Brain Injury Screening: Preliminary Findings in a US Army Brigade Combat Team

    DTIC Science & Technology

    2009-01-01

    Screening: Preliminary Findings in a US Army Brigade Combat Team Heidi Terrio, MD, MPH; Lisa A. Brenner, PhD; Brian J. Ivins, MS; John M. Cho, MD; Katherine...Sheila Saliman and Lisa Betthauser is greatly appreciated. Corresponding author: Heidi Terrio, MD, MPH, 1853 O’Connell Blvd, Bldg 1042, Room 107, Fort...more mild TBI symptoms among injured soldiers with and without TBI (n = 1208)∗,† Parameter Adjusted 95% CI adjusted Variable estimate (β) SE β Wald P

  20. Cosmic censorship and Weak Gravity Conjecture in the Einstein-Maxwell-dilaton theory

    NASA Astrophysics Data System (ADS)

    Yu, Ten-Yeh; Wen, Wen-Yu

    2018-06-01

We explore cosmic censorship in the Einstein-Maxwell-dilaton theory, following Wald's thought experiment of destroying a black hole by throwing in a test particle. We find that, in the probe limit, the extremal charged dilaton black hole can be destroyed by a test particle of specific energy. Nevertheless, the censorship is well protected once backreaction or self-force is included. Finally, we discuss an interesting connection between the Hoop Conjecture and the Weak Gravity Conjecture.

  1. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. ?? 2008 ASCE.

  2. GENERAL A Hierarchy of Compatibility and Comeasurability Levels in Quantum Logics with Unique Conditional Probabilities

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    2010-12-01

    In the quantum mechanical Hilbert space formalism, the probabilistic interpretation is a later ad-hoc add-on, more or less enforced by the experimental evidence, but not motivated by the mathematical model itself. A model involving a clear probabilistic interpretation from the very beginning is provided by the quantum logics with unique conditional probabilities. It includes the projection lattices in von Neumann algebras and here probability conditionalization becomes identical with the state transition of the Lüders-von Neumann measurement process. This motivates the definition of a hierarchy of five compatibility and comeasurability levels in the abstract setting of the quantum logics with unique conditional probabilities. Their meanings are: the absence of quantum interference or influence, the existence of a joint distribution, simultaneous measurability, and the independence of the final state after two successive measurements from the sequential order of these two measurements. A further level means that two elements of the quantum logic (events) belong to the same Boolean subalgebra. In the general case, the five compatibility and comeasurability levels appear to differ, but they all coincide in the common Hilbert space formalism of quantum mechanics, in von Neumann algebras, and in some other cases.

  3. Goodness of fit of probability distributions for sightings as species approach extinction.

    PubMed

    Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael

    2009-04-01

    Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
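The probability plot correlation coefficient (PPCC) statistic used in this record can be sketched as the correlation between the ordered sample and the model's quantiles at plotting positions. The sighting years below are hypothetical, and the uniform-model quantiles are anchored to the sample minimum and maximum as a rough location-scale estimate, which is a simplification of the paper's procedure.

```python
import numpy as np

def ppcc_uniform(data):
    """PPCC against a uniform model: correlation between the ordered
    sample and uniform quantiles at Weibull plotting positions i/(n+1)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1)           # plotting positions
    q = x.min() + p * (x.max() - x.min())       # uniform-model quantiles
    return float(np.corrcoef(x, q)[0, 1])

# Hypothetical sighting years: roughly evenly spread, so the uniform
# model should fit well and the PPCC should be near 1.
years = np.array([1901, 1905, 1912, 1918, 1926, 1931, 1940, 1948, 1955])
r = ppcc_uniform(years)
```

A PPCC close to 1 indicates the sighting record is consistent with the hypothesized distribution; the hypothesis test compares the observed PPCC against its null sampling distribution.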

  4. Bayesian randomized clinical trials: From fixed to adaptive design.

    PubMed

    Yin, Guosheng; Lam, Chi Kin; Shi, Haolun

    2017-08-01

Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have been dominating phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complementary rather than competitive approach to the frequentist methods. For the fixed Bayesian design, the hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which would lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Detecting Signals of Disproportionate Reporting from Singapore's Spontaneous Adverse Event Reporting System: An Application of the Sequential Probability Ratio Test.

    PubMed

    Chan, Cheng Leng; Rudrappa, Sowmya; Ang, Pei San; Li, Shu Chuen; Evans, Stephen J W

    2017-08-01

The ability to detect safety concerns from spontaneous adverse drug reaction reports in a timely and efficient manner remains important in public health. This paper explores the behaviour of the Sequential Probability Ratio Test (SPRT) and its ability to detect signals of disproportionate reporting (SDRs) in the Singapore context. We used the SPRT with a combination of two hypothesised relative risks (hRRs) of 2 and 4.1 to detect signals of both common and rare adverse events in our small database. We compared the SPRT with other methods in terms of the number of signals detected, and whether labelled adverse drug reactions were detected or the reaction terms were considered serious. The other methods used were the reporting odds ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN) and Gamma Poisson Shrinker (GPS). The SPRT produced 2187 signals in common with all methods, 268 unique signals, and 70 signals in common with at least one other method; it did not produce signals in 178 cases where two other methods detected them, and 403 signals were unique to one of the other methods. In terms of sensitivity, the ROR performed better than the other methods, but the SPRT found more new signals. The performances of the methods were similar for negative predictive value and specificity. Using a combination of hRRs for the SPRT could be a useful screening tool for regulatory agencies, and more detailed investigation of the medical utility of the system is merited.
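A minimal Wald SPRT of the kind this record applies can be sketched on a Poisson report count: the null hypothesis is the background reporting rate, the alternative inflates it by a hypothesised relative risk, and the log-likelihood ratio is compared against Wald's thresholds. The counts, α, β, and the hRR of 2 below are illustrative only.

```python
import math

def sprt(observed, expected, h_rr=2.0, alpha=0.05, beta=0.2):
    """Wald SPRT on a Poisson count.

    H0: rate = expected; H1: rate = h_rr * expected.
    Returns 'signal' (accept H1), 'no signal' (accept H0), or 'continue'.
    """
    # Poisson log-likelihood ratio: n*log(hRR) - (hRR - 1)*expected
    llr = observed * math.log(h_rr) - (h_rr - 1.0) * expected
    upper = math.log((1 - beta) / alpha)   # cross upward -> raise a signal
    lower = math.log(beta / (1 - alpha))   # cross downward -> no signal
    if llr >= upper:
        return "signal"
    if llr <= lower:
        return "no signal"
    return "continue"

# Illustrative cumulative data: 12 reports observed against 3.1 expected.
decision = sprt(observed=12, expected=3.1)
```

Because the thresholds depend only on α and β, the test can be re-applied as reports accrue, which is what makes it attractive for continuous screening of a spontaneous reporting database.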

  6. When is Pharmacogenetic Testing for Antidepressant Response Ready for the Clinic? A Cost-effectiveness Analysis Based on Data from the STAR*D Study

    PubMed Central

    Perlis, Roy H.; Patrick, Amanda; Smoller, Jordan W.; Wang, Philip S.

    2009-01-01

    The potential of personalized medicine to transform the treatment of mood disorders has been widely touted in psychiatry, but has not been quantified. We estimated the costs and benefits of a putative pharmacogenetic test for antidepressant response in the treatment of major depressive disorder (MDD) from the societal perspective. Specifically, we performed cost-effectiveness analyses using state-transition probability models incorporating probabilities from the multicenter STAR*D effectiveness study of MDD. Costs and quality-adjusted life years were compared for sequential antidepressant trials, with or without guidance from a pharmacogenetic test for differential response to selective serotonin reuptake inhibitors (SSRIs). Likely SSRI responders received an SSRI, while likely nonresponders received the norepinephrine/dopamine reuptake inhibitor bupropion. For a 40-year-old with major depressive disorder, applying the pharmacogenetic test and using the non-SSRI bupropion for those at higher risk for nonresponse cost $93,520 per additional quality-adjusted life-year (QALY) compared with treating all patients with an SSRI first and switching sequentially in the case of nonremission. Cost/QALY dropped below $50,000 for tests with remission rate ratios as low as 1.5, corresponding to odds ratios ~1.8–2.0. Tests for differential antidepressant response could thus become cost-effective under certain circumstances. These circumstances, particularly availability of alternative treatment strategies and test effect sizes, can be estimated and should be considered before these tests are broadly applied in clinical settings. PMID:19494805

  7. Studies on Hydrogen Production by Photosynthetic Bacteria after Anaerobic Fermentation of Starch by a Hyperthermophile, Pyrococcus furiosus

    NASA Astrophysics Data System (ADS)

    Sugitate, Toshihiro; Fukatsu, Makoto; Ishimi, Katsuhiro; Kohno, Hideki; Wakayama, Tatsuki; Nakamura, Yoshihiro; Miyake, Jun; Asada, Yasuo

    In order to establish sequential hydrogen production from waste starch using a hyperthermophile, Pyrococcus furiosus, and a photosynthetic bacterium, basic studies were carried out. P. furiosus produced hydrogen and acetate by anaerobic fermentation at 90°C. A photosynthetic bacterium, Rhodobacter sphaeroides RV, was able to produce hydrogen from acetate under anaerobic and light conditions at 30°C. However, Rb. sphaeroides RV was not able to produce hydrogen from acetate in the presence of sodium chloride, which is essential for the growth and hydrogen production of P. furiosus, although it produced hydrogen from lactate at a reduced rate with 1% sodium chloride. A newly isolated strain from the natural environment, CST-8, was, however, able to produce hydrogen from acetate, especially with 3 mM L-alanine and in the presence of 1% sodium chloride. Sequential hydrogen production with P. furiosus and salt-tolerant photosynthetic bacteria thus appears feasible, at least at laboratory scale.

  8. Sequential megafaunal collapse in the North Pacific Ocean: An ongoing legacy of industrial whaling?

    USGS Publications Warehouse

    Springer, A.M.; Estes, J.A.; Van Vliet, Gus B.; Williams, T.M.; Doak, D.F.; Danner, E.M.; Forney, K.A.; Pfister, B.

    2003-01-01

    Populations of seals, sea lions, and sea otters have sequentially collapsed over large areas of the northern North Pacific Ocean and southern Bering Sea during the last several decades. A bottom-up nutritional limitation mechanism induced by physical oceanographic change or competition with fisheries was long thought to be largely responsible for these declines. The current weight of evidence is more consistent with top-down forcing. Increased predation by killer whales probably drove the sea otter collapse and may have been responsible for the earlier pinniped declines as well. We propose that decimation of the great whales by post-World War II industrial whaling caused the great whales' foremost natural predators, killer whales, to begin feeding more intensively on the smaller marine mammals, thus "fishing-down" this element of the marine food web. The timing of these events, information on the abundance, diet, and foraging behavior of both predators and prey, and feasibility analyses based on demographic and energetic modeling are all consistent with this hypothesis.

  9. Sequential megafaunal collapse in the North Pacific Ocean: An ongoing legacy of industrial whaling?

    PubMed Central

    Springer, A. M.; Estes, J. A.; van Vliet, G. B.; Williams, T. M.; Doak, D. F.; Danner, E. M.; Forney, K. A.; Pfister, B.

    2003-01-01

    Populations of seals, sea lions, and sea otters have sequentially collapsed over large areas of the northern North Pacific Ocean and southern Bering Sea during the last several decades. A bottom-up nutritional limitation mechanism induced by physical oceanographic change or competition with fisheries was long thought to be largely responsible for these declines. The current weight of evidence is more consistent with top-down forcing. Increased predation by killer whales probably drove the sea otter collapse and may have been responsible for the earlier pinniped declines as well. We propose that decimation of the great whales by post-World War II industrial whaling caused the great whales' foremost natural predators, killer whales, to begin feeding more intensively on the smaller marine mammals, thus “fishing-down” this element of the marine food web. The timing of these events, information on the abundance, diet, and foraging behavior of both predators and prey, and feasibility analyses based on demographic and energetic modeling are all consistent with this hypothesis. PMID:14526101

  10. Footprints of electron correlation in strong-field double ionization of Kr close to the sequential-ionization regime

    NASA Astrophysics Data System (ADS)

    Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun

    2017-09-01

    By combining kinematically complete measurements and a semiclassical Monte Carlo simulation we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from earlier predictions for the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adopting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of nuclei leads to a non-inverse-square central force, which twists the returned electron trajectory in the vicinity of the parent core and thus significantly increases the probability of hard recollisions between two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.

  11. Contingency Space Analysis: An Alternative Method for Identifying Contingent Relations from Observational Data

    PubMed Central

    Martens, Brian K; DiGennaro, Florence D; Reed, Derek D; Szczech, Frances M; Rosenthal, Blair D

    2008-01-01

    Descriptive assessment methods have been used in applied settings to identify consequences for problem behavior, thereby aiding in the design of effective treatment programs. Consensus has not been reached, however, regarding the types of data or analytic strategies that are most useful for describing behavior–consequence relations. One promising approach involves the analysis of conditional probabilities from sequential recordings of behavior and events that follow its occurrence. In this paper we review several strategies for identifying contingent relations from conditional probabilities, and propose an alternative strategy known as a contingency space analysis (CSA). Step-by-step procedures for conducting and interpreting a CSA using sample data are presented, followed by discussion of the potential use of a CSA for conducting descriptive assessments, informing intervention design, and evaluating changes in reinforcement contingencies following treatment. PMID:18468280
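The contingency space analysis this record proposes locates a behavior-consequence relation by two conditional probabilities: the probability of the consequence given the behavior versus given its absence. A minimal sketch, with entirely hypothetical observation records:

```python
# Hypothetical sequential recordings: one (behavior occurred, consequence
# followed) pair of 0/1 indicators per observation interval.
records = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0), (0, 0)]

def contingency_point(records):
    """The point in contingency space for a behavior-consequence relation:
    (P(C | B), P(C | not B)) estimated from the recorded intervals."""
    after_b = [c for b, c in records if b == 1]
    after_not_b = [c for b, c in records if b == 0]
    p_given_b = sum(after_b) / len(after_b)
    p_given_not_b = sum(after_not_b) / len(after_not_b)
    return p_given_b, p_given_not_b

p1, p0 = contingency_point(records)
# p1 > p0 suggests a positive contingency: the consequence is more likely
# following the behavior than in its absence; p1 == p0 suggests no contingency.
```

Plotting (p1, p0) against the diagonal p1 = p0 gives the graphical interpretation: points above or below the diagonal indicate negative or positive contingencies, respectively.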

  12. Measurement Model Nonlinearity in Estimation of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Majji, Manoranjan; Junkins, J. L.; Turner, J. D.

    2012-06-01

    The role of nonlinearity of the measurement model and its interactions with the uncertainty of measurements and geometry of the problem is studied in this paper. An examination of the transformations of the probability density function in various coordinate systems is presented for several astrodynamics applications. Smooth and analytic nonlinear functions are considered for the studies on the exact transformation of uncertainty. Special emphasis is given to understanding the role of change of variables in the calculus of random variables. The transformation of probability density functions through mappings is shown to provide insight into the evolution of uncertainty in nonlinear systems. Examples are presented to highlight salient aspects of the discussion. A sequential orbit determination problem is analyzed, where the transformation formula provides useful insights for making the choice of coordinates for estimation of dynamic systems.
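The exact transformation of a density under a smooth invertible mapping that this record studies is the change-of-variables formula f_Y(y) = f_X(g⁻¹(y)) |dg⁻¹/dy|. A minimal numerical check, for the hypothetical choice Y = exp(X) with X standard normal (so g⁻¹(y) = ln y and |dg⁻¹/dy| = 1/y):

```python
import numpy as np

def normal_pdf(x):
    # Standard normal density.
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def transformed_pdf(y):
    """Density of Y = exp(X), X ~ N(0, 1), via change of variables:
    f_Y(y) = f_X(ln y) * |d(ln y)/dy| = f_X(ln y) / y."""
    return normal_pdf(np.log(y)) / y

# Sanity check: the transformed density must still integrate to ~1.
y = np.linspace(1e-6, 50.0, 200_001)
f = transformed_pdf(y)
mass = float(((f[:-1] + f[1:]) * 0.5 * np.diff(y)).sum())  # trapezoid rule
```

The same formula, applied coordinate system by coordinate system, is what lets one track how an initially Gaussian state uncertainty distorts under a nonlinear measurement or coordinate mapping.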

  13. Extended target recognition in cognitive radar networks.

    PubMed

    Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin

    2010-01-01

    We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar line of sights (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.

  14. Critical role of bevacizumab scheduling in combination with pre-surgical chemo-radiotherapy in MRI-defined high-risk locally advanced rectal cancer: Results of the BRANCH trial.

    PubMed

    Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo

    2015-10-06

    We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycles of chemotherapy (sequential schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 of 16 patients) was statistically inconsistent with the hypothesized activity (30%) to be tested. Conversely, the endpoint was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥ 3 toxicity with both schedules, but it was less pronounced with the sequential than the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI, 66%-89%) and 85% (95% CI, 69%-93%), respectively, for the sequential schedule. These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC.

  15. Selenium speciation in seleniferous agricultural soils under different cropping systems using sequential extraction and X-ray absorption spectroscopy.

    PubMed

    Qin, Hai-Bo; Zhu, Jian-Ming; Lin, Zhi-Qing; Xu, Wen-Po; Tan, De-Can; Zheng, Li-Rong; Takahashi, Yoshio

    2017-06-01

    Selenium (Se) speciation in soil is critically important for understanding the solubility, mobility, bioavailability, and toxicity of Se in the environment. In this study, Se fractionation and chemical speciation in agricultural soils from seleniferous areas were investigated using an elaborate sequential extraction and X-ray absorption near-edge structure (XANES) spectroscopy. The speciation results quantified by the XANES technique generally agreed with those obtained by sequential extraction, and the combination of both approaches can reliably characterize Se speciation in soils. Results showed that organic Se was dominant (56-81% of the total Se) in seleniferous agricultural soils, with lesser Se(IV) (19-44%). A significant decrease in the proportion of organic Se to the total Se was found across different types of soil, i.e., paddy soil (81%) > uncultivated soil (69-73%) > upland soil (56-63%), while the proportion of Se(IV) showed the inverse tendency. This suggests that Se speciation in agricultural soils can be significantly influenced by different cropping systems. Organic Se in seleniferous agricultural soils was probably derived from plant litter, which provides significant insight for phytoremediation in Se-laden ecosystems and biofortification in Se-deficient areas. Furthermore, elevated organic Se in soils could result in higher Se accumulation in crops and further potential chronic Se toxicity to local residents in seleniferous areas. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Does the adolescent patellar tendon respond to 5 days of cumulative load during a volleyball tournament?

    PubMed

    van Ark, M; Docking, S I; van den Akker-Scheek, I; Rudavsky, A; Rio, E; Zwerver, J; Cook, J L

    2016-02-01

    Patellar tendinopathy (jumper's knee) has a high prevalence in jumping athletes. Excessive load on the patellar tendon through high volumes of training and competition is an important risk factor. Structural changes in the tendon are related to a higher risk of developing patellar tendinopathy. The critical tendon load that affects tendon structure is unknown. The aim of this study was to investigate patellar tendon structure on each day of a 5-day volleyball tournament in an adolescent population (16-18 years). The right patellar tendon of 41 players in the Australian Volleyball Schools Cup was scanned with ultrasound tissue characterization (UTC) on every day of the tournament (Monday to Friday). UTC can quantify structure of a tendon into four echo types based on the stability of the echo pattern. Generalized estimating equations (GEE) were used to test for change of echo type I and II over the tournament days. Participants played between eight and nine matches during the tournament. GEE analysis showed no significant change of echo type percentages of echo type I (Wald chi-square = 4.603, d.f. = 4, P = 0.331) and echo type II (Wald chi-square = 6.070, d.f. = 4, P = 0.194) over time. This study shows that patellar tendon structure of 16-18-year-old volleyball players is not affected during 5 days of cumulative loading during a volleyball tournament. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Biases and power for groups comparison on subjective health measurements.

    PubMed

    Hamel, Jean-François; Hardouin, Jean-Benoit; Le Neel, Tanguy; Kubis, Gildas; Roquelaure, Yves; Sébille, Véronique

    2012-01-01

    Subjective health measurements are increasingly used in clinical research, particularly for comparisons between patient groups. Two main types of analytical strategies can be used for such data: so-called classical test theory (CTT), relying on observed scores, and models coming from Item Response Theory (IRT), relying on a response model relating the item responses to a latent parameter, often called the latent trait. Whether IRT or CTT is the more appropriate method to compare two independent groups of patients on a patient-reported outcomes measurement remains unknown and was investigated using simulations. For CTT-based analyses, groups comparison was performed using a t-test on the scores. For IRT-based analyses, several methods were compared, according to whether the Rasch model was considered with random effects or with fixed effects, and whether the group effect was included as a covariate or not. Individual latent trait values were estimated using either a deterministic method or stochastic approaches. Latent traits were then compared with a t-test. Finally, a two-step method was performed to compare the latent trait distributions, and a Wald test was performed to test the group effect in the Rasch model including group covariates. The only unbiased IRT-based method was the Wald test of the group covariate, performed on the random-effects Rasch model. This model displayed the highest observed power, which was similar to the power of the score t-test. These results need to be extended to the case, frequently encountered in practice, where data are missing and possibly informative.

  18. Treatment of Small Hepatocellular Carcinoma (≤2 cm) in the Caudate Lobe with Sequential Transcatheter Arterial Chemoembolization and Radiofrequency Ablation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyun, Dongho; Cho, Sung Ki, E-mail: sungkismc.cho@samsung.com; Shin, Sung Wook

    2016-07-15

    Purpose: To evaluate the technical feasibility and treatment results of sequential transcatheter arterial chemoembolization (TACE) and cone-beam computed tomography-guided percutaneous radiofrequency ablation (CBCT-RFA) for small hepatocellular carcinoma (HCC) in the caudate lobe. Materials and Methods: The institutional review board approved this retrospective study. A radiologic database was searched for patients referred for TACE and CBCT-RFA for small caudate HCCs (≤2 cm) between February 2009 and February 2014. A total of 14 patients (12 men and 2 women; mean age, 61.3 years) were included. Percutaneous ultrasonography-guided RFA (pUS-RFA) and surgery were infeasible due to poor conspicuity, inconspicuity or no safe electrode pathway, and poor hepatic reserve. Procedural success (completion of both TACE and CBCT-RFA), technique efficacy (absence of tumor enhancement at 1 month after treatment), and complications were evaluated. Treatment results including local tumor progression (LTP), intrahepatic distant recurrence (IDR), overall survival (OS), and progression-free survival (PFS) were analyzed. Results: Procedural success and technique efficacy rates were 78.6% (11/14) and 90.9% (10/11), respectively. The average follow-up period was 45.3 months (range, 13.4-64.6 months). The 1-, 3-, and 5-year LTP probabilities were 0, 12.5, and 12.5%, respectively. IDR occurred in seven patients (63.6%, 7/11). The 1-, 3-, and 5-year PFS probabilities were 81.8, 51.9, and 26%, respectively. The 1-, 3-, and 5-year OS probabilities were 100, 80.8, and 80.8%, respectively. Conclusion: The combination of TACE and CBCT-RFA seems feasible for small HCC in the caudate lobe not amenable to pUS-RFA, and effective in local tumor control.

  19. Assessment of pretest probability of pulmonary embolism in the emergency department by physicians in training using the Wells model.

    PubMed

    Penaloza, Andrea; Mélot, Christian; Dochy, Emmanuelle; Blocklet, Didier; Gevenois, Pierre Alain; Wautrecht, Jean-Claude; Lheureux, Philippe; Motte, Serge

    2007-01-01

    Assessment of pretest probability should be the initial step in investigation of patients with suspected pulmonary embolism (PE). In teaching hospitals physicians in training are often the first physicians to evaluate patients. To evaluate the accuracy of pretest probability assessment of PE by physicians in training using the Wells clinical model and to assess the safety of a diagnostic strategy including pretest probability assessment. 291 consecutive outpatients with clinical suspicion of PE were categorized as having a low, moderate or high pretest probability of PE by physicians in training who could take supervising physicians' advice when they deemed necessary. Then, patients were managed according to a sequential diagnostic algorithm including D-dimer testing, lung scan, leg compression ultrasonography and helical computed tomography. Patients in whom PE was deemed absent were followed up for 3 months. 34 patients (18%) had PE. Prevalence of PE in the low, moderate and high pretest probability groups categorized by physicians in training alone was 3% (95% confidence interval (CI): 1% to 9%), 31% (95% CI: 22% to 42%) and 100% (95% CI: 61% to 100%) respectively. One of the 152 untreated patients (0.7%, 95% CI: 0.1% to 3.6%) developed a thromboembolic event during the 3-month follow-up period. Physicians in training can use the Wells clinical model to determine pretest probability of PE. A diagnostic strategy including the use of this model by physicians in training with access to supervising physicians' advice appears to be safe.
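    The prevalence figures with 95% confidence intervals quoted above (e.g., 3% with CI 1% to 9% in the low-probability group) are standard binomial-proportion intervals. A minimal sketch in Python using the Wilson score interval; the counts passed in are hypothetical, since the abstract does not give the raw group sizes:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts for illustration (not the study's raw data):
lo, hi = wilson_ci(4, 150)
print(f"prevalence {4/150:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```

The Wilson interval is preferred over the simple Wald interval for small counts like these, since it never produces bounds outside [0, 1].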

  20. Chemical effects in ion mixing of a ternary system (metal-SiO2)

    NASA Technical Reports Server (NTRS)

    Banwell, T.; Nicolet, M.-A.; Sands, T.; Grunthaner, P. J.

    1987-01-01

    The mixing of Ti, Cr, and Ni thin films with SiO2 by low-temperature (-196 to 25 °C) irradiation with 290 keV Xe has been investigated. Comparison of the morphology of the intermixed region and the dose dependences of net metal transport into SiO2 reveals that long-range motion and phase formation probably occur as separate and sequential processes. Kinetic limitations suppress chemical effects in these systems during the initial transport process. Chemical interactions influence the subsequent phase formation.

  1. Nonadherence to oral linezolid after hospitalization: a retrospective claims analysis of the incidence and consequence of claim reversals.

    PubMed

    Ball, Amy T; Xu, Yihua; Sanchez, Robert J; Shelbaya, Ahmed; Deminski, Michael C; Nau, David P

    2010-12-01

    Linezolid is available in an oral as well as an intravenous formulation. It is an oxazolidinone antibiotic and is effective in treating resistant gram-positive organisms such as methicillin-resistant Staphylococcus aureus and multidrug-resistant Streptococcus pneumoniae. The goals of this study were to identify the incidence of claim reversals for oral linezolid in members who were recently discharged from a hospital and to study the subsequent pattern of health care utilization to quantify the consequences for members who have a reversed linezolid claim. This study was a retrospective claims analysis of Humana Medicare Advantage Prescription Drug patients who had a claim for oral linezolid after an inpatient discharge between April 1, 2006, and June 30, 2008. The incidence of reversed claims among those with a linezolid prescription was measured as a proxy for medication adherence. Propensity scores were calculated to account for differences in patients' propensity to have a reversed claim. The association of the claim reversal with subsequent expenditures was assessed through 3 multivariate regression models wherein the dependent variables were drug, medical, and total costs for the 60-day period after discharge. The key independent variable was the occurrence of a reversed linezolid claim, and control variables included the propensity score quartiles and other clinical and demographic characteristics. All costs were provided in US dollars and from the year in which they occurred. Of 1046 patients identified (mean [SD] age, 69 [12] years; 51% male), 252 patients (24.1%) had a claim reversal for linezolid. Among these, 125 patients (49.6%) received linezolid within 10 days of the initial reversal, 39 patients (15.5%) received other antibiotics, and 88 patients (34.9%) did not receive any antibiotics. 
The unadjusted mean outpatient drug costs were $696 and $2265 for patients with and without a reversal, respectively, whereas mean medical costs were $13,567 and $9355. Multivariable analyses revealed that members who did not receive linezolid after the claim reversal had significantly higher medical expenditures (Wald χ² = 8.370; P = 0.004) and lower drug expenditures (Wald χ² = 122.630; P < 0.01). The total costs did not differ significantly between the 2 groups (Wald χ² = 1.540; P = 0.215), however, as the medical savings were partially negated by the higher drug costs. These patients with a reversed outpatient claim for linezolid had lower outpatient drug costs and higher medical costs in the 60-day period after the reversal. Copyright © 2010 Elsevier HS Journals, Inc. All rights reserved.
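    The Wald χ² statistics in regression output like the above are the squared ratio of a coefficient to its standard error, referred to a chi-square distribution with 1 degree of freedom. A minimal sketch; the coefficient and standard error below are illustrative values chosen to land near the reported statistic of 8.370, not the study's actual estimates:

```python
import math

def wald_chi2(beta, se):
    """Wald chi-square statistic and two-sided p-value (1 df) for a coefficient."""
    w = (beta / se) ** 2
    # chi-square(1 df) survival function via the normal tail:
    # P(X > w) = 2 * (1 - Phi(sqrt(w))) = erfc(sqrt(w) / sqrt(2))
    p = math.erfc(math.sqrt(w) / math.sqrt(2))
    return w, p

# Illustrative numbers only (not from the paper):
w, p = wald_chi2(beta=0.62, se=0.214)
```

With these inputs the statistic is about 8.39 with p ≈ 0.004, consistent in form with the figures reported above.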

  2. Examining differences in HPV awareness and knowledge and HPV vaccine awareness and acceptability between U.S. Hispanic and island Puerto Rican women.

    PubMed

    Morales-Campos, Daisy Y; Vanderpool, Robin C

    2017-01-01

    In 2015, only 42% of Puerto Rican (PR) girls aged 13-17 and 44% of U.S. Hispanic girls aged 13-17 were vaccinated with all three Human Papillomavirus (HPV) vaccine doses; these percentages fall far short of the Healthy People 2020 goal of 80% coverage among girls aged 13-15. The purpose of this study was to examine potential differences in HPV awareness and knowledge and HPV vaccine awareness and acceptability between a population-based sample of U.S. Hispanic and island Puerto Rican women. We restricted our analyses to female respondents from the Health Information National Trends Survey (HINTS) 2007 (n=375; U.S. Hispanic) and HINTS Puerto Rico 2009 (n=417; PR). Using the Wald chi-square test, we assessed whether there were significant differences in HPV awareness and knowledge and HPV vaccine awareness and acceptability between U.S. Hispanic and island PR women. We then utilized logistic or multinomial regression to control for covariates on significant outcomes. Both groups of Hispanic women were highly knowledgeable that HPV causes cancer (89.2% in both samples) and that HPV is a sexually transmitted infection (78.1% [U.S. Hispanics] and 84.7% [PR]). Less than 10% of both groups recognized that HPV can clear on its own without treatment. Island PR women had significantly higher HPV vaccine awareness (66.9% vs. 61.0%; Wald χ² F(1, 97) = 16.03, p < .001) and were more accepting of the HPV vaccine for a real or hypothetical daughter, compared to U.S. Hispanic women (74.8% vs. 56.1%; Wald χ² F(2, 96) = 7.18, p < .001). However, after controlling for sociodemographic variables and survey group, there was no longer a difference between the two groups of women in HPV vaccine awareness (AOR = .53; 95% CI = .23, 1.24). Moreover, after controlled analysis, island PR women were significantly less likely to have their hypothetical daughter get the HPV vaccine, compared to U.S. Hispanic women (AOR = 0.26; 95% CI = .08, .81). 
Future research focused on factors contributing to differences and similarities in HPV knowledge and awareness and HPV vaccine awareness and acceptability between these two groups of Hispanic women is warranted. Findings may assist in developing health education programs and media to promote HPV vaccination among both groups.
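    Adjusted odds ratios and their confidence intervals, like the AOR of 0.26 (95% CI 0.08, 0.81) above, come from exponentiating a logistic-regression coefficient and its normal-theory interval. A minimal sketch; the coefficient and standard error here are reverse-engineered for illustration and are not the study's estimates:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Back-of-envelope check against a reported AOR of 0.26 (95% CI 0.08-0.81);
# beta and se are illustrative assumptions, not values from the paper:
aor, lo, hi = odds_ratio_ci(beta=math.log(0.26), se=0.58)
```

Because the interval is symmetric on the log-odds scale, it is asymmetric once exponentiated, which is why reported CIs such as (0.08, 0.81) are not centered on the AOR.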

  3. Prevalence and Trajectories of Psychiatric Symptoms Among Sober Living House Residents.

    PubMed

    Polcin, Doug; Korcha, Rachael; Gupta, Shalika; Subbaraman, Meenakshi Sabina; Mericle, Amy A

    2016-01-01

    Sober living houses are alcohol- and drug-free recovery residences that help individuals with substance use disorders maintain long-term abstinence. Given the prevalence of co-occurring mental disorders among individuals entering substance use treatment, it is likely that many such residents are also contending with psychiatric symptoms, and it is unclear how these symptoms may affect their sobriety. This study sought to describe the prevalence and trajectories of different types of symptoms among sober living house residents and examine how these symptoms affect substance use outcomes. Data for this study were collected as part of a larger study on outcomes among sober living house residents in Northern California. The current study examined data from 300 residents in two housing groups; residents were interviewed upon entry and re-interviewed at 6-, 12-, and 18-month follow-ups. Psychiatric symptoms were assessed using the Brief Symptom Inventory (BSI). Generalized estimating equations tested changes in BSI global psychological distress and clinical symptom scales over time and examined the relationship between scale scores and substance use in longitudinal models controlling for demographics, length of stay, and psychiatric service utilization. The average age of residents was 38.5 years (SD = 10.1) and they were mostly male (80%) and Caucasian (65%). Retention rates were high, with 90% (n = 269) participating in at least one follow-up interview. Overall psychological distress (Wald χ² = 7.99, df = 3, p = .046), symptoms of depression (Wald χ² = 13.57, df = 3, p = .004), and phobic anxiety (Wald χ² = 7.89, df = 3, p = .048) significantly improved over time. In all models examining the relationship between BSI scale scores and substance use, rates of abstinence and days of use among those who reported using substances also improved over time. 
Overall distress (OR = 0.48, p < .001) as well as higher scores on the somatization (OR = 0.56, p < .001), depression (OR = 0.53, p < .001), hostility (OR = 0.71, p = .006), and phobic anxiety (OR = 0.74, p = .012) subscales were significantly associated with a decreased likelihood of abstinence. Symptoms of somatization (B = 0.092, SE = 0.029, p = .002) were associated with an increase in the number of days substances were used among those who reported use. Psychological symptoms among sober living house residents improve over time, but they are risk factors for relapse, suggesting that additional support provided to residents with psychiatric symptoms could improve substance use outcomes.

  4. Troponin elevation in severe sepsis and septic shock: the role of left ventricular diastolic dysfunction and right ventricular dilatation*.

    PubMed

    Landesberg, Giora; Jaffe, Allan S; Gilon, Dan; Levin, Phillip D; Goodman, Sergey; Abu-Baih, Abed; Beeri, Ronen; Weissman, Charles; Sprung, Charles L; Landesberg, Amir

    2014-04-01

    Serum troponin concentrations predict mortality in almost every clinical setting in which they have been examined, including sepsis. However, the causes for troponin elevations in sepsis are poorly understood. We hypothesized that detailed investigation of myocardial dysfunction by echocardiography can provide insight into the possible causes of troponin elevation and its association with mortality in sepsis. Prospective, analytic cohort study. Tertiary academic institute. A cohort of ICU patients with severe sepsis or septic shock. Advanced echocardiography using global strain, strain-rate imaging and 3D left and right ventricular volume analyses in addition to the standard echocardiography, and concomitant high-sensitivity troponin-T measurement in patients with severe sepsis or septic shock. Two hundred twenty-five echocardiograms and concomitant high-sensitivity troponin-T measurements were performed in a cohort of 106 patients within the first days of severe sepsis or septic shock (2.1 ± 1.4 measurements/patient). Combining echocardiographic and clinical variables, left ventricular diastolic dysfunction defined as increased mitral E-to-strain-rate e'-wave ratio, right ventricular dilatation (increased right ventricular end-systolic volume index), high Acute Physiology and Chronic Health Evaluation-II score, and low glomerular filtration rate best correlated with elevated log-transformed concomitant high-sensitivity troponin-T concentrations (mixed linear model: t = 3.8, 3.3, 2.8, and -2.1 and p = 0.001, 0.0002, 0.006, and 0.007, respectively). Left ventricular systolic dysfunction determined by reduced strain-rate s'-wave or low ejection fraction did not significantly correlate with log(concomitant high-sensitivity troponin-T). Forty-one patients (39%) died in-hospital. 
Right ventricular end-systolic volume index and left ventricular strain-rate e'-wave predicted in-hospital mortality, independent of Acute Physiology and Chronic Health Evaluation-II score (logistic regression: Wald = 8.4, 6.6, and 9.8 and p = 0.004, 0.010, and 0.001, respectively). Concomitant high-sensitivity troponin-T predicted mortality in univariate analysis (Wald = 8.4; p = 0.004), but not when combined with right ventricular end-systolic volume index and strain-rate e'-wave in the multivariate analysis (Wald = 2.3, 4.6, and 6.2 and p = 0.13, 0.032, and 0.012, respectively). Left ventricular diastolic dysfunction and right ventricular dilatation are the echocardiographic variables correlating best with concomitant high-sensitivity troponin-T concentrations. Left ventricular diastolic and right ventricular systolic dysfunction seem to explain the association of troponin with mortality in severe sepsis and septic shock.

  5. Universal scheme for finite-probability perfect transfer of arbitrary multispin states through spin chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn

    In combination with the theories of open system and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements taking into account the current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.

  6. Role of the evening eastward electric field and the seed perturbations in the sequential occurrence of plasma bubble

    NASA Astrophysics Data System (ADS)

    Abadi, P.; Otsuka, Y.; Shiokawa, K.; Yamamoto, M.; M Buhari, S.; Abdullah, M.

    2017-12-01

    We investigate 3-m ionospheric irregularities and the height variation of the equatorial F-region observed by the Equatorial Atmosphere Radar (EAR) at Kototabang (100.3°E, 0.2°S, dip lat. 10.1°S) in Indonesia and by ionosondes at Chumphon (99.3°E, 10.7°N, dip lat. 3°N) in Thailand and at Bac Lieu (105.7°E, 9.3°N, dip lat. 1.5°N) in Vietnam, during March-April from 2011 to 2014. We aim to clarify the relation between the pre-reversal enhancement (PRE) of the evening eastward electric field and the sequential occurrence of equatorial plasma bubbles (EPBs) in the period of 19-22 LT. In summary, (i) we found that the zonal spacing between consecutive EPBs ranges from less than 100 km up to 800 km, with a maximum occurrence around 100-300 km as shown in Figure 1(a), consistent with previous studies [e.g., Makela et al., 2010]; (ii) the probability of sequential EPB occurrence increases with PRE strength (see Figure 1(b)); and (iii) Figure 1(c) shows that the zonal spacing between consecutive EPBs is less than 300 km for the weaker PRE (<30 m/s), whereas the spacing is more varied for the stronger PRE (≥30 m/s). Our results indicate that PRE strength is a prominent factor in the sequential occurrence of EPBs. However, we also consider another factor, namely the zonal structure of the seed perturbation modulated by gravity waves (GW); the zonal spacing between consecutive EPBs may match the wavelength of this zonal structure. We particularly attribute result (iii) to the combined effects of the PRE and the seed perturbation on the sequential occurrence of EPBs; that is, we suggest that a weaker PRE can cause sequential EPB occurrence when the zonal structure of the seed perturbation has a shorter wavelength. Further investigation is needed to confirm the periodic seeding mechanism, however, and we will use a network of GPS receivers in the western part of Southeast Asia to analyze the zonal wavy structure in TEC as a manifestation of the seed perturbations.

  7. Robust parameter design for automatically controlled systems and nanostructure synthesis

    NASA Astrophysics Data System (ADS)

    Dasgupta, Tirthankar

    2007-12-01

    This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a Multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. 
The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design, called Sequential Minimum Energy Design (SMED), is proposed for exploring the best process conditions for synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
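    The idea of deriving robust set points from a fitted probability model via Monte Carlo simulation, as described above, can be sketched in a few lines: evaluate the expected success probability when a process variable fluctuates around each candidate set value, then pick the set point with the best average. The logistic response below is a hypothetical stand-in for the paper's fitted multinomial GLM:

```python
import math
import random

def success_prob(x):
    """Hypothetical fitted model: probability of the desired morphology as a
    function of one process variable x (a stand-in, not the paper's model)."""
    return 1.0 / (1.0 + math.exp(-(4.0 - (x - 2.0) ** 2)))

def robust_score(x_set, sigma=0.3, n=2000, seed=0):
    """Average success probability when x fluctuates around its set value."""
    rng = random.Random(seed)
    return sum(success_prob(x_set + rng.gauss(0, sigma)) for _ in range(n)) / n

# Scan candidate set points on a grid and keep the most robust one.
best = max((robust_score(x / 10), x / 10) for x in range(0, 41))
```

In this toy landscape the robust optimum sits near the peak at x = 2; with a flat-topped versus sharply peaked response of equal height, the same scan would favor the flat top, which is the point of robust parameter design.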

  8. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors.

  9. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  10. Reproductive success of kittiwakes and murres in sequential stages of the nesting period: Relationships with diet and oceanography

    NASA Astrophysics Data System (ADS)

    Renner, Heather M.; Drummond, Brie A.; Benson, Anna-Marie; Paredes, Rosana

    2014-11-01

    Reproductive success is one of the most easily measured and widely studied demographic parameters of colonial nesting seabirds. Nevertheless, factors affecting the sequential stages (egg laying, incubation, chick-rearing) of reproductive success are less understood. We investigated the separate sequential stages of reproductive success in piscivorous black-legged kittiwakes (Rissa tridactyla) and thick-billed murres (Uria lomvia) using a 36-year dataset (1975-2010) on the major Pribilof Islands (St. Paul and St. George), which have recently had contrasting population trajectories. Our objectives were to evaluate how the proportion of successful nests varied among stages, and to quantify factors influencing the probability of nest success at each stage on each island. We modeled the probability of nest success at each stage using General Linear Mixed Models incorporating broad-scale and local climate variables, and diet as covariates, as well as other measures of reproduction such as timing of breeding and reproductive output in the previous year and previous stage. For both species we found: (1) Success in previous stages of the breeding cycle and success in the prior year better explained overall success than any environmental variables. Phenology was also an important predictor of laying success for kittiwakes. (2) Fledging success was lower when chick diets contained oceanic fish found farther from the colonies and small invertebrates, rather than coastal fish species. (3) Differences in reproductive variables at St. Paul and St. George islands did not correspond to population trends between the two islands. Our results highlight the potential importance of adult condition and annual survival to kittiwake and murre productivity and ultimately, populations. Adult condition carrying over from the previous year ultimately seems to drive annual breeding success in a cascade effect. 
Furthermore, condition and survival appear to be important contributors to population dynamics at each island. Therefore, adult condition and survival prior to breeding, and factors that influence these parameters such as foraging conditions in the non-breeding season, may be important datasets for understanding drivers of seabird demography at the Pribilof Islands.

  11. A database on downward shortwave radiation for Africa and Europe

    NASA Astrophysics Data System (ADS)

    Lefevre, M.; Cros, S.; Albuisson, M.; Wald, L.

    2003-04-01

    Shortwave (SW) radiation is an element of the radiation budget, an essential component in climate studies. The network of stations measuring radiation is very scarce in ocean and coastal areas. [1] and [2] demonstrate that a proper processing of satellite data provides better results than interpolation techniques. Several methods are available for the conversion of spaceborne observations made in the visible range by geostationary satellites into SW radiation available at ocean level. Our concern is the series of Meteosat satellites that have observed Africa, Europe and the Eastern Atlantic Ocean for several years. When operated on a routine basis, many of these methods exhibit several drawbacks, one of them being the poor accuracy in irradiance [3]. We designed a new method that is capable of processing long time-series of images acquired by the series of sensors aboard the Meteosat satellites. The method uses the same principle as several methods of proven quality: [4] [5] [6] [7] [8] [9] [10] [11]. With respect to these methods, the new one, called Heliosat-II, offers several improvements in operation and accuracy. These improvements are due to several causes: (i) the Meteosat data are calibrated and converted into radiances [12]; (ii) we use a new database of monthly values of the atmospheric optical turbidity for clear skies, available on cells of 5' of arc in size (SoDa Web site: http://www.soda-is.com); (iii) we use the TerrainBase terrain-elevation database with the same cell size (useful for land/ocean separation); (iv) a better modelling of the irradiation under clear and overcast skies was performed [13]; (v) a more physical description of the optical processes was made possible by the calibration step, and known proven models are implemented in the method; (vi) observations of [14] were used to model the spatial distribution of radiances of very thick clouds; (vii) changes in ocean albedo due to sun glitter are taken into account. 
We made comparisons between satellite-derived assessments and measurements performed in the world radiation network in Europe and Africa. The results depend upon the number of pixels whose values are averaged for the comparison with the irradiation measurements. The satellite data are in B2 format. This format results from a sub-sampling of the high-resolution data. Briefly written, one pixel out of six original pixels is kept. Estimates at the geographical locations of the stations are therefore produced by spatial interpolation [15]. These estimates were compared to observations made by 60 stations in Europe and 30 stations in Africa for one year, July 1994 to June 1995. The bias and root mean square error (RMSE) for the assessment of the irradiance for a month are better than respectively 3 and 17 W m-2 on cells of 5' of arc angle in size (approx. 10 km at mid-latitude). The RMSE decreases down to 9 W m-2 if assessments are averaged over cells of 0.5° of arc angle. Data in B2 format were collected from Eumetsat. They were quality-controlled and calibrated. The method Heliosat-II is being operated to produce a database of SW downward radiation for the Eastern Atlantic Ocean, Europe and Africa from 1985 onwards and for each day. This database is accessible through the SoDa service on a free basis [16] (http://www.soda-is.com). Further, tools are available through this service to estimate longwave downward irradiance and net irradiance from the SW downward irradiance. The Heliosat-II method can be operated in real-time. When applied to Meteosat data (MOP and MSG), it produces maps of downward SW irradiance within the hour following the acquisition. 1 Perez R., Seals R., Zelenka A., 1997, Comparing satellite remote sensing and ground network measurements for the production of site/time specific irradiance data, Solar Energy, 60, 89-96. 
2 Zelenka A., Perez R., Seals R., and Renné D., 1999, Effective accuracy of satellite-derived hourly irradiances, Theoretical and Applied Climatology, 62, 199-207. 3 Rigollier C., Wald L., 1999, The HelioClim Project: from satellite images to solar radiation maps. In Proceedings of the ISES Solar World Congress 1999, Jerusalem, Israel, July 4-9, 1999, volume I, pp 427-431. 4 Pastre C., 1981, Développement d’une méthode de détermination du rayonnement solaire global à partir des données Meteosat, La Météorologie, VIe série N°24, mars 1981. 5 Möser W., Raschke E., 1983, Mapping of global radiation and of cloudiness from Meteosat image data: theory and ground truth comparisons, Meteorologische Rundschau, 36, 33-41. 6 Möser W., Raschke E., 1984, Incident solar radiation over Europe estimated from Meteosat data, Journal of Applied Meteorology, 23, 166-170. 7 Cano D., Monget J.M., Albuisson M., Guillard H., Regas N., Wald L., 1986, A method for the determination of the global solar radiation from meteorological satellite data, Solar Energy, 37, 31-39. 8 Diabaté L., Demarcq H., Michaud-Regas N., Wald L., 1988, Estimating incident solar radiation at the surface from images of the Earth transmitted by geostationary satellites: the Heliosat Project, International Journal of Solar Energy, 5, 261-278. 9 Beyer H.G., Costanzo C., Heinemann D., 1996, Modifications of the Heliosat procedure for irradiance estimates from satellite images, Solar Energy, 56, 3, 207-212. 10 Stuhlmann R., Rieland M., Raschke E., 1990, An improvement of the IGMK model to derive total and diffuse solar radiation at the surface from satellite data, Journal of Applied Meteorology, 29, 596-603. 11 Delorme C., Gallo A., Olivieri J., 1992, Quick use of Wefax images from Meteosat to determine daily solar radiation in France, Solar Energy, 49 (3), 191-197. 12 Rigollier C., Lefèvre M., Blanc Ph., Wald L., 2002. 
The operational calibration of images taken in the visible channel of the Meteosat-series of satellites. To be published by Journal of Atmospheric and Oceanic Technology. 13 Rigollier C., Bauer O., Wald L., 1999, On the clear sky model of the 4th European Solar Radiation Atlas with respect to the Heliosat method, Solar Energy, 68(1), 33-48. 14 Taylor V.R., Stowe L.L., 1984, Reflectance characteristics of uniform Earth and cloud surfaces derived from Nimbus 7 ERB, Journal of Geophysical Research, 89(D4), 4987-4996. 15 Lefèvre M., Remund J., Albuisson M., Wald L., 2002, Study of effective distances for interpolation schemes in meteorology. European Geophysical Society, 27th General Assembly, Geophysical Research Abstracts, vol. 4, April 2002, EGS02-A-03429. 16 Wald L., 2000. SoDa: a project for the integration and exploitation of networked solar radiation databases. In Proceedings of the European Geophysical Society Meeting, XXV General Assembly (CD-ROM).
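The Heliosat family of methods that Heliosat-II builds on turns each calibrated pixel radiance into a cloud index, and from it a clear-sky index that scales a clear-sky irradiance model. The sketch below shows only this core principle under simplifying assumptions: the linear k = 1 - n relation, the reference radiances and the clamping range are illustrative, and Heliosat-II's actual turbidity handling and sun-glitter correction are more elaborate.

```python
def cloud_index(rho, rho_ground, rho_cloud):
    """Cloud index n: 0 for a clear ground pixel, 1 for a very thick cloud.

    rho is the calibrated pixel radiance; rho_ground and rho_cloud are the
    reference radiances of the underlying ground and of thick clouds.
    """
    n = (rho - rho_ground) / (rho_cloud - rho_ground)
    return max(0.0, min(1.2, n))  # clamp to a plausible range

def clear_sky_index(n):
    """Simple linear relation k = 1 - n used by early Heliosat variants."""
    return 1.0 - min(n, 1.0)

def sw_irradiance(n, g_clear):
    """Surface SW irradiance as the clear-sky irradiance scaled by k."""
    return clear_sky_index(n) * g_clear
```

For example, a pixel whose radiance equals the cloud reference radiance gets n = 1 and, under the linear relation, a predicted surface irradiance of zero.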

  12. Five-dimensional Myers-Perry black holes cannot be overspun in gedanken experiments

    NASA Astrophysics Data System (ADS)

    An, Jincheng; Shan, Jieru; Zhang, Hongbao; Zhao, Suting

    2018-05-01

    We apply the new version of a gedanken experiment designed recently by Sorce and Wald to overspin the five-dimensional Myers-Perry black holes. As a result, the extremal black holes cannot be overspun at the linear order. On the other hand, although the nearly extremal black holes could be overspun at the linear order, this process is shown to be prohibited by the quadratic order correction. Thus, no violation of the weak cosmic censorship conjecture occurs around the five-dimensional Myers-Perry black holes.

  13. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
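The flavour of such a sequential allocation procedure can be sketched as follows. This is a simplified heuristic in the spirit of the abstract — spend additional replications on the design whose sample mean sits closest (in standard-deviation units) to the boundary between the top-m and the rest — not the authors' asymptotically optimal allocation; all names and defaults are invented for illustration.

```python
import random
import statistics

def select_top_m(simulate, k, m, n0=10, delta=20, budget=500):
    """Sequentially allocate simulation samples to identify the top-m designs.

    simulate(i) returns one noisy sample of design i's performance.
    Heuristic: extra samples go to the design most at risk of being
    misclassified across the top-m boundary.
    """
    samples = {i: [simulate(i) for _ in range(n0)] for i in range(k)}
    used = n0 * k
    while used < budget:
        means = {i: statistics.fmean(s) for i, s in samples.items()}
        order = sorted(range(k), key=lambda i: means[i], reverse=True)
        # boundary between the m-th and (m+1)-th best sample means
        c = 0.5 * (means[order[m - 1]] + means[order[m]])
        # standardized distance to the boundary; small = high flip risk
        risk = {i: abs(means[i] - c) / (statistics.stdev(samples[i]) + 1e-9)
                for i in range(k)}
        target = min(risk, key=risk.get)
        samples[target].extend(simulate(target) for _ in range(delta))
        used += delta
    means = {i: statistics.fmean(s) for i, s in samples.items()}
    return set(sorted(range(k), key=lambda i: means[i], reverse=True)[:m])
```

With five designs whose true means are 0..4 and unit-variance Gaussian noise, the procedure concentrates sampling on the designs straddling the top-2 boundary.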

  14. A new methodology to integrate planetary quarantine requirements into mission planning, with application to a Jupiter orbiter

    NASA Technical Reports Server (NTRS)

    Howard, R. A.; North, D. W.; Pezier, J. P.

    1975-01-01

    A new methodology is proposed for integrating planetary quarantine objectives into space exploration planning. This methodology is designed to remedy the major weaknesses inherent in the current formulation of planetary quarantine requirements. Application of the methodology is illustrated by a tutorial analysis of a proposed Jupiter Orbiter mission. The proposed methodology reformulates planetary quarantine planning as a sequential decision problem. Rather than concentrating on a nominal plan, all decision alternatives and possible consequences are laid out in a decision tree. Probabilities and values are associated with the outcomes, including the outcome of contamination. The process of allocating probabilities, which could not be made perfectly unambiguous and systematic, is replaced by decomposition and optimization techniques based on principles of dynamic programming. Thus, the new methodology provides logical integration of all available information and allows selection of the best strategy consistent with quarantine and other space exploration goals.
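Laying out alternatives and chance outcomes in a decision tree and folding it back by dynamic programming (backward induction) can be sketched generically as below. The tree encoding and the numbers in the example are illustrative, not the mission analysis from the paper.

```python
def rollback(node):
    """Evaluate a decision tree by backward induction.

    node is ("value", v), ("chance", [(p, child), ...]), or
    ("decision", {action_name: child, ...}).
    Returns (expected value, list of best actions along the optimal path).
    """
    kind = node[0]
    if kind == "value":
        return node[1], []
    if kind == "chance":
        # expected value over the probability-weighted children
        ev = sum(p * rollback(child)[0] for p, child in node[1])
        return ev, []
    # decision node: pick the branch with the highest expected value
    best_name, best_ev, best_plan = None, float("-inf"), None
    for name, child in node[1].items():
        ev, plan = rollback(child)
        if ev > best_ev:
            best_name, best_ev, best_plan = name, ev, plan
    return best_ev, [best_name] + best_plan
```

A toy quarantine-style choice: an expensive sterilization step lowers the probability of the bad (contamination-like) outcome at the cost of some nominal payoff.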

  15. Hold it! The influence of lingering rewards on choice diversification and persistence.

    PubMed

    Schulze, Christin; van Ravenzwaaij, Don; Newell, Ben R

    2017-11-01

    Learning to choose adaptively when faced with uncertain and variable outcomes is a central challenge for decision makers. This study examines repeated choice in dynamic probability learning tasks in which outcome probabilities changed either as a function of the choices participants made or independently of those choices. This presence/absence of sequential choice-outcome dependencies was implemented by manipulating a single task aspect between conditions: the retention/withdrawal of reward across individual choice trials. The study addresses how people adapt to these learning environments and to what extent they engage in 2 choice strategies often contrasted as paradigmatic examples of striking violation of versus nominal adherence to rational choice: diversification and persistent probability maximizing, respectively. Results show that decisions approached adaptive choice diversification and persistence when sufficient feedback was provided on the dynamic rules of the probabilistic environments. The findings of divergent behavior in the 2 environments indicate that diversified choices represented a response to the reward retention manipulation rather than to the mere variability of outcome probabilities. Choice in both environments was well accounted for by the generalized matching law, and computational modeling-based strategy analyses indicated that adaptive choice arose mainly from reliance on reinforcement learning strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
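The generalized matching law the abstract invokes relates log choice ratios linearly to log reinforcement ratios: log(B1/B2) = s * log(R1/R2) + log(b), with sensitivity s and bias b. A least-squares fit of these two parameters can be sketched as follows (an illustrative fit, not the study's modelling code):

```python
import math

def fit_generalized_matching(choice_ratios, reward_ratios):
    """Fit log(B1/B2) = s * log(R1/R2) + log(b) by ordinary least squares.

    Returns (s, b). s = 1 and b = 1 correspond to strict matching;
    s < 1 is the commonly observed undermatching.
    """
    xs = [math.log(r) for r in reward_ratios]
    ys = [math.log(c) for c in choice_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    s = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    log_b = my - s * mx
    return s, math.exp(log_b)
```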

  16. Nomogram predicting response after chemoradiotherapy in rectal cancer using sequential PET-CT imaging: a multicentric prospective study with external validation.

    PubMed

    van Stiphout, Ruud G P M; Valentini, Vincenzo; Buijsen, Jeroen; Lammering, Guido; Meldolesi, Elisa; van Soest, Johan; Leccisotti, Lucia; Giordano, Alessandro; Gambacorta, Maria A; Dekker, Andre; Lambin, Philippe

    2014-11-01

    To develop and externally validate a predictive model for pathologic complete response (pCR) for locally advanced rectal cancer (LARC) based on clinical features and early sequential (18)F-FDG PET-CT imaging. Prospective data (i.a. THUNDER trial) were used to train (N=112, MAASTRO Clinic) and validate (N=78, Università Cattolica del S. Cuore) the model for pCR (ypT0N0). All patients received long-course chemoradiotherapy (CRT) and surgery. Clinical parameters were age, gender, clinical tumour (cT) stage and clinical nodal (cN) stage. PET parameters were SUVmax, SUVmean, metabolic tumour volume (MTV) and maximal tumour diameter, for which response indices between the pre-treatment and intermediate scan were calculated. Using multivariate logistic regression, three probability groups for pCR were defined. The pCR rates were 21.4% (training) and 23.1% (validation). The selected predictive features for pCR were cT-stage, cN-stage, response index of SUVmean and maximal tumour diameter during treatment. The models' performances (AUC) were 0.78 (training) and 0.70 (validation). The high probability group for pCR resulted in 100% correct predictions for training and 67% for validation. The model is available on the website www.predictcancer.org. The developed predictive model for pCR is accurate and externally validated. This model may assist in treatment decisions during CRT to select complete responders for a wait-and-see policy, good responders for an extra RT boost and bad responders for additional chemotherapy. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  17. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted based on an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months during the simulation length of 70 years. Incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time of patients with AS. Results obtained from the simulation are plausible.
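The mechanics of a discrete event simulation for sequential treatment can be sketched with an event queue: a patient moves through a drug sequence as failure events fire at evaluation times. This toy model is only an illustration of the paradigm — the drug names, probabilities and evaluation schedule are invented, and the paper's model additionally tracks BASDAI/BASFI, utilities and costs.

```python
import heapq
import random

def simulate_patient(drug_sequence, response_prob, eval_interval=0.25,
                     horizon=70.0, rng=random.Random(0)):
    """Minimal discrete event simulation of sequential drug switching.

    The patient starts on the first drug; at each evaluation event the drug
    either keeps working (with probability response_prob) or fails,
    triggering a switch to the next drug in the sequence.
    Returns the history of (time, drug) assignments.
    """
    events = [(eval_interval, "evaluate")]  # (time, event type) priority queue
    current = 0
    history = [(0.0, drug_sequence[0])]
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "evaluate":
            failed = rng.random() > response_prob[current]
            if failed and current + 1 < len(drug_sequence):
                current += 1  # treatment failure: switch to the next drug
                history.append((t, drug_sequence[current]))
            heapq.heappush(events, (t + eval_interval, "evaluate"))
    return history

hist = simulate_patient(["NSAID-1", "NSAID-2", "TNFi"], [0.7, 0.7, 0.9])
```

A full model would push additional event types (cost accrual, utility updates, death) onto the same queue.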

  18. Historical demography of common carp estimated from individuals collected from various parts of the world using the pairwise sequentially Markovian coalescent approach.

    PubMed

    Yuan, Zihao; Huang, Wei; Liu, Shikai; Xu, Peng; Dunham, Rex; Liu, Zhanjiang

    2018-04-01

    The inference of the historical demography of a species is helpful for understanding species differentiation and population dynamics. However, such inference has previously been difficult due to the lack of proper analytical methods and the availability of genetic data. A recently developed method called Pairwise Sequentially Markovian Coalescent (PSMC) offers the capability to estimate the trajectories of historical populations over considerable time periods using genomic sequences. In this study, we applied this approach to infer the historical demography of the common carp using samples collected from Europe, Asia and the Americas. Comparison between Asian and European common carp populations showed that the last glacial period starting 100 ka BP likely caused a significant decline in population size of the wild common carp in Europe, while it did not have much of an impact on its counterparts in Asia. This was probably caused by differences in glacial activity in East Asia and Europe, and suggests a separation of the European and Asian clades before the last glacial maximum. The North American clade, an invasive population, shared a similar demographic history with those from Europe, consistent with the idea that the North American common carp probably had European ancestral origins. Our analysis represents the first reconstruction of the historical population demography of the common carp, which is important for elucidating the separation of the European and Asian common carp clades during the Quaternary glaciation, as well as the dispersal of common carp across the world.

  19. NTCP reduction for advanced head and neck cancer patients using proton therapy for complete or sequential boost treatment versus photon therapy.

    PubMed

    Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Löck, Steffen; Haase, Robert; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniel; Perez, Damien; Lühr, Armin; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Perrin, Rosalind; Richter, Christian

    2015-01-01

    To determine, by treatment plan comparison, differences in toxicity risk reduction for patients with head and neck squamous cell carcinoma (HNSCC) from proton therapy used either for the complete treatment or for the sequential boost only. For 45 HNSCC patients, intensity-modulated photon (IMXT) and proton (IMPT) treatment plans were created, including a dose escalation via simultaneous integrated boost with a one-step adaptation strategy after 25 fractions for sequential boost treatment. Dose accumulation was performed for pure IMXT treatment, pure IMPT treatment and for a mixed modality treatment with IMXT for the elective target followed by a sequential boost with IMPT. Treatment plan evaluation was based on modern normal tissue complication probability (NTCP) models for mucositis, xerostomia, aspiration, dysphagia, larynx edema and trismus. Individual NTCP differences between IMXT and IMPT (ΔNTCP(IMXT-IMPT)) as well as between IMXT and the mixed modality treatment (ΔNTCP(IMXT-Mix)) were calculated. Target coverage was similar in all three scenarios. NTCP values could be reduced in all patients using IMPT treatment. However, ΔNTCP(IMXT-Mix) values were a factor of 2-10 smaller than ΔNTCP(IMXT-IMPT). Assuming a threshold of ≥ 10% NTCP reduction in xerostomia or dysphagia risk as the criterion for patient assignment to IMPT, less than 15% of the patients would be selected for a proton boost, while about 50% would be assigned to pure IMPT treatment. For mucositis and trismus, ΔNTCP ≥ 10% occurred in six and four patients, respectively, with pure IMPT treatment, while no such difference was identified with the proton boost. The use of IMPT generally reduces the expected toxicity risk while maintaining good tumor coverage in the examined HNSCC patients. A mixed modality treatment using IMPT solely for a sequential boost reduces the risk by 10% only in rare cases. In contrast, pure IMPT treatment may be reasonable for about half of the examined patient cohort considering the toxicities xerostomia and dysphagia, if a feasible strategy for patient anatomy changes is implemented.
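The model-based selection rule described — assign a patient to protons when the predicted NTCP reduction for some toxicity reaches the 10% threshold — can be sketched as below. The toxicity names and probabilities in the example are illustrative, not values from the study.

```python
def assign_modality(ntcp_photon, ntcp_proton, threshold=0.10):
    """Model-based patient selection by per-toxicity NTCP comparison.

    ntcp_photon / ntcp_proton map toxicity name -> predicted complication
    probability under each modality. Returns ("proton" | "photon", deltas),
    choosing protons if any toxicity risk drops by at least `threshold`.
    """
    delta = {tox: ntcp_photon[tox] - ntcp_proton[tox] for tox in ntcp_photon}
    modality = ("proton" if any(d >= threshold for d in delta.values())
                else "photon")
    return modality, delta
```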

  20. Altered Fermentation Performances, Growth, and Metabolic Footprints Reveal Competition for Nutrients between Yeast Species Inoculated in Synthetic Grape Juice-Like Medium.

    PubMed

    Rollero, Stephanie; Bloem, Audrey; Ortiz-Julien, Anne; Camarasa, Carole; Divol, Benoit

    2018-01-01

    The sequential inoculation of non-Saccharomyces yeasts and Saccharomyces cerevisiae in grape juice is becoming an increasingly popular practice to diversify wine styles and/or to obtain more complex wines with a peculiar microbial footprint. One of the main interactions is competition for nutrients, especially nitrogen sources, which directly impacts not only fermentation performance but also the production of aroma compounds. In order to better understand the interactions taking place between non-Saccharomyces yeasts and S. cerevisiae during alcoholic fermentation, sequential inoculations of three yeast species (Pichia burtonii, Kluyveromyces marxianus, Zygoascus meyerae) with S. cerevisiae were performed individually in a synthetic medium. Different species-dependent interactions were evidenced. Indeed, the three sequential inoculations resulted in three different behaviors in terms of growth. P. burtonii and Z. meyerae declined after the inoculation of S. cerevisiae, which promptly outcompeted the other two species. However, while the presence of P. burtonii did not impact the fermentation kinetics of S. cerevisiae, that of Z. meyerae rendered the overall kinetics very slow and with no clear exponential phase. K. marxianus and S. cerevisiae both declined and became undetectable before fermentation completion. The results also demonstrated that yeasts differed in their preference for nitrogen sources. Unlike Z. meyerae and P. burtonii, K. marxianus appeared to be a competitor for S. cerevisiae (as evidenced by the uptake of ammonium and amino acids), thereby explaining the resulting stuck fermentation. Nevertheless, the results suggested that competition for other nutrients (probably vitamins) occurred during the sequential inoculation of Z. meyerae with S. cerevisiae. The metabolic footprint of the non-Saccharomyces yeasts determined after 48 h of fermentation remained until the end of fermentation and combined with that of S. cerevisiae. For instance, fermentations performed with K. marxianus were characterized by the formation of phenylethanol and phenylethyl acetate, while those performed with P. burtonii or Z. meyerae displayed higher production of isoamyl alcohol and ethyl esters. When considering sequential inoculation of yeasts, the nutritional requirements of the yeasts used should be carefully considered and adjusted accordingly. Finally, our chemical data suggest that the organoleptic properties of the wine are altered in a species-specific manner.

  1. If Only my Leader Would just Do Something! Passive Leadership Undermines Employee Well-being Through Role Stressors and Psychological Resource Depletion.

    PubMed

    Barling, Julian; Frone, Michael R

    2017-08-01

    The goal of this study was to develop and test a sequential mediational model explaining the negative relationship of passive leadership to employee well-being. Based on role stress theory, we posit that passive leadership will predict higher levels of role ambiguity, role conflict and role overload. Invoking Conservation of Resources theory, we further hypothesize that these role stressors will indirectly and negatively influence two aspects of employee well-being, namely overall mental health and overall work attitude, through psychological work fatigue. Using a probability sample of 2467 US workers, structural equation modelling supported the model by showing that role stressors and psychological work fatigue partially mediated the negative relationship between passive leadership and both aspects of employee well-being. The hypothesized, sequential indirect relationships explained 47.9% of the overall relationship between passive leadership and mental health and 26.6% of the overall relationship between passive leadership and overall work attitude. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    NASA Astrophysics Data System (ADS)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
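The core idea — run a particle filter with inflated artificial measurement noise and shrink that noise toward the true model — can be sketched as follows. This is a bootstrap filter with a fixed variance schedule, not the authors' adaptive SMC-sampler construction; the model, names and defaults are illustrative.

```python
import numpy as np

def pf_loglik(y, f, g, q, r_art, n_particles=500, rng=None):
    """Bootstrap particle filter estimate of log p(y | theta) for the model
    x[t+1] = f(x[t]) + N(0, q),  y[t] = g(x[t]) + N(0, r_art),
    where r_art is the (artificially inflated) measurement noise variance."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud
    ll = 0.0
    for yt in y:
        x = f(x) + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        # Gaussian log-weights; log-sum-exp for numerical stability
        logw = -0.5 * (yt - g(x)) ** 2 / r_art - 0.5 * np.log(2 * np.pi * r_art)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample
    return ll

def tempered_logliks(y, f, g, q, r_schedule):
    """Tempering: evaluate the likelihood as the artificial measurement
    noise variance is shrunk toward the (near-noiseless) target."""
    return [pf_loglik(y, f, g, q, r) for r in r_schedule]
```

With little artificial noise the weights degenerate, which is exactly why the schedule starts wide and narrows gradually.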

  3. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g. release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.

  4. Concerted vs. Sequential. Two Activation Patterns of Vast Arrays of Intracellular Ca2+ Channels in Muscle

    PubMed Central

    Zhou, Jinsong; Brum, Gustavo; González, Adom; Launikonis, Bradley S.; Stern, Michael D.; Ríos, Eduardo

    2005-01-01

    To signal cell responses, Ca2+ is released from storage through intracellular Ca2+ channels. Unlike most plasmalemmal channels, these are clustered in quasi-crystalline arrays, which should endow them with unique properties. Two distinct patterns of local activation of Ca2+ release were revealed in images of Ca2+ sparks in permeabilized cells of amphibian muscle. In the presence of sulfate, an anion that enters the SR and precipitates Ca2+, sparks became wider than in the conventional, glutamate-based solution. Some of these were “protoplatykurtic” (had a flat top from early on), suggesting an extensive array of channels that activate simultaneously. Under these conditions the rate of production of signal mass was roughly constant during the rise time of the spark and could be as high as 5 μm3 ms−1, consistent with a release current >50 pA since the beginning of the event. This pattern, called “concerted activation,” was observed also in rat muscle fibers. When sulfate was combined with a reduced cytosolic [Ca2+] (50 nM) these sparks coexisted (and interfered) with a sequential progression of channel opening, probably mediated by Ca2+-induced Ca2+ release (CICR). Sequential propagation, observed only in frogs, may require parajunctional channels, of RyR isoform β, which are absent in the rat. Concerted opening instead appears to be a property of RyR α in the amphibian and the homologous isoform 1 in the mammal. PMID:16186560

  5. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
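Stripped of the detection and occlusion terms the paper integrates, the EM-like mean-shift iteration at the heart of such trackers is a kernel-weighted-mean fixed point. A one-dimensional sketch (all names and the Gaussian kernel choice are illustrative):

```python
import math

def mean_shift_mode(points, start, bandwidth=1.0, iters=50):
    """Plain mean-shift: repeatedly move the estimate to the weighted mean
    of the data, with Gaussian kernel weights centred at the current
    estimate. Converges to a local mode of the kernel density."""
    x = start
    for _ in range(iters):
        w = [math.exp(-0.5 * ((p - x) / bandwidth) ** 2) for p in points]
        x = sum(wi * p for wi, p in zip(w, points)) / sum(w)
    return x
```

In the tracker, the weights would additionally fold in the likelihoods of the stereo and HOG detections, and objects would be processed sequentially in priority order.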

  6. Adaptive sequential Bayesian classification using Page's test

    NASA Astrophysics Data System (ADS)

    Lynch, Robert S., Jr.; Willett, Peter K.

    2002-03-01

    In this paper, the previously introduced Mean-Field Bayesian Data Reduction Algorithm is extended for adaptive sequential hypothesis testing utilizing Page's test. In general, Page's test is well understood as a method of detecting a permanent change in distribution associated with a sequence of observations. However, the relationship between detecting a change in distribution utilizing Page's test and that of classification and feature fusion is not well understood. Thus, the contribution of this work is based on developing a method of classifying an unlabeled vector of fused features (i.e., detecting a change to an active statistical state) as quickly as possible given an acceptable mean time between false alerts. In this case, the developed classification test can be thought of as equivalent to performing a sequential probability ratio test repeatedly until a class is decided, with the lower log-threshold of each test being set to zero and the upper log-threshold being determined by the expected distance between false alerts. It is of interest to estimate the delay (or, related stopping time) to a classification decision (the number of time samples it takes to classify the target), and the mean time between false alerts, as a function of feature selection and fusion by the Mean-Field Bayesian Data Reduction Algorithm. Results are demonstrated by plotting the delay to declaring the target class versus the mean time between false alerts, and are shown using both different numbers of simulated training data and different numbers of relevant features for each class.
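The repeated-SPRT structure described here — lower log-threshold at zero, upper threshold set by the acceptable mean time between false alerts — is exactly Page's CUSUM recursion. A minimal sketch (the Gaussian mean-shift LLR is an illustrative choice, not the paper's feature model):

```python
def page_test(samples, llr, h):
    """Page's CUSUM test: accumulate log-likelihood ratios, reset at 0,
    and alarm when the statistic first exceeds the upper threshold h.

    llr(x) = log p1(x)/p0(x). Returns the 1-based alarm time, or None.
    Equivalent to running SPRTs back to back with lower log-threshold 0
    and upper log-threshold h.
    """
    s = 0.0
    for t, x in enumerate(samples, start=1):
        s = max(0.0, s + llr(x))
        if s >= h:
            return t
    return None

# Example LLR: detecting a mean shift from 0 to 1 in unit-variance
# Gaussian observations, for which llr(x) = x - 0.5.
gauss_llr = lambda x: x - 0.5
```

Raising h lengthens the mean time between false alerts at the cost of a longer detection delay, which is the trade-off the paper's delay-versus-false-alert curves trace out.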

  7. Reductions in alcohol and cocaine use following a group coping intervention for HIV-positive adults with childhood sexual abuse histories.

    PubMed

    Meade, Christina S; Drabkin, Anya S; Hansen, Nathan B; Wilson, Patrick A; Kochman, Arlene; Sikkema, Kathleen J

    2010-11-01

    Few interventions exist to reduce alcohol and non-injection drug use among people living with HIV/AIDS. This study tested the effects of a coping group intervention for HIV-positive adults with childhood sexual abuse histories on alcohol, cocaine and marijuana use. Participants were assigned randomly to the experimental coping group or a time-matched comparison support group. Both interventions were delivered in a group format over 15 weekly 90-minute sessions. A diverse sample of 247 HIV-positive men and women with childhood sexual abuse were recruited from AIDS service organizations and community health centers in New York City. Substance use was assessed pre- and post-intervention and every 4 months during a 12-month follow-up period. Using an intent-to-treat analysis, longitudinal changes in substance use by condition were assessed using generalized estimating equations. At baseline, 42% of participants drank alcohol, 26% used cocaine and 26% used marijuana. Relative to participants in the support group, those in the coping group had greater reductions in quantity of alcohol use (Wald χ²(4) = 10.77, P = 0.029) and any cocaine use (Wald χ²(4) = 9.81, P = 0.044) over time. Many HIV patients, particularly those with childhood sexual abuse histories, continue to abuse substances. This group intervention that addressed coping with HIV and sexual trauma was effective in reducing alcohol and cocaine use, with effects sustained at 12-month follow-up. Integrating mental health treatment into HIV prevention may improve outcomes. © 2010 The Authors, Addiction © 2010 Society for the Study of Addiction.

  8. Extension of the Peters–Belson method to estimate health disparities among multiple groups using logistic regression with survey data

    PubMed Central

    Li, Y.; Graubard, B. I.; Huang, P.; Gastwirth, J. L.

    2015-01-01

    Determining the extent of a disparity, if any, between groups of people, for example, race or gender, is of interest in many fields, including public health for medical treatment and prevention of disease. An observed difference in the mean outcome between an advantaged group (AG) and disadvantaged group (DG) can be due to differences in the distribution of relevant covariates. The Peters–Belson (PB) method fits a regression model with covariates to the AG to predict, for each DG member, their outcome measure as if they had been from the AG. The difference between the mean predicted and the mean observed outcomes of DG members is the (unexplained) disparity of interest. We focus on applying the PB method to estimate the disparity based on binary/multinomial/proportional odds logistic regression models using data collected from complex surveys with more than one DG. Estimators of the unexplained disparity, an analytic variance–covariance estimator that is based on the Taylor linearization variance–covariance estimation method, as well as a Wald test for testing a joint null hypothesis of zero for unexplained disparities between two or more minority groups and a majority group, are provided. Simulation studies with data selected from simple random sampling and cluster sampling, as well as the analyses of disparity in body mass index in the National Health and Nutrition Examination Survey 1999–2004, are conducted. Empirical results indicate that the Taylor linearization variance–covariance estimation is accurate and that the proposed Wald test maintains the nominal level. PMID:25382235
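The joint Wald test used here compares the vector of estimated unexplained disparities d against its estimated covariance matrix V via W = d' V^{-1} d, which is asymptotically chi-squared with one degree of freedom per disparity under the null of zero disparities. A sketch (the numbers in the usage example are illustrative, not estimates from the survey analysis):

```python
import numpy as np

def wald_statistic(estimates, cov):
    """Joint Wald statistic W = d' V^{-1} d for H0: all disparities are 0.

    estimates: vector of estimated unexplained disparities, one per
    disadvantaged group; cov: their estimated covariance matrix
    (e.g. from Taylor linearization). Under H0, W ~ chi-squared(k)."""
    d = np.asarray(estimates, dtype=float)
    v = np.asarray(cov, dtype=float)
    return float(d @ np.linalg.solve(v, d))

# Two hypothetical disparities with their covariance matrix:
w = wald_statistic([0.12, 0.08], [[0.0016, 0.0002], [0.0002, 0.0025]])
# Compare w to the chi-squared(2) critical value 5.99 at the 5% level.
```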

  9. Intraindividual variability in reaction time predicts cognitive outcomes 5 years later.

    PubMed

    Bielak, Allison A M; Hultsch, David F; Strauss, Esther; Macdonald, Stuart W S; Hunter, Michael A

    2010-11-01

    Building on results suggesting that intraindividual variability in reaction time (inconsistency) is highly sensitive to even subtle changes in cognitive ability, this study addressed the capacity of inconsistency to predict change in cognitive status (i.e., cognitive impairment, no dementia [CIND] classification) and attrition 5 years later. Two hundred twelve community-dwelling older adults, initially aged 64-92 years, remained in the study after 5 years. Inconsistency was calculated from baseline reaction time performance. Participants were assigned to groups on the basis of their fluctuations in CIND classification over time. Logistic and Cox regressions were used. Baseline inconsistency significantly distinguished among those who remained or transitioned into CIND over the 5 years and those who were consistently intact (e.g., stable intact vs. stable CIND, Wald (1) = 7.91, p < .01, Exp(β) = 1.49). Average level of inconsistency over time was also predictive of study attrition, for example, Wald (1) = 11.31, p < .01, Exp(β) = 1.24. For both outcomes, greater inconsistency was associated with a greater likelihood of being in a maladaptive group 5 years later. Variability based on moderately cognitively challenging tasks appeared to be particularly sensitive to longitudinal changes in cognitive ability. Mean rate of responding was a comparable predictor of change in most instances, but individuals were at greater relative risk of being in a maladaptive outcome group if they were more inconsistent rather than if they were slower in responding. Implications for the potential utility of intraindividual variability in reaction time as an early marker of cognitive decline are discussed. (c) 2010 APA, all rights reserved
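
Inconsistency in this literature is typically operationalized as an intraindividual standard deviation (ISD) of reaction times across trials, usually after residualizing out practice and trial effects. A minimal sketch of the raw computation (hypothetical data, not the study's preprocessing pipeline):

```python
import statistics

# Reaction times (ms) across trials for two hypothetical participants.
rt = {
    "participant_a": [520, 515, 530, 525, 518, 522],   # consistent responder
    "participant_b": [480, 610, 505, 700, 450, 640],   # highly variable responder
}

# Inconsistency as the intraindividual standard deviation across trials.
# (Published work usually removes systematic trial/practice effects first.)
isd = {pid: statistics.stdev(times) for pid, times in rt.items()}

for pid, value in sorted(isd.items()):
    print(pid, round(value, 1))
```

The participant with larger trial-to-trial fluctuations gets the larger inconsistency score, which is the quantity the study relates to later CIND status and attrition.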

  10. Reply to “Comment on ‘Ground motions from the 2015 Mw 7.8 Gorkha, Nepal, earthquake constrained by a detailed assessment of macroseismic data’ by Stacey S. Martin, Susan E. Hough, and Charleen Hung” by Andrea Tertulliani, Laura Graziani, Corrado Castellano, Alessandra Maramai, and Antonio Rossi

    USGS Publications Warehouse

    Martin, Stacey S.; Hough, Susan E.

    2016-01-01

    We thank Andrea Tertulliani and his colleagues for their interest in our article on the 2015 Gorkha earthquake (Martin, Hough, et al., 2015), and for their comments pertaining to our study (Tertulliani et al., 2016). Indeed, as they note, a comprehensive assessment of macroseismic effects for an earthquake with far‐reaching effects as that of Gorkha is not only critically important but is also an extremely difficult undertaking. In the absence of a widely known web‐based system, employing a well‐calibrated algorithm with which to collect and systematically assess macroseismic information (e.g., Wald et al., 1999; Coppola et al., 2010; Bossu et al., 2015) in the Indian subcontinent, one is left with two approaches to characterize effects of an event such as the Gorkha earthquake: a comprehensive ground‐based survey such as the one undertaken in India following the 2001 Bhuj earthquake (Pande and Kayal, 2003), or an assessment such as Martin, Hough, et al. (2015) akin to other contemporary studies (e.g., Nuttli, 1973; Sieh, 1978; Meltzner and Wald, 1998; Martin and Szeliga, 2010; Ambraseys and Bilham, 2012; Mahajan et al., 2012; Gupta et al., 2013; Singh et al., 2013; Hough and Martin, 2015; Martin and Hough, 2015; Martin, Bradley, et al., 2015; Ribeiro et al., 2015), based primarily upon media reports and other available documentary accounts.

  11. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
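
The Breslow/Efron distinction the authors analyze concerns how the Cox partial likelihood treats tied event times: Breslow reuses the full risk-set total for every tied event, while Efron progressively removes a fraction of the tied subjects' risk mass. A pure-Python sketch of the two log partial likelihoods on a toy data set (illustrative only, not the authors' code; the grid-search maximization is deliberately crude):

```python
import math

def log_partial_likelihood(beta, times, events, x, method="efron"):
    """Cox log partial likelihood for one covariate, handling tied
    event times with the Breslow or Efron approximation."""
    ll = 0.0
    for t in sorted(set(t_ for t_, e in zip(times, events) if e)):
        tied = [i for i, (t_, e) in enumerate(zip(times, events)) if e and t_ == t]
        risk = [i for i, t_ in enumerate(times) if t_ >= t]
        d = len(tied)
        s_risk = sum(math.exp(beta * x[i]) for i in risk)
        s_tied = sum(math.exp(beta * x[i]) for i in tied)
        ll += sum(beta * x[i] for i in tied)
        if method == "breslow":
            ll -= d * math.log(s_risk)  # reuse the full risk set d times
        else:  # Efron: remove a fraction of the tied mass at each step
            ll -= sum(math.log(s_risk - (j / d) * s_tied) for j in range(d))
    return ll

# Tiny illustrative data set with heavy ties at t = 1.
times  = [1, 1, 1, 2, 2, 3]
events = [1, 1, 1, 1, 0, 1]
x      = [1, 1, 0, 0, 1, 0]

# Maximize each log likelihood by a crude grid search over beta.
grid = [b / 100 for b in range(-300, 301)]
bh_breslow = max(grid, key=lambda b: log_partial_likelihood(b, times, events, x, "breslow"))
bh_efron   = max(grid, key=lambda b: log_partial_likelihood(b, times, events, x, "efron"))
print(bh_breslow, bh_efron)  # the two approximations typically disagree with heavy ties
```

With many tied events the two likelihood surfaces, and hence the estimated hazard ratios and their Wald statistics, can diverge substantially, which is the bias the abstract warns about for the Breslow default.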

  12. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    SUMMARY The sandwich estimator in generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With t-distribution approximation, the Kauermann and Carroll (KC)-correction can keep the test size to nominal levels even when the number of clusters is as low as 10, and is robust to the moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG)-correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC-correction for the CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that with appropriate control of type I error rates under small sample sizes, we recommend the use of GEE approach in CRTs with binary outcomes due to fewer assumptions and robustness to the misspecification of the covariance structure. PMID:25345738

  13. Cross-sectional survey of older patients' views regarding multidisciplinary care for chronic conditions in general practice.

    PubMed

    Bonney, Andrew; Magee, Christopher; Pearson, Russell

    2014-01-01

    The ageing population and increasing prevalence of chronic illness have contributed to the need for significant primary care reform, including increased use of multidisciplinary care and task substitution. This cross-sectional study explores conditions under which older patients would accept having health professionals other than their general practitioner (GP) involved in their care for chronic disease management (CDM). Ten practices were randomly sampled from a contiguous major city and inner regional area. Questionnaires were distributed to consecutive patients aged 60 years and over in each practice. Agency theory was used to inform analyses. Statistical analysis was undertaken using Wald's test, growth modelling and linear regression, controlling for the clustered design. The response rate was 53% (n=272). Most respondents (79%) had at least one chronic health condition. Respondents were more comfortable with GP than with practice nurse management in the CDM scenario (Wald's test=105.49, P<0.001). Comfort with practice nurse CDM was positively associated with increased contact with their GP at the time of the visit (β=0.41, P<0.001), negatively associated with the number of the respondent's chronic conditions (β=-0.13, P=0.030) and not associated with the frequency of other health professional visits. Agency theory suggests that patients employ continuity of care to optimise factors important in CDM: information symmetry and goal alignment. Our findings are consistent with the theory and lend support to ensuring that interpersonal continuity of care is not lost in health care reform. Further research exploring patients' acceptance of differing systems of care is required.

  14. Infrared observations of OB star formation in NGC 6334

    NASA Technical Reports Server (NTRS)

    Harvey, P. M.; Gatley, I.

    1982-01-01

    Infrared photometry and maps from 2 to 100 microns are presented for three of the principal far infrared sources in NGC 6334. Each region is powered by two or more very young stars. The distribution of dust and ionized gas is probably strongly affected by the presence of the embedded stars; one of the sources is a blister H II region, another has a bipolar structure, and the third exhibits asymmetric temperature structure. The presence of protostellar objects throughout the region suggests that star formation has occurred nearly simultaneously in the whole molecular cloud rather than having been triggered sequentially from within.

  15. Multivariate Analysis, Retrieval, and Storage System (MARS). Volume 1: MARS System and Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Vanderberg, J. D.; Woodbury, N. W.

    1974-01-01

    A method for rapidly examining the probable applicability of weight estimating formulae to a specific aerospace vehicle design is presented. The Multivariate Analysis Retrieval and Storage System (MARS) comprises three computer programs which sequentially operate on the weight and geometry characteristics of past aerospace vehicle designs. Weight and geometric characteristics are stored in a set of data bases which are fully computerized. Additional data bases are readily added to the MARS system and/or the existing data bases may be easily expanded to include additional vehicles or vehicle characteristics.

  16. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
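
The 90/95 criterion can be checked with a one-sided exact binomial lower confidence bound on the detection probability. A sketch for hit/miss data (a generic Clopper-Pearson-style bound computed by bisection, not DOEPOD itself):

```python
from math import comb

def binom_lower_bound(hits, n, conf=0.95, tol=1e-9):
    """Exact one-sided lower confidence bound on a binomial proportion:
    the largest p such that P(X >= hits | n, p) <= 1 - conf."""
    if hits == 0:
        return 0.0
    alpha = 1 - conf
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = (lo + hi) / 2
        tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))
        if tail < alpha:
            lo = p
        else:
            hi = p
    return lo

# Classic result: 29 hits out of 29 demonstrates 90% POD with 95% confidence,
# since for an all-hit sample the bound reduces to 0.05**(1/29) ≈ 0.902.
print(round(binom_lower_bound(29, 29), 3))  # ≈ 0.902
# A single miss (28 of 29) falls short of the 90/95 requirement.
print(round(binom_lower_bound(28, 29), 3))
```

This is why "29 of 29" sample sets recur in POD demonstration plans: it is the smallest all-hit sample whose exact lower bound clears 0.90 at 95% confidence.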

  17. Role of conviction in nonequilibrium models of opinion formation

    NASA Astrophysics Data System (ADS)

    Crokidakis, Nuno; Anteneodo, Celia

    2012-12-01

    We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent's opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).

  18. Modeling haul-out behavior of walruses in Bering Sea ice

    USGS Publications Warehouse

    Udevitz, M.S.; Jay, C.V.; Fischbach, Anthony S.; Garlich-Miller, J. L.

    2009-01-01

    Understanding haul-out behavior of ice-associated pinnipeds is essential for designing and interpreting population surveys and for assessing effects of potential changes in their ice environments. We used satellite-linked transmitters to obtain sequential information about location and haul-out state for Pacific walruses, Odobenus rosmarus divergens (Illiger, 1815), in the Bering Sea during April of 2004, 2005, and 2006. We used these data in a generalized mixed model of haul-out bout durations and a hierarchical Bayesian model of haul-out probabilities to assess factors related to walrus haul-out behavior, and provide the first predictive model of walrus haul-out behavior in sea ice habitat. Average haul-out bout duration was 9 h, but durations of haul-out bouts tended to increase with durations of preceding in-water bouts. On average, tagged walruses spent only about 17% of their time hauled out on sea ice. Probability of being hauled out decreased with wind speed, increased with temperature, and followed a diurnal cycle with the highest values in the evening. Our haul-out probability model can be used to estimate the proportion of the population that is unavailable for detection in spring surveys of Pacific walruses on sea ice.

  19. Critical role of bevacizumab scheduling in combination with pre-surgical chemo-radiotherapy in MRI-defined high-risk locally advanced rectal cancer: results of the branch trial

    PubMed Central

    Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R.; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo

    2015-01-01

    Background We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. Patients and methods This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant-schedule A) or 4 days prior to the first and second cycle of chemotherapy (sequential-schedule B). Primary endpoint was the pathological complete tumor regression (TRG1) rate. Results The accrual for the concomitant-schedule was terminated early because the number of TRG1 responses (2 out of 16 patients) was statistically inconsistent with the hypothesis of activity (30%) to be tested. Conversely, the endpoint was reached with the sequential-schedule and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%–65%). Neutropenia was the most common grade ≥3 toxicity with both schedules, but it was less pronounced with the sequential than with the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedule A and B, respectively. At 5-year follow-up the probability of PFS and OS was 80% (95% CI, 66%–89%) and 85% (95% CI, 69%–93%), respectively, for the sequential-schedule. Conclusions These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC. PMID:26320185

  20. The Evolution of Gene Regulatory Networks that Define Arthropod Body Plans.

    PubMed

    Auman, Tzach; Chipman, Ariel D

    2017-09-01

    Our understanding of the genetics of arthropod body plan development originally stems from work on Drosophila melanogaster from the late 1970s and onward. In Drosophila, there is a relatively detailed model for the network of gene interactions that proceeds in a sequential-hierarchical fashion to define the main features of the body plan. Over the years, we have a growing understanding of the networks involved in defining the body plan in an increasing number of arthropod species. It is now becoming possible to tease out the conserved aspects of these networks and to try to reconstruct their evolution. In this contribution, we focus on several key nodes of these networks, starting from early patterning in which the main axes are determined and the broad morphological domains of the embryo are defined, and on to later stage wherein the growth zone network is active in sequential addition of posterior segments. The pattern of conservation of networks is very patchy, with some key aspects being highly conserved in all arthropods and others being very labile. Many aspects of early axis patterning are highly conserved, as are some aspects of sequential segment generation. In contrast, regional patterning varies among different taxa, and some networks, such as the terminal patterning network, are only found in a limited range of taxa. The growth zone segmentation network is ancient and is probably plesiomorphic to all arthropods. In some insects, it has undergone significant modification to give rise to a more hardwired network that generates individual segments separately. In other insects and in most arthropods, the sequential segmentation network has undergone a significant amount of systems drift, wherein many of the genes have changed. However, it maintains a conserved underlying logic and function. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. 

  1. Nonvisualization of the ovaries on pelvic ultrasound: does MRI add anything?

    PubMed

    Lisanti, Christopher J; Wood, Jonathan R; Schwope, Ryan B

    2014-02-01

    The purpose of our study is to assess the utility of pelvic magnetic resonance imaging (MRI) in the event that either one or both ovaries are not visualized by pelvic ultrasound. This HIPAA-compliant retrospective study was approved by our local institutional review board and informed consent waived. 1926 pelvic MRI examinations between March 2007 and December 2011 were reviewed and included if a combined transabdominal and endovaginal pelvic ultrasound had been performed in the preceding 6 months with at least one ovary nonvisualized. Ovaries not visualized on pelvic ultrasound were assumed to be normal and compared with the pelvic MRI findings. MRI findings were categorized as concordant or discordant. Discordant findings were divided into malignant, non-malignant physiologic or non-malignant non-physiologic. The modified Wald, the "rule of thirds", and the binomial distribution probability tests were performed. 255 pelvic ultrasounds met inclusion criteria with 364 ovaries not visualized. 0 malignancies were detected on MRI. 6.9% (25/364) of nonvisualized ovaries had non-malignant discordant findings on MRI: 5.2% (19/364) physiologic, 1.6% (6/364) non-physiologic. Physiologic findings included: 16 functional cysts and 3 hemorrhagic cysts. Non-physiologic findings included: 3 cysts in post-menopausal women, 1 hydrosalpinx, and 2 broad ligament fibroids. The theoretical risk of detecting an ovarian carcinoma on pelvic MRI when an ovary is not visualized on ultrasound ranges from 0 to 1.3%. If an ovary is not visualized on pelvic ultrasound, it can be assumed to be without carcinoma and MRI rarely adds additional information.
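
The quoted 0–1.3% theoretical risk is in the range given by the classical rule of three (which the abstract calls the "rule of thirds"): when zero events are observed in n trials, an approximate 95% upper bound on the event probability is 3/n. A sketch, taking the study's 255 ultrasounds as an illustrative n and comparing against the exact zero-event bound (the authors' precise computation may differ):

```python
def rule_of_three_upper(n):
    """Approximate 95% upper bound on p when 0 events occur in n trials."""
    return 3 / n

def exact_upper(n, conf=0.95):
    """Exact upper bound for 0 events in n trials: solve (1-p)^n = 1-conf."""
    return 1 - (1 - conf) ** (1 / n)

n = 255  # ultrasound exams meeting inclusion criteria, used here for illustration
print(round(rule_of_three_upper(n), 4))  # ≈ 0.0118, i.e. about 1.2%
print(round(exact_upper(n), 4))
```

The exact bound is always slightly tighter than 3/n, and both land close to the 1.3% upper end of the range reported in the abstract.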

  2. Asymptotic formulae for likelihood-based tests of new physics

    NASA Astrophysics Data System (ADS)

    Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer

    2011-02-01

    We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.

  3. First law of entanglement entropy in topologically massive gravity

    NASA Astrophysics Data System (ADS)

    Cheng, Long; Hung, Ling-Yan; Liu, Si-Nong; Zhou, Hong-Zhe

    2016-09-01

    In this paper we explore the validity of the first law of entanglement entropy in the context of topologically massive gravity (TMG). We find that the variation of the holographic entanglement entropy under perturbation from the pure anti-de Sitter background satisfies the first law upon imposing the bulk equations of motion in a given time slice, despite the appearance of instabilities in the bulk for generic gravitational Chern-Simons coupling μ . The Noether-Wald entropy is different from the holographic entanglement entropy in a general boosted frame. However, this discrepancy does not affect the entanglement first law.

  4. The evolution of hospice in America: nursing's role in the movement.

    PubMed

    Hoffmann, Rosemary L

    2005-07-01

    In the current society, many individuals fear death and the feelings of suffering and loneliness that often accompany death. Two visionaries in the United States, Florence Wald and Dr. Elisabeth Kubler-Ross, recognized these fears and planned the nation's first hospice movement in the 1970s. The hospice philosophy continues to prosper in the new millennium. In this article, the founding American hospice's philosophy, types of facilities, standards, health team composition, patient demographics, organizations, reimbursement, and research are compared and contrasted with those of the current hospice movement. Existing issues with the modern movement are also discussed.

  5. Allowing for population stratification in case-only studies of gene-environment interaction, using genomic control.

    PubMed

    Yadav, Pankaj; Freitag-Wolf, Sandra; Lieb, Wolfgang; Dempfle, Astrid; Krawczak, Michael

    2015-10-01

    Gene-environment interactions (G × E) have attracted considerable research interest in the past owing to their scientific and public health implications, but powerful statistical methods are required to successfully track down G × E, particularly at a genome-wide level. Previously, a case-only (CO) design has been proposed as a means to identify G × E with greater efficiency than traditional case-control or cohort studies. However, as with genotype-phenotype association studies themselves, hidden population stratification (PS) can impact the validity of G × E studies using a CO design. Since this problem has been subject to little research to date, we used comprehensive simulation to systematically assess the type I error rate, power and effect size bias of CO studies of G × E in the presence of PS. Three types of PS were considered, namely genetic-only (PSG), environment-only (PSE), and joint genetic and environmental stratification (PSGE). Our results reveal that the type I error rate of an unadjusted Wald test, appropriate for the CO design, would be close to its nominal level (0.05 in our study) as long as PS involves only one interaction partner (i.e., either PSG or PSE). In contrast, if the study population is stratified with respect to both G and E (i.e., if there is PSGE), then the type I error rate is seriously inflated and estimates of the underlying G × E interaction are biased. Comparison of CO to a family-based case-parents design confirmed that the latter is more robust against PSGE, as expected. However, case-parent trios may be particularly unsuitable for G × E studies in view of the fact that they require genotype data from parents and that many diseases with an environmental component are likely to be of late onset. An alternative approach to adjusting for PS is principal component analysis (PCA), which has been widely used for this very purpose in past genome-wide association studies (GWAS). 
However, resolving genetic PS properly by PCA requires genetic data at the population level, the availability of which would conflict with the basic idea of the CO design. Therefore, we explored three modified Wald test statistics, inspired by the genomic control (GC) approach to GWAS, as an alternative means to allow for PSGE. The modified statistics were benchmarked against a stratified Wald test assuming known population affiliation, which should provide maximum power under PS. Our results confirm that GC is capable of successfully and efficiently correcting the PS-induced inflation of the type I error rate in CO studies of G × E.
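
The genomic-control idea the modified Wald statistics borrow from can be sketched generically: compare the median of the observed 1-df χ² statistics to the median of a χ²₁ distribution (≈ 0.4549), and deflate every statistic by the resulting inflation factor λ. A minimal sketch with simulated statistics (generic GC on simulated data, not the authors' modified tests):

```python
import random

random.seed(1)

CHI2_1_MEDIAN = 0.45494  # median of the chi-squared distribution with 1 df

# Simulate genome-wide 1-df Wald statistics inflated by stratification:
# a chi-squared(1) draw is the square of a standard normal.
inflation_truth = 1.6
stats = [inflation_truth * random.gauss(0, 1) ** 2 for _ in range(20000)]

# Genomic control: estimate lambda from the median, then deflate everything.
lam = sorted(stats)[len(stats) // 2] / CHI2_1_MEDIAN
corrected = [s / lam for s in stats]

print(round(lam, 2))  # close to the simulated inflation factor of 1.6
print(round(sorted(corrected)[len(corrected) // 2], 3))  # median restored to ≈ 0.455
```

Using the median makes λ robust to the handful of true signals in the tail, which is why GC can be estimated from the test statistics alone, without the population-level genetic data that PCA would require.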

  6. Enhancing Doctors’ Competencies in Communication With and Activation of Older Patients: The Promoting Active Aging (PRACTA) Computer-Based Intervention Study

    PubMed Central

    Chylińska, Joanna; Lazarewicz, Magdalena; Rzadkiewicz, Marta; Jaworski, Mariusz; Adamus, Miroslawa; Haugan, Gørill; Lillefjell, Monica; Espnes, Geir Arild

    2017-01-01

    Background Demographic changes over the past decades call for the promotion of health and disease prevention for older patients, as well as strategies to enhance their independence, productivity, and quality of life. Objective Our objective was to examine the effects of a computer-based educational intervention designed for general practitioners (GPs) to promote active aging. Methods The Promoting Active Aging (PRACTA) study consisted of a baseline questionnaire, implementation of an intervention, and a follow-up questionnaire that was administered 1 month after the intervention. A total of 151 primary care facilities (response rate 151/767, 19.7%) and 503 GPs (response rate 503/996, 50.5%) agreed to participate in the baseline assessment. At the follow-up, 393 GPs filled in the questionnaires (response rate, 393/503, 78.1%), but not all of them took part in the intervention. The final study group of 225 GPs participated in 3 study conditions: e-learning (knowledge plus skills modelling, n=42), a pdf article (knowledge only, n=89), and control (no intervention, n=94). We measured the outcome as scores on the Patients Expectations Scale, Communication Scale, Attitude Toward Treatment and Health Scale, and Self-Efficacy Scale. Results GPs participating in e-learning demonstrated a significant rise in their perception of older patients’ expectations for disease explanation (Wald χ2=19.7, P<.001) and in perception of motivational aspect of older patients’ attitude toward treatment and health (Wald χ2=8.9, P=.03) in comparison with both the control and pdf article groups. We observed additional between-group differences at the level of statistical trend. GPs participating in the pdf article intervention demonstrated a decline in self-assessed communication, both at the level of global scoring (Wald χ2=34.5, P<.001) and at the level of 20 of 26 specific behaviors (all P<.05). 
Factors moderating the effects of the intervention were the number of patients per GP and the facility’s organizational structure. Conclusions Both methods were suitable, but in different areas and under different conditions. The key benefit of the pdf article intervention was raising doctors’ reflection on limitations in their communication skills, whereas e-learning was more effective in changing their perception of older patients’ proactive attitude, especially among GPs working in privately owned facilities and having a greater number of assigned patients. Although we did not achieve all expected effects of the PRACTA intervention, both its forms seem promising in terms of enhancing the competencies of doctors in communication with and activation of older patients. PMID:28228370

  7. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
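
For known urns the dissimilarity parameter itself has a simple closed form: the probability that one draw from urn P is not observed in m draws from urn Q is Σᵢ pᵢ(1−qᵢ)^m. A Monte Carlo sketch checking that identity (illustrative distributions; this is the target quantity, not the paper's U-statistic estimator for sampled data):

```python
import random

random.seed(7)

p = {"A": 0.5, "B": 0.3, "C": 0.2}            # urn sampled once
q = {"A": 0.6, "B": 0.1, "C": 0.1, "D": 0.2}  # urn sampled m times
m = 3

# Closed form: sum_i p_i * (1 - q_i)^m  (q_i = 0 for symbols absent from q).
exact = sum(pi * (1 - q.get(sym, 0.0)) ** m for sym, pi in p.items())

# Monte Carlo check of the same quantity.
syms_p, wts_p = zip(*p.items())
syms_q, wts_q = zip(*q.items())
trials, hits = 200000, 0
for _ in range(trials):
    draw = random.choices(syms_p, weights=wts_p)[0]
    sample = random.choices(syms_q, weights=wts_q, k=m)
    hits += draw not in sample
print(round(exact, 3), round(hits / trials, 3))  # the two agree within MC error
```

The estimation problem the paper solves is the harder one where p and q are unknown and must be inferred from finite samples of each urn.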

  8. Probabilistic assessment of safe groundwater utilization in farmed fish ponds of blackfoot disease hyperendemic areas in terms of the regulation of arsenic concentrations.

    PubMed

    Jang, Cheng-Shin

    2008-03-15

This work probabilistically explored a safe utilization ratio (UR) of groundwater in fish ponds located in blackfoot disease hyperendemic areas in terms of the regulation of arsenic (As) concentrations. Sequential indicator simulation was used to reproduce As concentrations in groundwater and to propagate their uncertainty. Corresponding URs of groundwater were obtained from the mass-balance relationship between reproduced As concentrations in groundwater and the As regulation in farmed fish ponds. Three levels were adopted to evaluate the UR: UR ≥ 0.5, 0.5 > UR ≥ 0.1 and UR < 0.1. The high probability of the UR ≥ 0.5 level occurs in the northern and southern regions, where groundwater can be a major water source. The high probability of the 0.5 > UR ≥ 0.1 level is mainly distributed in the central-coastal, central-eastern and southeastern regions, where groundwater should be considered a subordinate water source. Where available, extra surface water should be given priority in meeting the aquacultural needs of the regions with high probability of the UR ≥ 0.5 and 0.5 > UR ≥ 0.1 levels. In the regions with high probability of the UR < 0.1 level, namely the central-coastal and southwestern regions, groundwater utilization should be reduced substantially or even prohibited completely to avoid adverse effects on human health.

  9. A Search Model for Imperfectly Detected Targets

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert

    2012-01-01

Under the assumptions that 1) the search region can be divided up into N non-overlapping sub-regions that are searched sequentially, 2) the probability of detection is unity if a sub-region is selected, and 3) no information is available to guide the search, there are two extreme case models. The search can be done perfectly, leading to a uniform distribution over the number of searches required, or the search can be done with no memory, leading to a geometric distribution for the number of searches required with a success probability of 1/N. If the probability of detection P is less than unity, but the search is done otherwise perfectly, the searcher will have to search the N regions repeatedly until detection occurs. The number of searches is thus the sum of two random variables. One is N times the number of full searches (a geometric distribution with success probability P) and the other is the uniform distribution over the integers 1 to N. The first three moments of this distribution were computed, giving the mean, standard deviation, and the kurtosis of the distribution as a function of the two parameters. The model was fit to the data presented last year (Ahumada, Billington, & Kaiwi) on the searches required to find a single-pixel target on a simulated horizon. The model gave a good fit to the three moments for all three observers.
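    The two-component structure described above is easy to check by simulation: the simulated mean should match the closed form N(1 − P)/P + (N + 1)/2 implied by the geometric (failed full sweeps) and uniform (final partial sweep) components. A sketch under the stated assumptions (names hypothetical):

    ```python
    import random

    def searches_until_detection(n_regions, p_detect, rng):
        """One trial: each full sweep fails independently with probability
        1 - P (geometric count of failed sweeps), then the final,
        successful sweep stops at a uniform position 1..N."""
        failed_sweeps = 0
        while rng.random() >= p_detect:
            failed_sweeps += 1
        return n_regions * failed_sweeps + rng.randint(1, n_regions)

    def expected_searches(n_regions, p_detect):
        # N * E[failed sweeps] + E[uniform on 1..N]
        return n_regions * (1 - p_detect) / p_detect + (n_regions + 1) / 2

    rng = random.Random(0)
    trials = [searches_until_detection(10, 0.5, rng) for _ in range(200_000)]
    mean = sum(trials) / len(trials)
    ```

    For N = 10 and P = 0.5 the closed form gives 10 · (0.5/0.5) + 5.5 = 15.5, and the simulated mean lands close to it; with P = 1 the model collapses back to the uniform case with mean (N + 1)/2.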

  10. Development of a site fidelity index based on population capture-recapture data

    PubMed Central

    Ferrari, Mariano A.; Crespo, Enrique A.; Coscarella, Mariano A.

    2018-01-01

    Background Site fidelity is considered as an animal’s tendency to return to a previously occupied place; this is a component of animal behaviour that allows us to understand movement patterns and aspects related to the animal’s life history. Although there are many site fidelity metrics, the lack of standardisation presents a considerable challenge in terms of comparability among studies. Methods This investigation focused on the theoretical development of a standardised composite site fidelity index and its statistical distribution in order to obtain reliable population-level site fidelity comparisons. The arithmetic and harmonic means were used as mathematical structures in order to create different indexes by combining the most commonly used indicators for site fidelity such as Occurrence, Permanence and Periodicity. The index performance was then evaluated in simulated populations and one real population of Commerson’s dolphins (Cephalorhynchus commersonii (Lacépède 1804)). In the first case, the indexes were evaluated based on how they were affected by different probability values such as the occurrence of the individual within the study area (φ) and capture probability (p). As a precision measure for the comparison of the indexes, the Wald confidence interval (CI) and the mean square error were applied. Given that there was no previous data concerning the distribution parameters of this population, bootstrap CIs were applied for the study case. Results Eight alternative indexes were developed. The indexes with an arithmetic mean structure, in general, had a consistently inferior performance than those with a harmonic mean structure. The index IH4, in particular, achieved the best results in all of the scenarios and in the study case. Additionally, this index presented a normal distribution. As such, it was proposed as a standardised measure for site fidelity (Standardised Site Fidelity Index—SSFI). 
Discussion The SSFI is the first standardised metric that quantifies site fidelity at a populational level. It is an estimator that varies between zero and one and works in situations where detection is not perfect and effort can be constant or not. Moreover, it has an associated CI that allows users to make comparisons. PMID:29761064
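    The arithmetic-versus-harmonic contrast the authors report is easy to see in code. A sketch (illustrative only: the paper's IH4 index has its own exact definition, which is not reproduced here) of combining three [0, 1] indicators each way; the harmonic mean penalizes a single weak indicator much more strongly:

    ```python
    def arithmetic_index(occurrence, permanence, periodicity):
        """Plain average of three site-fidelity indicators in [0, 1]."""
        return (occurrence + permanence + periodicity) / 3

    def harmonic_index(occurrence, permanence, periodicity):
        """Harmonic-mean combination: dominated by the weakest indicator."""
        vals = (occurrence, permanence, periodicity)
        if min(vals) == 0.0:  # harmonic mean is 0 if any indicator is 0
            return 0.0
        return len(vals) / sum(1.0 / v for v in vals)
    ```

    For indicators (0.9, 0.9, 0.1) the arithmetic mean is about 0.63 while the harmonic mean is about 0.25, so an animal that scores poorly on even one indicator cannot receive a high composite score.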

  11. Development of a site fidelity index based on population capture-recapture data.

    PubMed

    Tschopp, Ayelen; Ferrari, Mariano A; Crespo, Enrique A; Coscarella, Mariano A

    2018-01-01

Site fidelity is considered as an animal's tendency to return to a previously occupied place; this is a component of animal behaviour that allows us to understand movement patterns and aspects related to the animal's life history. Although there are many site fidelity metrics, the lack of standardisation presents a considerable challenge in terms of comparability among studies. This investigation focused on the theoretical development of a standardised composite site fidelity index and its statistical distribution in order to obtain reliable population-level site fidelity comparisons. The arithmetic and harmonic means were used as mathematical structures in order to create different indexes by combining the most commonly used indicators for site fidelity such as Occurrence, Permanence and Periodicity. The index performance was then evaluated in simulated populations and one real population of Commerson's dolphins (Cephalorhynchus commersonii (Lacépède 1804)). In the first case, the indexes were evaluated based on how they were affected by different probability values such as the occurrence of the individual within the study area (φ) and capture probability (p). As a precision measure for the comparison of the indexes, the Wald confidence interval (CI) and the mean square error were applied. Given that there was no previous data concerning the distribution parameters of this population, bootstrap CIs were applied for the study case. Eight alternative indexes were developed. The indexes with an arithmetic mean structure, in general, had a consistently inferior performance than those with a harmonic mean structure. The index IH4, in particular, achieved the best results in all of the scenarios and in the study case. Additionally, this index presented a normal distribution. As such, it was proposed as a standardised measure for site fidelity (Standardised Site Fidelity Index-SSFI). 
The SSFI is the first standardised metric that quantifies site fidelity at a populational level. It is an estimator that varies between zero and one and works in situations where detection is not perfect and effort can be constant or not. Moreover, it has an associated CI that allows users to make comparisons.

  12. Introducing ShakeMap to potential users in Puerto Rico using scenarios of damaging historical and probable earthquakes

    NASA Astrophysics Data System (ADS)

    Huerfano, V. A.; Cua, G.; von Hillebrandt, C.; Saffar, A.

    2007-12-01

The island of Puerto Rico has a long history of damaging earthquakes. Major earthquakes from off-shore sources have affected Puerto Rico in 1520, 1615, 1670, 1751, 1787, 1867, and 1918 (Mueller et al, 2003; PRSN Catalogue). Recent trenching has also yielded evidence of possible M7.0 events inland (Prentice, 2000). The high seismic hazard, large population, high tsunami potential and relatively poor construction practice can result in a potentially devastating combination. Efficient emergency response in the event of a large earthquake will be crucial to minimizing the loss of life and disruption of lifeline systems in Puerto Rico. The ShakeMap system (Wald et al, 2004) developed by the USGS to rapidly display and disseminate information about the geographical distribution of ground shaking (and hence potential damage) following a large earthquake has proven to be a vital tool for post-earthquake emergency response efforts, and is being adopted/emulated in various seismically active regions worldwide. Implementing a robust ShakeMap system is among the top priorities of the Puerto Rico Seismic Network. However, the ultimate effectiveness of ShakeMap in post-earthquake response depends not only on its rapid availability, but also on the effective use of the information it provides. We developed ShakeMap scenarios of a suite of damaging historical and probable earthquakes that severely impact San Juan, Ponce, and Mayagüez, the 3 largest cities in Puerto Rico. Earthquake source parameters were obtained from McCann and Mercado (1998) and Huérfano (2004). For historical earthquakes that generated tsunamis, tsunami inundation maps were generated using the TIME method (Shuto, 1991). 
The ShakeMap ground shaking maps were presented to local and regional governmental and emergency response agencies at the 2007 Annual conference of the Puerto Rico Emergency Management and Disaster Administration in San Juan, PR, and at numerous other emergency management talks and training sessions. Economic losses are estimated using the ShakeMap scenario ground motions (Saffar, 2007). The calibration tasks necessary in generating these scenarios (developing Vs30 maps, attenuation relationships) complement the on-going efforts of the Puerto Rico Seismic Network to generate ShakeMaps in real-time.

  13. A further test of sequential-sampling models that account for payoff effects on response bias in perceptual decision tasks.

    PubMed

    Diederich, Adele

    2008-02-01

    Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions: (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? A task was conducted in two experimental settings--either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits--in order to answer this question. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.

  14. Enumerative and binomial sequential sampling plans for the multicolored Asian lady beetle (Coleoptera: Coccinellidae) in wine grapes.

    PubMed

    Galvan, T L; Burkness, E C; Hutchison, W D

    2007-06-01

    To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.

  15. Evaluation of an automated safety surveillance system using risk adjusted sequential probability ratio testing.

    PubMed

    Matheny, Michael E; Normand, Sharon-Lise T; Gross, Thomas P; Marinac-Dabic, Danica; Loyo-Berrios, Nilsa; Vidi, Venkatesan D; Donnelly, Sharon; Resnic, Frederic S

    2011-12-14

Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT) implemented in an automated tool to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery. A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying within 30 days after surgery were at least twice the expected odds. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year. We examined a total of 83 hospital-year observations. The RA-SPRT method signaled 6 events among three hospitals for 30-day mortality compared with 5 events among two hospitals using the state public reports, yielding a sensitivity of 100% (5/5) and specificity of 98.8% (79/80). The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a small false-positive alerting rate. Such a system could provide confidential automated notification to local institutions in advance of public reporting, providing opportunities for earlier quality-improvement interventions.
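    At its core this surveillance scheme is Wald's SPRT with error-rate-derived thresholds, where risk adjustment enters by giving each admission its own expected event probability. A minimal sketch (function and variable names are illustrative, not the authors' implementation) using the settings reported above: α = 0.05, β = 0.10, and an alternative hypothesis that doubles the odds of death:

    ```python
    import math

    def ra_sprt_signals(outcomes, expected_probs, odds_ratio=2.0,
                        alpha=0.05, beta=0.10):
        """Accumulate the log-likelihood ratio of H1 (per-case odds of the
        event inflated by odds_ratio) versus H0 (odds as risk-adjusted
        expectation); record a signal and reset when Wald's upper
        threshold is crossed, reset quietly at the lower threshold."""
        upper = math.log((1 - beta) / alpha)   # reject H0: flag an event
        lower = math.log(beta / (1 - alpha))   # accept H0: reset the test
        llr, signals = 0.0, []
        for i, (event, p0) in enumerate(zip(outcomes, expected_probs)):
            odds1 = odds_ratio * p0 / (1 - p0)
            p1 = odds1 / (1 + odds1)           # event probability under H1
            llr += math.log(p1 / p0) if event else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                signals.append(i)
                llr = 0.0
            elif llr <= lower:
                llr = 0.0
        return signals
    ```

    With a constant 2% expected mortality, for example, a run of five consecutive deaths is enough to cross the upper threshold ln((1 − β)/α) = ln(18).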

  16. A protein-dependent side-chain rotamer library.

    PubMed

    Bhuyan, Md Shariful Islam; Gao, Xin

    2011-12-14

The protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem, the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call these protein-dependent rotamer libraries. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by the inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of the side-chain prediction accuracy and the rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to the global-search-based side-chain prediction methods.

  17. Effect of Compression Devices on Preventing Deep Vein Thrombosis Among Adult Trauma Patients: A Systematic Review.

    PubMed

    Ibrahim, Mona; Ahmed, Azza; Mohamed, Warda Yousef; El-Sayed Abu Abduo, Somaya

    2015-01-01

Trauma is the leading cause of death in Americans up to 44 years old each year. Deep vein thrombosis (DVT) is a significant condition occurring in trauma, and prophylaxis is essential to the appropriate management of trauma patients. The incidence of DVT varies in trauma patients, depending on patients' risk factors, modality of prophylaxis, and methods of detection. Prophylaxis with compression devices and arteriovenous (A-V) foot pumps is recommended in trauma patients, but its efficacy and optimal use are not well documented in the literature. The aim of this study was to review the literature on the effect of compression devices in preventing DVT among adult trauma patients. We searched through PubMed, CINAHL, and the Cochrane Central Register of Controlled Trials for eligible studies published from 1990 until June 2014. Reviewers identified all randomized controlled trials that satisfied the study criteria, and the quality of included studies was assessed with the Cochrane risk-of-bias tool. Five randomized controlled trials were included, with a total of 1072 patients. Sequential compression devices significantly reduced the incidence of DVT in trauma patients. Also, foot pumps were more effective in reducing the incidence of DVT compared with sequential compression devices. Sequential compression devices and foot pumps reduced the incidence of DVT in trauma patients. However, the evidence is limited to small sample sizes and did not take into account other confounding variables that may affect the incidence of DVT in trauma patients. Future randomized controlled trials with larger probability samples are needed to investigate the optimal use of mechanical prophylaxis in trauma patients.

  18. Mechanism of Tacrine Block at Adult Human Muscle Nicotinic Acetylcholine Receptors

    PubMed Central

    Prince, Richard J.; Pennington, Richard A.; Sine, Steven M.

    2002-01-01

    We used single-channel kinetic analysis to study the inhibitory effects of tacrine on human adult nicotinic receptors (nAChRs) transiently expressed in HEK 293 cells. Single channel recording from cell-attached patches revealed concentration- and voltage-dependent decreases in mean channel open probability produced by tacrine (IC50 4.6 μM at −70 mV, 1.6 μM at −150 mV). Two main effects of tacrine were apparent in the open- and closed-time distributions. First, the mean channel open time decreased with increasing tacrine concentration in a voltage-dependent manner, strongly suggesting that tacrine acts as an open-channel blocker. Second, tacrine produced a new class of closings whose duration increased with increasing tacrine concentration. Concentration dependence of closed-times is not predicted by sequential models of channel block, suggesting that tacrine blocks the nAChR by an unusual mechanism. To probe tacrine's mechanism of action we fitted a series of kinetic models to our data using maximum likelihood techniques. Models incorporating two tacrine binding sites in the open receptor channel gave dramatically improved fits to our data compared with the classic sequential model, which contains one site. Improved fits relative to the sequential model were also obtained with schemes incorporating a binding site in the closed channel, but only if it is assumed that the channel cannot gate with tacrine bound. Overall, the best description of our data was obtained with a model that combined two binding sites in the open channel with a single site in the closed state of the receptor. PMID:12198092

  19. Probability matching in risky choice: the interplay of feedback and strategy availability.

    PubMed

    Newell, Ben R; Koehler, Derek J; James, Greta; Rakow, Tim; van Ravenzwaaij, Don

    2013-04-01

    Probability matching in sequential decision making is a striking violation of rational choice that has been observed in hundreds of experiments. Recent studies have demonstrated that matching persists even in described tasks in which all the information required for identifying a superior alternative strategy-maximizing-is present before the first choice is made. These studies have also indicated that maximizing increases when (1) the asymmetry in the availability of matching and maximizing strategies is reduced and (2) normatively irrelevant outcome feedback is provided. In the two experiments reported here, we examined the joint influences of these factors, revealing that strategy availability and outcome feedback operate on different time courses. Both behavioral and modeling results showed that while availability of the maximizing strategy increases the choice of maximizing early during the task, feedback appears to act more slowly to erode misconceptions about the task and to reinforce optimal responding. The results illuminate the interplay between "top-down" identification of choice strategies and "bottom-up" discovery of those strategies via feedback.

  20. The lod score method.

    PubMed

    Rice, J P; Saccone, N L; Corbett, J

    2001-01-01

    The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.

  1. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  2. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  3. Estimating the Pollution Risk of Cadmium in Soil Using a Composite Soil Environmental Quality Standard

    PubMed Central

    Huang, Biao; Zhao, Yongcun

    2014-01-01

    Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km2 area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
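    With simulated realizations in hand, the standard-exceeding probability at a location reduces to counting how many paired (Cd, pH) realizations exceed the limit selected by that realization's pH and the site's land use. A sketch with a caller-supplied composite standard (all names and threshold values are hypothetical, not those of the Chinese SEQS):

    ```python
    def exceedance_probability(cd_sims, ph_sims, land_use, limit_for):
        """Fraction of paired (Cd, pH) realizations at one location that
        exceed the composite standard's pH- and land-use-specific limit."""
        hits = sum(cd > limit_for(ph, land_use)
                   for cd, ph in zip(cd_sims, ph_sims))
        return hits / len(cd_sims)

    # Illustrative composite standard: stricter Cd limits at low pH, where
    # the metal is more bioavailable (the numbers here are made up).
    def demo_limit(ph, land_use):
        base = 0.30 if land_use == "farmland" else 0.60
        return base if ph < 6.5 else 2 * base
    ```

    Pairing the Cd and pH realizations index-by-index is what lets the pH uncertainty propagate into the exceedance probability rather than being fixed at a single measured value.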

  4. Metal fractionation in marine sediments acidified by enrichment of CO2: A risk assessment.

    PubMed

    de Orte, Manoela Romanó; Bonnail, Estefanía; Sarmiento, Aguasanta M; Bautista-Chamizo, Esther; Basallote, M Dolores; Riba, Inmaculada; DelValls, Ángel; Nieto, José Miguel

    2018-06-01

Carbon-capture and storage is considered to be a potential mitigation option for climate change. However, accidental leaks of CO2 can occur, resulting in changes in ocean chemistry such as acidification and metal mobilization. Laboratory experiments were performed to provide data on the effects of CO2-related acidification on the chemical fractionation of metal(loid)s in marine-contaminated sediments using sequential extraction procedures. The results showed that sediments from the Huelva estuary registered concentrations of arsenic, copper, lead, and zinc that surpass the probable biological effect level established by international protocols. Zinc had the greatest proportion in the most mobile fraction of the sediment. Metals in this fraction represent an environmental risk because they are weakly bound to sediment, and therefore more likely to migrate to the water column. Indeed, the concentration of this metal was lower in the most acidified scenarios when compared to the control pH, indicating probable zinc mobilization from the sediment to the seawater. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Negative emotion enhances mnemonic precision and subjective feelings of remembering in visual long-term memory.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2017-09-01

    Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  7. Validation of the Killip-Kimball Classification and Late Mortality after Acute Myocardial Infarction

    PubMed Central

    de Mello, Bruno Henrique Gallindo; Oliveira, Gustavo Bernardes F.; Ramos, Rui Fernando; Lopes, Bernardo Baptista C.; Barros, Cecília Bitarães S.; Carvalho, Erick de Oliveira; Teixeira, Fabio Bellini P.; Arruda, Guilherme D'Andréa S.; Revelo, Maria Sol Calero; Piegas, Leopoldo Soares

    2014-01-01

    Background: The classification or index of heart failure severity in patients with acute myocardial infarction (AMI) was proposed by Killip and Kimball to assess the risk of in-hospital death and the potential benefit of the specific care provided in Coronary Care Units (CCU) during the 1960s. Objective: To validate the risk stratification of the Killip classification for long-term mortality and to compare its prognostic value in patients with non-ST-segment elevation MI (NSTEMI) relative to patients with ST-segment elevation MI (STEMI), in the era of reperfusion and modern antithrombotic therapies. Methods: We evaluated 1906 patients with documented AMI admitted to the CCU from 1995 to 2011, with a mean follow-up of 5 years to assess total mortality. Kaplan-Meier (KM) curves were developed for comparison of survival distributions according to Killip class and NSTEMI versus STEMI. Cox proportional regression models were developed to determine the independent association between Killip class and mortality, with sensitivity analyses based on type of AMI. Results: The proportions of deaths and the KM survival distributions were significantly different across Killip classes >1 (p < 0.001), with a similar pattern between patients with NSTEMI and STEMI. Cox models identified the Killip classification as a significant, sustained, and consistent predictor, independent of relevant covariables (Wald χ2 16.5 [p = 0.001], NSTEMI; Wald χ2 11.9 [p = 0.008], STEMI). Conclusion: The Killip and Kimball classification plays a relevant prognostic role in mortality at a mean follow-up of 5 years post-AMI, with a similar pattern between NSTEMI and STEMI patients. PMID:25014060
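    The Wald χ2 values quoted in abstracts like this one are, for a single fitted coefficient, the squared ratio of the coefficient to its standard error, referred to a χ2 distribution with 1 degree of freedom. A minimal sketch with made-up numbers (the β and SE below are not from this study):

```python
import math

def wald_chi2(beta, se):
    """Wald chi-square (1 df) for a fitted coefficient: (beta / se)**2,
    with a p-value from the equivalent two-sided normal tail."""
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2))   # == P(chi2_1 > z**2)
    return z * z, p

# Hypothetical coefficient and standard error (not from this paper):
stat, p = wald_chi2(beta=0.45, se=0.12)    # stat ≈ 14.1, p < 0.001
```

    Multi-coefficient Wald tests generalize this to a quadratic form in the covariance matrix, but the one-parameter case above is what a single reported Wald χ2 with 1 df corresponds to.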

  8. An atlas of ShakeMaps for selected global earthquakes

    USGS Publications Warehouse

    Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.

    2008-01-01

    An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.

  9. The Burden of Hard Atherosclerotic Plaques Does Not Promote Endoleak Development After Endovascular Aortic Aneurysm Repair: A Risk Stratification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petersen, Johannes, E-mail: johannes.petersen@i-med.ac.at; Glodny, Bernhard, E-mail: bernhard.glodny@i-med.ac.at

    Purpose: To objectify the influence of the atherosclerotic burden in the proximal landing zone on the development of endoleaks after endovascular abdominal aortic aneurysm repair (EVAR) or thoracic endovascular aneurysm repair (TEVAR) using objective aortic calcium scoring (ACS). Materials and Methods: This retrospective observational study included 267 patients who received an aortic endograft between 1997 and 2010 and for whom preoperative computed tomography (CT) was available to perform ACS using the CT-based V600 method. The mean follow-up period was 2 ± 2.3 years. Results: Type I endoleaks persisted in 45 patients (16.9%), type II in 34 (12.7%), type III in 8 (3%), and types IV and V in 3 patients each (1.1% each). ACS in patients with type I endoleaks was not increased: 0.029 ± 0.061 ml compared with 0.075 ± 0.1349 ml in the rest of the patients (p > 0.05; Mann-Whitney U test). There were significantly better results for the indication 'traumatic aortic rupture' than for the other indications (p < 0.05). In multivariate logistic regression analyses, age was an independent risk factor for the development of type I endoleaks in the thoracic aorta (Wald 9.5; p = 0.002), whereas the ACS score was an independent protective factor (Wald 6.9; p = 0.009). In the abdominal aorta, neither age nor ACS influenced the development of endoleaks. Conclusion: Contrary to previous assumptions, TEVAR and EVAR can be carried out without increasing the risk of an endoleak of any type, even if there is a high atherosclerotic 'hard-plaque' burden of the aorta. The results are significantly better for traumatic aortic rupture.

  10. Self-Perceived Quality of Life Among Patients with Alzheimer's Disease: Two Longitudinal Models of Analysis.

    PubMed

    Conde-Sala, Josep L; Turró-Garriga, Oriol; Portellano-Ortiz, Cristina; Viñas-Diez, Vanesa; Gascón-Bayarri, Jordi; Reñé-Ramírez, Ramón

    2016-04-12

    The objective was to analyze the factors that influence self-perceived quality of life (QoL) in patients with Alzheimer's disease (AD), contrasting two different longitudinal models. A total of 127 patients were followed up over 24 months. The instruments applied were: Quality of Life in Alzheimer's Disease scale (QoL-AD), Geriatric Depression Scale-15, Anosognosia Questionnaire-Dementia, Disability Assessment in Dementia, Neuropsychiatric Inventory, and the Mini-Mental State Examination. Two models for grouping patients were tested: 1) Baseline score on the QoL-AD (QoL-Baseline), and 2) Difference in QoL-AD score between baseline and follow-up (QoL-Change). Generalized estimating equations were used to analyze longitudinal data, and multinomial regression analyses were performed. Over the follow-up period the QoL-Baseline model showed greater variability between groups (Wald χ2 = 172.3, p < 0.001) than did the QoL-Change model (Wald χ2 = 1.7, p = 0.427). In the QoL-Baseline model the predictive factors were greater depression (odds ratio [OR] = 1.20; 95% CI: 1.00-1.45) and lower functional ability (OR = 0.92; 95% CI: 0.85-0.99) for the Low-QoL group (< 33 QoL-AD), and less depression (OR = 0.68; 95% CI: 0.52-0.88), more anosognosia (OR = 1.07; 95% CI: 1.01-1.13), and fewer neuropsychiatric symptoms (OR = 0.95; 95% CI: 0.91-0.99) for the High-QoL group (> 37 QoL-AD). The model based on baseline scores (QoL-Baseline) was better than the QoL-Change model in terms of identifying trajectories and predictors of QoL in AD.

  11. The Long Live Kids campaign: awareness of campaign messages.

    PubMed

    Faulkner, Guy E J; Kwan, Matthew Y W; MacNeill, Margaret; Brownrigg, Michelle

    2011-05-01

    Media interventions are one strategy used to promote physical activity, but little is known about their effectiveness with children. As part of a larger evaluation, the purpose of this study was to assess the short-term effect of a private industry-sponsored media literacy campaign, Long Live Kids, aimed at children in Canada. Specifically, we investigated children's awareness of the campaign and its correlates. Using a cohort design, a national sample (N = 331, male = 171; mean age = 10.81, SD = 0.99) completed a telephone survey two weeks prior to the campaign release, and again 1 year later. Only 3% of the children were able to recall the Long Live Kids campaign unprompted and 57% had prompted recall. Logistic regression found family income (Wald χ2 = 11.06, p < .05) and free-time physical activity (Wald χ2 = 5.67, p < .01) significantly predicted campaign awareness. Active children (≥3 days/week) were twice as likely to have recalled the campaign compared with inactive children (<3 days/week), whereas children living in high-income households (>$60,000/yr) were between 3.5 and 5 times more likely to have campaign recall compared with children living in low-income households (<$20,000/yr). These findings suggest that media campaigns developed by industry may have a role in promoting physical activity to children, although our findings identified a knowledge gap between children living in high- and low-income households. Future research needs to examine how children become aware of such media campaigns and how this mediated information is being used by children.

  12. Remission of depression following electroconvulsive therapy (ECT) is associated with higher levels of brain-derived neurotrophic factor (BDNF).

    PubMed

    Freire, Thiago Fernando Vasconcelos; Fleck, Marcelo Pio de Almeida; da Rocha, Neusa Sica

    2016-03-01

    Research on the association between electroconvulsive therapy (ECT) and increased brain-derived neurotrophic factor (BDNF) levels has produced conflicting results. Few studies have evaluated BDNF levels in clinical contexts where there was remission following treatment. The objective of this study was to investigate whether remission of depression following ECT is associated with changes in BDNF levels. Adult inpatients in a psychiatric unit were invited to participate in this naturalistic study. Diagnoses were made using the Mini-International Neuropsychiatric Interview (MINI) and symptoms were evaluated at admission and discharge using the Hamilton Rating Scale for Depression (HDRS-17). Thirty-one patients who received a diagnosis of depression and were subjected to ECT were included retrospectively. Clinical remission was defined as a score of less than eight on the HDRS-17 at discharge. Serum BDNF levels were measured in blood samples collected at admission and discharge with a commercial kit used in accordance with the manufacturer's instructions. Subjects' HDRS-17 scores improved following ECT (t = 13.29; p = 0.00). A generalized estimating equation (GEE) model revealed a remission × time interaction with BDNF levels as a dependent variable in a Wald chi-square test (Wald χ2 = 5.98; p = 0.01). A post hoc Bonferroni test revealed that non-remitters had lower BDNF levels at admission than remitters (p = 0.03), but there was no difference at discharge (p = 0.16). ECT remitters had higher serum BDNF levels at admission and the level did not vary during treatment. ECT non-remitters had lower serum BDNF levels at admission, but levels increased during treatment and were similar to those of ECT remitters at discharge. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.

    PubMed

    Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai

    2014-12-18

    A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
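    A bootstrap-resampling p-value of the kind evaluated in this paper can be sketched, in a simplified two-arm setting rather than the paper's three-arm design, as a parametric bootstrap of a Wald-type two-proportion statistic. All counts, the resample count B, and the seed below are hypothetical:

```python
import random

def wald_stat(x1, n1, x2, n2):
    """Wald-type z statistic for a difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    return 0.0 if var == 0 else (p1 - p2) / var ** 0.5

def bootstrap_pvalue(x1, n1, x2, n2, B=2000, seed=1):
    """Parametric bootstrap p-value: resample both arms under the pooled
    null proportion and count replicates at least as extreme as observed."""
    rng = random.Random(seed)
    obs = abs(wald_stat(x1, n1, x2, n2))
    pooled = (x1 + x2) / (n1 + n2)
    hits = 0
    for _ in range(B):
        b1 = sum(rng.random() < pooled for _ in range(n1))
        b2 = sum(rng.random() < pooled for _ in range(n2))
        if abs(wald_stat(b1, n1, b2, n2)) >= obs:
            hits += 1
    return (hits + 1) / (B + 1)   # add-one correction avoids p = 0
```

    The appeal for small samples is that the reference distribution is generated from the model itself rather than from a large-sample normal approximation, which is why such methods can keep the type I error rate closer to the nominal level.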

  14. Diagnostic delay in psychogenic seizures and the association with anti-seizure medication trials.

    PubMed

    Kerr, Wesley T; Janio, Emily A; Le, Justine M; Hori, Jessica M; Patel, Akash B; Gallardo, Norma L; Bauirjan, Janar; Chau, Andrea M; D'Ambrosio, Shannon R; Cho, Andrew Y; Engel, Jerome; Cohen, Mark S; Stern, John M

    2016-08-01

    The average delay from first seizure to diagnosis of psychogenic non-epileptic seizures (PNES) is over 7 years. The reason for this delay is not well understood. We hypothesized that a perceived decrease in seizure frequency after starting an anti-seizure medication (ASM) may contribute to longer delays, but the frequency of such a response has not been well established. Time from onset to diagnosis, medication history, and associated seizure frequency were acquired from the medical records of 297 consecutive patients with PNES diagnosed using video-electroencephalographic monitoring. Exponential regression was used to model the effect of medication trials and response on diagnostic delay. Mean diagnostic delay was 8.4 years (min 1 day, max 52 years). The robust average diagnostic delay was 2.8 years (95% CI: 2.2-3.5 years), based on an exponential model, as 10 raised to the mean of log10 delay. Each ASM trial increased the robust average delay exponentially by at least one third of a year (Wald t = 3.6, p = 0.004). Response to ASM trials did not significantly change diagnostic delay (Wald t = -0.9, p = 0.38). Although a response to ASMs was observed commonly in these patients with PNES, the presence of a response was not associated with longer time until definitive diagnosis. Instead, the number of ASMs tried was associated with a longer delay until diagnosis, suggesting that ASM trials were continued despite lack of response. These data support the guideline that patients with seizures should be referred to epilepsy care centers after failure of two medication trials. Copyright © 2016 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
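    The "robust average" used here, 10 raised to the mean of log10 delays, is the geometric mean, which damps the influence of extreme delays such as the 52-year maximum. A minimal sketch with hypothetical delays in years:

```python
import math

def robust_average(delays):
    """Geometric mean, computed as 10 ** mean(log10(delay)); extreme delays
    pull this far less than they pull the arithmetic mean."""
    logs = [math.log10(d) for d in delays]
    return 10 ** (sum(logs) / len(logs))

# Hypothetical, heavily skewed delays in years (not the study's data):
delays = [0.1, 0.5, 1, 2, 4, 8, 40]
arithmetic = sum(delays) / len(delays)   # ≈ 7.9 years, dominated by the outlier
robust = robust_average(delays)          # 2.0 years
```

    The gap between the two averages mirrors the paper's 8.4-year mean versus 2.8-year robust average.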

  15. Effects of an HIV peer prevention intervention on sexual and injecting risk behaviors among injecting drug users and their risk partners in Thai Nguyen, Vietnam: A randomized controlled trial

    PubMed Central

    Go, Vivian F.; Frangakis, Constantine; Le Minh, Nguyen; Latkin, Carl A.; Ha, Tran Viet; Mo, Tran Thi; Sripaipan, Teerada; Davis, Wendy; Zelaya, Carla; Vu, Pham The; Chen, Yong; Celentano, David D.; Quan, Vu Minh

    2014-01-01

    Globally, 30% of new HIV infections outside sub-Saharan Africa involve injecting drug users (IDU) and in many countries, including Vietnam, HIV epidemics are concentrated among IDU. We conducted a randomized controlled trial in Thai Nguyen, Vietnam, to evaluate whether a peer oriented behavioral intervention could reduce injecting and sexual HIV risk behaviors among IDU and their network members. 419 HIV-negative index IDU aged 18 years or older and 516 injecting and sexual network members were enrolled. Each index participant was randomly assigned to receive a series of six small group peer educator-training sessions and three booster sessions in addition to HIV testing and counseling (HTC) (intervention; n = 210) or HTC only (control; n = 209). Follow-up, including HTC, was conducted at 3, 6, 9 and 12 months post-intervention. The proportion of unprotected sex dropped significantly from 49% to 27% (SE (difference) = 3%, p < 0.01) between baseline and the 3-month visit among all index-network member pairs. However, at 12 months, post-intervention, intervention participants had a 14% greater decline in unprotected sex relative to control participants (Wald test = 10.8, df = 4, p = 0.03). This intervention effect is explained by trial participants assigned to the control arm who missed at least one standardized HTC session during follow-up and subsequently reported increased unprotected sex. The proportion of observed needle/syringe sharing dropped significantly between baseline and the 3-month visit (14% vs. 3%, SE (difference) = 2%, p < 0.01) and persisted until 12 months, but there was no difference across trial arms (Wald test = 3.74, df = 3, p = 0.44). PMID:24034963

  16. [Study on high-risk behaviour and suicide associated risk factors related to HIV/AIDS among gay or bisexual men].

    PubMed

    Chen, Hong-quan; Li, Yang; Zhang, Bei-chuan; Li, Xiu-fang

    2011-10-01

    Characteristics of AIDS high-risk behaviors in gay or bisexual men with suicidal ideation were explored and analyzed. A cross-sectional survey was conducted using the snowball sampling method. Subjects with suicidal ideation were identified from responses to the valid questionnaires, and subjects without suicidal ideation were drawn from age-comparable men. The overall rate of gay or bisexual men with suicidal ideation was 20.2% in this survey. A positive attitude toward homosexuality and unmarried status were more frequent than in the comparison group (P < 0.05). The rates of AIDS high-risk behaviors such as same-sex sexual harassment, bleeding during sexual intercourse in the last year, coitus with unfamiliar same-sex partners in cities, suffering adult same-sex sexual abuse before the age of 16, having experienced sexual abuse and abusive behavior, having had active or passive anal kissing, having had active or passive coitus with fingers, alcohol consumption at least once weekly, harm from gay men because of attitude and/or same-sex sexual activity, and harm from heterosexual men because of attitude and/or same-sex sexual activity were significantly higher in gay and bisexual men with suicidal ideation than in those without (P < 0.05). Multivariate logistic regression models suggested that harm from gay men (Wald χ2 = 6.637, P = 0.010) and from heterosexual men (Wald χ2 = 5.835, P = 0.016) due to attitudes toward homosexual activity were risk factors for suicidal ideation. Reducing social discrimination against and harm toward gay and bisexual men could reduce the occurrence of suicidal ideation and have a positive effect on curbing the spread of AIDS.

  17. Altered Fermentation Performances, Growth, and Metabolic Footprints Reveal Competition for Nutrients between Yeast Species Inoculated in Synthetic Grape Juice-Like Medium

    PubMed Central

    Rollero, Stephanie; Bloem, Audrey; Ortiz-Julien, Anne; Camarasa, Carole; Divol, Benoit

    2018-01-01

    The sequential inoculation of non-Saccharomyces yeasts and Saccharomyces cerevisiae in grape juice is becoming an increasingly popular practice to diversify wine styles and/or to obtain more complex wines with a peculiar microbial footprint. One of the main interactions is competition for nutrients, especially nitrogen sources, which directly impacts not only fermentation performance but also the production of aroma compounds. In order to better understand the interactions taking place between non-Saccharomyces yeasts and S. cerevisiae during alcoholic fermentation, sequential inoculations of three yeast species (Pichia burtonii, Kluyveromyces marxianus, Zygoascus meyerae) with S. cerevisiae were performed individually in a synthetic medium. Different species-dependent interactions were evidenced. Indeed, the three sequential inoculations resulted in three different behaviors in terms of growth. P. burtonii and Z. meyerae declined after the inoculation of S. cerevisiae, which promptly outcompeted the other two species. However, while the presence of P. burtonii did not impact the fermentation kinetics of S. cerevisiae, that of Z. meyerae rendered the overall kinetics very slow and with no clear exponential phase. K. marxianus and S. cerevisiae both declined and became undetectable before fermentation completion. The results also demonstrated that yeasts differed in their preference for nitrogen sources. Unlike Z. meyerae and P. burtonii, K. marxianus appeared to be a competitor for S. cerevisiae (as evidenced by the uptake of ammonium and amino acids), thereby explaining the resulting stuck fermentation. Nevertheless, the results suggested that competition for other nutrients (probably vitamins) occurred during the sequential inoculation of Z. meyerae with S. cerevisiae. The metabolic footprint of the non-Saccharomyces yeasts determined after 48 h of fermentation remained until the end of fermentation and combined with that of S. cerevisiae. For instance, fermentations performed with K. marxianus were characterized by the formation of phenylethanol and phenylethyl acetate, while those performed with P. burtonii or Z. meyerae displayed higher production of isoamyl alcohol and ethyl esters. When considering sequential inoculation of yeasts, the nutritional requirements of the yeasts used should be carefully considered and adjusted accordingly. Finally, our chemical data suggest that the organoleptic properties of the wine are altered in a species-specific manner. PMID:29487584

  18. Flavor-changing Z decays: A window to ultraheavy quarks?

    NASA Astrophysics Data System (ADS)

    Ganapathi, V.; Weiler, T.; Laermann, E.; Schmitt, I.; Zerwas, P. M.

    1983-02-01

    We study flavor-changing Z decays into quarks, Z-->Q+q¯, in the standard SU(2)×U(1) theory with sequential generations. Such decays occur in higher-order electroweak interactions, with a probability growing as the fourth power of the mass of the heaviest (virtual) quark mediating the transition. With the possible exception of Z-->bs¯, these decay modes are generally very rare in the three-generation scheme. However, with four generations Z-->b'b¯ is observable if the t' mass is a few hundred GeV. Such decay modes could thus provide a glimpse of the ultraheavy-quark spectrum.

  19. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  20. Hydrogeology and historical assessment of a classic sequential-land use landfill site, Illinois, U.S.A.

    NASA Astrophysics Data System (ADS)

    Booth, Colin J.; Vagt, Peter J.

    1990-05-01

    The Blackwell site in northeastern Illinois was a classic sequential-use project combining land reclamation, a sanitary landfill, and a recreational park. This paper adds a recent assessment of leachate generation and groundwater contamination to the site's unfinished record. Hydrogeological studies show that (1) the landfill sits astride an outwash aquifer and a till mound, which are separated from an underlying dolomite aquifer by a thin, silty till; (2) leachate leaks from the landfill at an estimated average rate between 48 and 78 m3/d; (3) the resultant contaminant plume is virtually stagnant in the till but rapidly diluted in the outwash aquifer, so that no off-site contamination is detected; (4) trace VOC levels in the dolomite probably indicate that contaminants have migrated there from the landfill-derived plume in the outwash. Deviations from the original landfill concepts included elimination of a leachate collection system, increased landfill size, local absence of a clay liner, and partial use of nonclay cover. The hydrogeological setting was unsuitable for the landfill as constructed, indicating the importance of detailed geological consideration in landfill and land-use planning.

  1. Source and migration of dissolved manganese in the Central Nile Delta Aquifer, Egypt

    NASA Astrophysics Data System (ADS)

    Bennett, P. C.; El Shishtawy, A. M.; Sharp, J. M.; Atwia, M. G.

    2014-08-01

    Dissolved metals in waters in shallow deltaic sediments are one of the world's major health problems, and a prime example is arsenic contamination in Bangladesh. The Central Nile Delta Aquifer, a drinking water source for more than 6 million people, can have high concentrations of dissolved manganese (Mn). Standard hydrochemical analyses coupled with sequential chemical extraction are used to identify the source of the Mn and the probable cause of the contamination. Fifty-nine municipal supply wells were sampled and the results compared with published data for groundwaters and surface waters. Drill cuttings from 4 wells were collected and analyzed by sequential chemical extraction to test the hypothesized Mn-generating processes. The data from this research show that the Mn source is not deep saline water, microbial reduction of Mn oxides at the production depth, or leakage from irrigation drainage ditches. Instead, Mn associated with carbonate minerals in the surficial confining layer and transported down along the disturbed well annulus of the municipal supply wells is the likely source. This analysis provides a basis for future hydrogeological and contaminant-transport modeling, as well as for remediation: modification of well-completion practices and pumping schedules to mitigate the problem.

  2. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
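    The risk-allocation idea, decomposing a joint chance constraint into per-stage constraints, can be illustrated with a uniform allocation and the union (Boole) bound. The paper optimizes the allocation rather than fixing it uniformly, so this is only a sketch of the underlying guarantee:

```python
def allocate_risk_uniform(total_risk, n_stages):
    """Uniform risk allocation: each stage gets an equal share of the
    mission-level risk bound."""
    return [total_risk / n_stages] * n_stages

def joint_bound(stage_risks):
    """Union (Boole) bound on the probability that any stage fails; this is
    what makes the decomposed, per-stage constraints conservative."""
    return min(1.0, sum(stage_risks))

def exact_joint_failure(stage_risks):
    """Exact joint failure probability if stage failures were independent,
    for comparison with the (always larger) union bound."""
    p_all_ok = 1.0
    for r in stage_risks:
        p_all_ok *= 1.0 - r
    return 1.0 - p_all_ok
```

    Enforcing each per-stage constraint at its allocated level guarantees the joint failure probability stays under the mission bound, regardless of dependence between stages; the conservatism of the union bound is the price of that decomposition.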

  3. Mass-induced instability of SAdS black hole in Einstein-Ricci cubic gravity

    NASA Astrophysics Data System (ADS)

    Myung, Yun Soo

    2018-05-01

    We perform the stability analysis of Schwarzschild-AdS (SAdS) black hole in the Einstein-Ricci cubic gravity. It shows that the Ricci tensor perturbations exhibit unstable modes for small black holes. We call this the mass-induced instability of SAdS black hole because the instability of small black holes arises from the massiveness in the linearized Einstein-Ricci cubic gravity, but not a feature of higher-order derivative theory giving ghost states. Also, we point out that the correlated stability conjecture holds for the SAdS black hole by computing the Wald entropy of SAdS black hole in Einstein-Ricci cubic gravity.

  4. Leadership in nursing education: voices from the past.

    PubMed

    Gosline, Mary Beth

    2004-01-01

    When education for nurses became a reality, leaders in the emerging profession spoke out early and often for educational improvements to prepare those who would nurse. The writings and speeches of Isabel Hampton Robb, Mary Adelaide Nutting, Lavinia Lloyd Dock, Lillian Wald, and Isabel Maitland Stewart formed the basis for a qualitative study that documents the voices of early nursing leaders who contributed to the development of nursing education as it moved from "training" toward professional education in a university setting. What is documented in the literature is the desire of these women to enhance the professional status of nursing through improvements in its educational system.

  5. Towards a bulk description of higher spin SYK

    NASA Astrophysics Data System (ADS)

    González, Hernán A.; Grumiller, Daniel; Salzer, Jakob

    2018-05-01

    We consider on the bulk side extensions of the Sachdev-Ye-Kitaev (SYK) model to Yang-Mills and higher spins. To this end we study generalizations of the Jackiw-Teitelboim (JT) model in the BF formulation. Our main goal is to obtain generalizations of the Schwarzian action, which we achieve in two ways: by considering the on-shell action supplemented by suitable boundary terms compatible with all symmetries, and by applying the Lee-Wald-Zoupas formalism to analyze the symplectic structure of dilaton gravity. We conclude with a discussion of the entropy (including log-corrections from higher spins) and a holographic dictionary for the generalized SYK/JT correspondence.

  6. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggested that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, Efron's BCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
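    The two boundary designs, Efron's biased coin design and the big stick design, can be simulated to estimate the two selected measures, maximum absolute imbalance and correct-guess rate. The parameters below (p = 2/3, imbalance tolerance b = 3, seed, trial length) are illustrative choices, not the paper's settings:

```python
import random

def efron_bcd(n, p=2/3, seed=0):
    """Efron's biased coin design: when the arms are imbalanced, assign the
    under-represented arm with probability p; otherwise flip a fair coin.
    Returns the running imbalance (arm A count minus arm B count)."""
    rng = random.Random(seed)
    imbalance, path = 0, []
    for _ in range(n):
        if imbalance == 0:
            assign_a = rng.random() < 0.5
        else:
            favored_a = imbalance < 0          # arm A is under-represented
            assign_a = favored_a if rng.random() < p else not favored_a
        imbalance += 1 if assign_a else -1
        path.append(imbalance)
    return path

def big_stick(n, b=3, seed=0):
    """Soares and Wu's big stick design: fair coin flips until the absolute
    imbalance reaches the tolerance b, then force the balancing assignment."""
    rng = random.Random(seed)
    imbalance, path = 0, []
    for _ in range(n):
        if imbalance >= b:
            assign_a = False                   # force arm B
        elif imbalance <= -b:
            assign_a = True                    # force arm A
        else:
            assign_a = rng.random() < 0.5
        imbalance += 1 if assign_a else -1
        path.append(imbalance)
    return path

def max_abs_imbalance(path):
    return max(abs(x) for x in path)

def correct_guess_rate(path):
    """Fraction of assignments a guesser predicts correctly by always
    guessing the currently under-represented arm (skipping balanced steps)."""
    correct = total = 0
    prev = 0
    for cur in path:
        assigned_a = cur > prev
        if prev != 0:
            guessed_a = prev < 0
            correct += guessed_a == assigned_a
            total += 1
        prev = cur
    return correct / total if total else 0.0
```

    The big stick design caps the imbalance at b by construction while keeping most assignments fair coin flips, which is why it sits on the favorable boundary of the imbalance-randomness trade-off.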

  7. Enhancing Doctors' Competencies in Communication With and Activation of Older Patients: The Promoting Active Aging (PRACTA) Computer-Based Intervention Study.

    PubMed

    Wlodarczyk, Dorota; Chylińska, Joanna; Lazarewicz, Magdalena; Rzadkiewicz, Marta; Jaworski, Mariusz; Adamus, Miroslawa; Haugan, Gørill; Lillefjell, Monica; Espnes, Geir Arild

    2017-02-22

    Demographic changes over the past decades call for the promotion of health and disease prevention for older patients, as well as strategies to enhance their independence, productivity, and quality of life. Our objective was to examine the effects of a computer-based educational intervention designed for general practitioners (GPs) to promote active aging. The Promoting Active Aging (PRACTA) study consisted of a baseline questionnaire, implementation of an intervention, and a follow-up questionnaire administered 1 month after the intervention. A total of 151 primary care facilities (response rate 151/767, 19.7%) and 503 GPs (response rate 503/996, 50.5%) agreed to participate in the baseline assessment. At the follow-up, 393 GPs filled in the questionnaires (response rate 393/503, 78.1%), but not all of them took part in the intervention. The final study group of 225 GPs participated in 3 study conditions: e-learning (knowledge plus skills modelling, n=42), a pdf article (knowledge only, n=89), and control (no intervention, n=94). We measured the outcome as scores on the Patients Expectations Scale, Communication Scale, Attitude Toward Treatment and Health Scale, and Self-Efficacy Scale. GPs participating in e-learning demonstrated a significant rise in their perception of older patients' expectations for disease explanation (Wald χ²=19.7, P<.001) and in their perception of the motivational aspect of older patients' attitude toward treatment and health (Wald χ²=8.9, P=.03) in comparison with both the control and pdf article groups. We observed additional between-group differences at the level of a statistical trend. GPs participating in the pdf article intervention demonstrated a decline in self-assessed communication, both at the level of global scoring (Wald χ²=34.5, P<.001) and at the level of 20 of 26 specific behaviors (all P<.05). 
Factors moderating the effects of the intervention were the number of patients per GP and the facility's organizational structure. Both methods were suitable, but in different areas and under different conditions. The key benefit of the pdf article intervention was raising doctors' reflection on limitations in their communication skills, whereas e-learning was more effective in changing their perception of older patients' proactive attitude, especially among GPs working in privately owned facilities and having a greater number of assigned patients. Although we did not achieve all expected effects of the PRACTA intervention, both its forms seem promising in terms of enhancing the competencies of doctors in communication with and activation of older patients. ©Dorota Wlodarczyk, Joanna Chylińska, Magdalena Lazarewicz, Marta Rzadkiewicz, Mariusz Jaworski, Miroslawa Adamus, Gørill Haugan, Monica Lillefjell, Geir Arild Espnes. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 22.02.2017.

  8. Depression, gender, and the treatment gap in Mexico.

    PubMed

    Rafful, Claudia; Medina-Mora, María Elena; Borges, Guilherme; Benjet, Corina; Orozco, Ricardo

    2012-04-01

    Gender is associated with the lifetime risk of mood disorders, with women having the highest lifetime and 12-month prevalence. In Mexico, one out of five individuals with any mood disorder receives treatment during the first year. We evaluate the ages at which women and men are most vulnerable to the first onset of a major depressive episode, the longest duration and greatest number of episodes, the areas of daily functioning most affected, and which variables predict whether or not a person receives any kind of treatment. The Mexican National Comorbidity Survey, part of the World Mental Health Surveys Initiative, is based on a stratified, multistage area probability Mexican urban household sample aged 18 to 65 (n=5782). Wald χ² tests were performed to evaluate gender and cohort differences; logistic regression models were fitted to evaluate gender and cohort as treatment predictors. The most vulnerable group is the cohort of 45-54 year-old women. Once a first episode occurs, there are no sex differences in terms of the number or length of episodes. There is a gap in service use, especially among 18-29 year-old women; the oldest women are the most impaired. Individuals from rural communities are not represented, and there may have been recall bias due to the retrospective design. Efforts should focus on factors related to the first onset episode and on early treatment programs to reduce the risk of subsequent episodes. Research and health resources should attend to the most vulnerable group, and to the youngest women, who are of reproductive age and have the largest treatment gap. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Regular use of dental services among adults and older adults in a vulnerable region in Southern Brazil.

    PubMed

    Machado, Luciene Petcov; Camargo, Maria Beatriz Junqueira; Jeronymo, José Carlos Milanez; Bastos, Gisele Alsina Nader

    2012-06-01

    To estimate the prevalence of regular use of dental care services by adults and older adults residing in a vulnerable community and to identify associated factors. A population-based cross-sectional study was carried out with 3,391 adults and older adults residing in areas of social vulnerability in Porto Alegre, Southern Brazil, from July to December of 2009. A systematic sampling method was used, with selection probability proportional to the population of each of the 121 census sectors. The outcome was defined as regular use of dental services, regardless of the presence of dental problems. A standardized questionnaire was administered, which included demographic, socioeconomic, type of dental care service, self-perception of dental health, and self-perceived need variables. A chi-square test for heterogeneity was used for bivariate analyses, and a Poisson regression with robust variance and Wald tests were performed for the adjusted analysis. The prevalence of regular use of dental services was 25.7%. The prevalence was higher among people with >12 years of schooling (PR 2.48 [95%CI: 1.96;3.15]), higher income (PR 1.95 [95%CI: 1.03;1.53]), use of private health services (PR 1.43 [95%CI: 1.20;1.71]), excellent self-perceived oral health (PR 4.44 [95%CI: 3.07;6.42]), and a self-perceived need for consultation related to routine checkup (PR 2.13 [95%CI: 1.54;2.96]). Inequalities were found in the regular use of dental services. Integrated approaches that raise awareness of oral health, improve self-care, and expand access to dental services may contribute to increasing the use of dental services on a regular basis.

  10. Evaluation of estimation methods and power of tests of discrete covariates in repeated time-to-event parametric models: application to Gaucher patients treated by imiglucerase.

    PubMed

    Vigan, Marie; Stirnemann, Jérôme; Mentré, France

    2014-05-01

    Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and of the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of tests of a dichotomous covariate on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after the initiation of treatment with imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: none, low, or high. Several values of the baseline hazard model, variability, number of subjects, and effect of the covariate were studied. For each scenario, 100 datasets were simulated for estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied the properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and the AGQ algorithm provided good estimates of the parameters, much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
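For intuition about the Wald and likelihood ratio tests compared above, here is a hedged sketch in a much simpler setting than the study's frailty models: a two-group Poisson rate model, where both statistics are available in closed form and are referred to a χ²(1) distribution.

```python
import math

def wald_and_lrt_poisson(y0, y1, t0, t1):
    """Wald and likelihood-ratio statistics for a group effect beta in a
    toy Poisson rate model log(rate) = mu + beta*group (illustrative, not
    the paper's model). y = event counts, t = exposure times per group."""
    beta_hat = math.log((y1 / t1) / (y0 / t0))   # MLE of the log rate ratio
    se = math.sqrt(1 / y0 + 1 / y1)              # delta-method standard error
    wald = (beta_hat / se) ** 2
    # log-likelihood up to additive constants: y*log(rate*t) - rate*t
    def ll(y, rate, t):
        return y * math.log(rate * t) - rate * t
    ll1 = ll(y0, y0 / t0, t0) + ll(y1, y1 / t1, t1)   # full model (two rates)
    pooled = (y0 + y1) / (t0 + t1)                    # null model (one rate)
    ll0 = ll(y0, pooled, t0) + ll(y1, pooled, t1)
    lrt = 2 * (ll1 - ll0)
    return wald, lrt   # both ~ chi-square(1) under H0
```

With identical observed rates both statistics are exactly zero; a rate ratio of 1.6 over 100 exposure units per arm pushes both past the 3.84 critical value at the 5% level.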

  11. Sequential chemotherapy followed by reduced-intensity conditioning and allogeneic haematopoietic stem cell transplantation in adult patients with relapse or refractory acute myeloid leukaemia: a survey from the Acute Leukaemia Working Party of EBMT.

    PubMed

    Ringdén, Olle; Labopin, Myriam; Schmid, Christoph; Sadeghi, Behnam; Polge, Emmanuelle; Tischer, Johanna; Ganser, Arnold; Michallet, Mauricette; Kanz, Lothar; Schwerdtfeger, Rainer; Nagler, Arnon; Mohty, Mohamad

    2017-02-01

    This study analysed the outcome of 267 patients with relapse/refractory acute myeloid leukaemia (AML) who received sequential chemotherapy including fludarabine, cytarabine and amsacrine followed by reduced-intensity conditioning (RIC) and allogeneic haematopoietic stem cell transplantation (HSCT). The transplants in 77 patients were from matched sibling donors (MSDs) and those in 190 patients were from matched unrelated donors. Most patients (94·3%) were given anti-T-cell antibodies. The incidence of acute graft-versus-host disease (GVHD) of grades II-IV was 32·1% and that of chronic GVHD was 30·2%. The 3-year probability of non-relapse mortality (NRM) was 25·9%, that of relapse was 48·5%, that of GVHD-free and relapse-free survival (GRFS) was 17·8% and that of leukaemia-free survival (LFS) was 25·6%. In multivariate analysis, unrelated donor recipients more frequently had acute GVHD of grades II-IV [hazard ratio (HR) = 1·98, P = 0·017] and suffered less relapses (HR = 0·62, P = 0·01) than MSD recipients. Treatment with anti-T-cell antibodies reduced NRM (HR = 0·35, P = 0·01) and improved survival (HR = 0·49, P = 0·01), GRFS (HR = 0·37, P = 0·0004) and LFS (HR = 0·46, P = 0·005). Thus, sequential chemotherapy followed by RIC HSCT and use of anti-T-cell antibodies seems promising in patients with refractory AML. © 2016 John Wiley & Sons Ltd.

  12. Cost-effectiveness of pediatric bilateral cochlear implantation in Spain.

    PubMed

    Pérez-Martín, Jorge; Artaso, Miguel A; Díez, Francisco J

    2017-12-01

    To determine the incremental cost-effectiveness of bilateral versus unilateral cochlear implantation for 1-year-old children suffering from bilateral sensorineural severe to profound hearing loss from the perspective of the Spanish public health system. Cost-utility analysis. We conducted a general-population survey to estimate the quality-of-life increase contributed by the second implant. We built a Markov influence diagram and evaluated it for a life-long time horizon with a 3% discount rate in the base case. The incremental cost-effectiveness ratio of simultaneous bilateral implantation with respect to unilateral implantation for 1-year-old children with severe to profound deafness is €10,323 per quality-adjusted life year (QALY). For sequential bilateral implantation, it rises to €11,733/QALY. Both options are cost-effective for the Spanish health system, whose willingness to pay is estimated at around €30,000/QALY. The probabilistic sensitivity analysis shows that the probability of bilateral implantation being cost-effective reaches 100% for that cost-effectiveness threshold. Bilateral implantation is clearly cost-effective for the population considered. If possible, it should be done simultaneously (i.e., in one surgical operation), because it is as safe and effective as sequential implantation, and saves costs for the system and for users and their families. Sequential implantation is also cost-effective for children who have received the first implant recently, but it is difficult to determine when it ceases to be so because of the lack of detailed data. These results are specific for Spain, but the model can easily be adapted to other countries. 2C. Laryngoscope, 127:2866-2872, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Improving Outcomes in Colorectal Surgery by Sequential Implementation of Multiple Standardized Care Programs.

    PubMed

    Keenan, Jeffrey E; Speicher, Paul J; Nussbaum, Daniel P; Adam, Mohamed Abdelgadir; Miller, Timothy E; Mantyh, Christopher R; Thacker, Julie K M

    2015-08-01

    The purpose of this study was to examine the impact of the sequential implementation of the enhanced recovery program (ERP) and surgical site infection bundle (SSIB) on short-term outcomes in colorectal surgery (CRS) to determine if the presence of multiple standardized care programs provides additive benefit. Institutional ACS-NSQIP data were used to identify patients who underwent elective CRS from September 2006 to March 2013. The cohort was stratified into 3 groups relative to implementation of the ERP (February 1, 2010) and SSIB (July 1, 2011). Unadjusted characteristics and 30-day outcomes were assessed, and inverse proportional weighting was then used to determine the adjusted effect of these programs. There were 787 patients included: 337, 165, and 285 in the pre-ERP/SSIB, post-ERP/pre-SSIB, and post-ERP/SSIB periods, respectively. After inverse probability weighting (IPW) adjustment, groups were balanced with respect to patient and procedural characteristics considered. Compared with the pre-ERP/SSIB group, the post-ERP/pre-SSIB group had significantly reduced length of hospitalization (8.3 vs 6.6 days, p = 0.01) but did not differ with respect to postoperative wound complications and sepsis. Subsequent introduction of the SSIB then resulted in a significant decrease in superficial SSI (16.1% vs 6.3%, p < 0.01) and postoperative sepsis (11.2% vs 1.8%, p < 0.01). Finally, inflation-adjusted mean hospital cost for a CRS admission fell from $31,926 in 2008 to $22,044 in 2013 (p < 0.01). Sequential implementation of the ERP and SSIB provided incremental improvements in CRS outcomes while controlling hospital costs, supporting their combined use as an effective strategy toward improving the quality of patient care. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
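As a toy illustration of the inverse probability weighting used for adjustment above (not the study's actual covariate set), the sketch below balances a single binary covariate across two periods by weighting each record by the inverse of its estimated propensity:

```python
from collections import defaultdict

def ipw_means(records):
    """Inverse-probability-weighted outcome means by period, balancing one
    binary covariate (a toy version of the IPW adjustment in the study;
    the record layout is hypothetical). records: (period, covariate, outcome)."""
    n, n1 = defaultdict(int), defaultdict(int)
    for period, x, _ in records:
        n[x] += 1
        n1[x] += period
    ps = {x: n1[x] / n[x] for x in n}   # propensity of period 1 per stratum
    num = {0: 0.0, 1: 0.0}
    den = {0: 0.0, 1: 0.0}
    for period, x, y in records:
        w = 1 / ps[x] if period == 1 else 1 / (1 - ps[x])  # inverse propensity
        num[period] += w * y
        den[period] += w
    return num[0] / den[0], num[1] / den[1]  # weighted means, periods 0 and 1
```

When the outcome depends only on the covariate, the weighted period means coincide even though the raw means differ, which is exactly the confounding that weighting removes.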

  14. A sequential data assimilation approach for the joint reconstruction of mantle convection and surface tectonics

    NASA Astrophysics Data System (ADS)

    Bocher, M.; Coltice, N.; Fournier, A.; Tackley, P. J.

    2016-01-01

    With the progress of mantle convection modelling over the last decade, it now becomes possible to solve for the dynamics of the interior flow and the surface tectonics to first order. We show here that tectonic data (like surface kinematics and seafloor age distribution) and mantle convection models with plate-like behaviour can in principle be combined to reconstruct mantle convection. We present a sequential data assimilation method, based on suboptimal schemes derived from the Kalman filter, where surface velocities and seafloor age maps are not used as boundary conditions for the flow, but as data to assimilate. Two stages (a forecast followed by an analysis) are repeated sequentially to take into account data observed at different times. Whenever observations are available, an analysis infers the most probable state of the mantle at this time, considering a prior guess (supplied by the forecast) and the new observations at hand, using the classical best linear unbiased estimate. Between two observation times, the evolution of the mantle is governed by the forward model of mantle convection. This method is applied to synthetic 2-D spherical annulus mantle cases to evaluate its efficiency. We compare the reference evolutions to the estimations obtained by data assimilation. Two parameters control the behaviour of the scheme: the time between two analyses, and the amplitude of noise in the synthetic observations. Our technique proves to be efficient in retrieving temperature field evolutions provided the time between two analyses is ≲10 Myr. If the amplitude of the a priori error on the observations is large (30 per cent), our method provides a better estimate of surface tectonics than the observations, taking advantage of the information within the physics of convection.
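The forecast/analysis cycle described above can be sketched in scalar form; the real scheme operates on high-dimensional temperature fields with suboptimal Kalman gains, but the structure is the same (names here are illustrative):

```python
def assimilate(x0, var0, observations, obs_var, model_step, model_var):
    """Scalar forecast/analysis cycling with the best linear unbiased
    estimate (a toy analogue of the suboptimal Kalman schemes in the
    paper). model_step advances the state between observation times;
    model_var is the forecast error variance added per cycle."""
    x, p = x0, var0
    history = []
    for y in observations:
        x, p = model_step(x), p + model_var   # forecast: propagate, inflate
        k = p / (p + obs_var)                 # gain of the BLUE analysis
        x = x + k * (y - x)                   # analysis: blend forecast and data
        p = (1 - k) * p
        history.append(x)
    return history
```

With a persistence model and repeated observations of a constant truth, the estimate converges toward the truth while its variance shrinks, mirroring how repeated analyses pull the mantle state toward the tectonic data.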

  15. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement, and network security tasks such as the detection of distributed denial-of-service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possibly millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost, and low processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on the fixed sample size test (FSST), which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore a second, novel method based on the truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and identification time, respectively. According to the way the parameters in TSPRT are determined, two versions of TSPRT are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement, as compared to previously proposed methods.
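A minimal sketch of the sequential probability ratio test underlying TSPRT, reduced to a Bernoulli stream of per-packet indicators with Wald's boundary approximations and a simple truncation rule (parameter choices are illustrative, not those of the paper):

```python
import math

def truncated_sprt(samples, p0, p1, alpha=0.01, beta=0.01, max_n=500):
    """Truncated SPRT deciding whether a flow's packet-sampling probability
    is high (>= p1) or low (<= p0). samples: 0/1 indicators of whether each
    sampled packet belongs to the flow (a simplified sketch of the TSPRT
    idea; boundaries follow Wald's approximations)."""
    a = math.log(beta / (1 - alpha))   # accept-H0 (low-rate) boundary
    b = math.log((1 - beta) / alpha)   # accept-H1 (high-rate) boundary
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= b:
            return "high-rate", n
        if llr <= a:
            return "low-rate", n
        if n >= max_n:  # truncation: decide by the sign of the evidence
            return ("high-rate" if llr > 0 else "low-rate"), n
    return "undecided", n
```

The asymmetry in stopping times is visible directly: evidence for a high rate accumulates in large log-likelihood steps, so high-rate flows are flagged after a handful of samples, while low-rate flows drift slowly toward the lower boundary and are discarded later.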

  16. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precision and sources of uncertainty. Individual CPT samplings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between the Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  17. Sequential detection of influenza epidemics by the Kolmogorov-Smirnov test

    PubMed Central

    2012-01-01

    Background Influenza is a well known and common human respiratory infection, causing significant morbidity and mortality every year. Despite influenza variability, fast and reliable outbreak detection is required for health resource planning. Clinical health records, as published by the Diagnosticat database in Catalonia, host useful data for probabilistic detection of influenza outbreaks. Methods This paper proposes a statistical method to detect influenza epidemic activity. Non-epidemic incidence rates are modeled against the exponential distribution, and the maximum likelihood estimate of the decay factor λ is calculated. The sequential detection algorithm updates the parameter as new data become available. Binary epidemic detection of weekly incidence rates is assessed by a Kolmogorov-Smirnov test on the absolute difference between the empirical and the cumulative density function of the estimated exponential distribution, with significance level 0 ≤ α ≤ 1. Results The main advantage with respect to other approaches is the adoption of a statistically meaningful test, which provides an indicator of epidemic activity with an associated probability. The detection algorithm was initiated with the parameter λ0 = 3.8617, estimated from the training sequence (corresponding to non-epidemic incidence rates of the 2008-2009 influenza season), and sequentially updated. The Kolmogorov-Smirnov test detected the following weeks as epidemic for each influenza season: weeks 50-10 (2008-2009 season), weeks 38-50 (2009-2010 season), weeks 50-9 (2010-2011 season), and weeks 3-12 of the current 2011-2012 season. Conclusions Real medical data were used to assess the validity of the approach, as well as to construct a realistic statistical model of weekly influenza incidence rates in non-epidemic periods. For the tested data, the results confirmed the ability of the algorithm to detect the start and the end of epidemic periods. In general, the proposed test could be applied to other data sets to quickly detect influenza outbreaks. The sequential structure of the test makes it suitable for implementation on many platforms at a low computational cost, without requiring large data sets to be stored. PMID:23031321
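The detection statistic can be sketched as follows: fit the exponential decay factor on non-epidemic weeks, then flag a window as epidemic when the one-sample Kolmogorov-Smirnov distance exceeds the asymptotic critical value (the constant 1.358 corresponds to α = 0.05; this is a simplified reading of the method, not the authors' code):

```python
import math

def ks_exponential(rates, lam):
    """One-sample Kolmogorov-Smirnov distance between weekly incidence
    rates and the fitted exponential(lam) distribution; lam plays the role
    of the decay factor estimated from non-epidemic weeks."""
    xs = sorted(rates)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = 1.0 - math.exp(-lam * x)  # exponential CDF
        # compare F against the empirical CDF just before and after the jump
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

def detect_epidemic(rates, lam, c_alpha=1.358):
    """Flag epidemic activity when the KS distance exceeds the asymptotic
    critical value c(alpha)/sqrt(n) (c = 1.358 for alpha = 0.05)."""
    return ks_exponential(rates, lam) > c_alpha / math.sqrt(len(rates))
```

Rates drawn from the fitted exponential stay below the threshold, while a run of anomalously high incidence rates pushes the empirical CDF far from the exponential one and trips the detector.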

  18. Sequential chemical extraction for a phosphogypsum environmental impact evaluation

    NASA Astrophysics Data System (ADS)

    Gennari, R. F.; Garcia, I.; Medina, N. H.; Silveira, M. A. G.

    2013-05-01

    Phosphogypsum (PG) is gypsum generated during phosphoric acid production. PG is stocked in large stacks or accumulated in lakes; it contains heavy metals and naturally occurring radioactive elements. The metal contamination may affect the functionality, sustainability, and biodiversity of ecosystems. In this work, PG samples were analyzed by plasma spectrometry, and the total metal content and the extractable fraction of the chemical elements were determined. For K, Ni, Zn, Cr, Cd, Ba, Pb, and U, the concentrations obtained are lower than those reported for an Idaho plant and also lower than those found in the soil, indicating that the PG sample analyzed will probably not cause any additional metal or natural-radiation contamination.

  19. Radiobiological concepts for treatment planning of schemes that combine external beam radiotherapy and systemic targeted radiotherapy

    NASA Astrophysics Data System (ADS)

    Fabián Calderón Marín, Carlos; González González, Joaquín Jorge; Laguardia, Rodolfo Alfonso

    2017-09-01

    The combination of external beam radiotherapy and systemic targeted radiotherapy (CIERT) could be a reliable alternative for patients with multiple lesions, or for those in whom treatment planning may be difficult because of organ-at-risk (OAR) constraints. Radiobiological models should be able to predict the biological response to irradiation, considering the differences in the temporal pattern of dose delivery between the two modalities. Two CIERT scenarios were studied: a sequential combination, in which one modality is executed after the other, and a concurrent combination, in which both modalities run simultaneously. Expressions are provided for calculating the dose-response magnitudes Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). General results on radiobiological modeling using the linear-quadratic (LQ) model are also discussed. Inter-subject variation of radiosensitivity and the volume irradiation effect in CIERT are studied. OAR doses should be kept under control during planning of concurrent CIERT treatments as the administered activity is increased. The formulation presented here may be used for the biological evaluation of prescriptions and the biological treatment planning of CIERT schemes in clinical situations.
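As background for the TCP expressions mentioned above, the standard Poisson TCP under the linear-quadratic model for a fractionated external-beam schedule can be sketched as follows (textbook form; the paper's CIERT expressions additionally account for the temporal dose pattern of the systemic component):

```python
import math

def tcp_lq(n0, alpha, beta, d_fraction, n_fractions):
    """Poisson tumour control probability under the linear-quadratic model
    for n_fractions fractions of dose d_fraction (Gy). n0 is the initial
    clonogen number; alpha, beta are the LQ radiosensitivity parameters."""
    sf = math.exp(-(alpha * d_fraction + beta * d_fraction ** 2))  # per-fraction survival
    surviving = n0 * sf ** n_fractions   # expected surviving clonogens
    return math.exp(-surviving)          # Poisson probability of zero survivors
```

For typical parameters (n0 = 1e7, alpha = 0.3 Gy⁻¹, beta = 0.03 Gy⁻², 2 Gy fractions), TCP rises steeply with the number of fractions, which is the dose-response behaviour the combined-modality expressions generalize.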

  20. A utility-based design for randomized comparative trials with ordinal outcomes and prognostic subgroups.

    PubMed

    Murray, Thomas A; Yuan, Ying; Thall, Peter F; Elizondo, Joan H; Hofstetter, Wayne L

    2018-01-22

    A design is proposed for randomized comparative trials with ordinal outcomes and prognostic subgroups. The design accounts for patient heterogeneity by allowing possibly different comparative conclusions within subgroups. The comparative testing criterion is based on utilities for the levels of the ordinal outcome and a Bayesian probability model. Designs based on two alternative models that include treatment-subgroup interactions are considered, the proportional odds model and a non-proportional odds model with a hierarchical prior that shrinks toward the proportional odds model. A third design that assumes homogeneity and ignores possible treatment-subgroup interactions also is considered. The three approaches are applied to construct group sequential designs for a trial of nutritional prehabilitation versus standard of care for esophageal cancer patients undergoing chemoradiation and surgery, including both untreated patients and salvage patients whose disease has recurred following previous therapy. A simulation study is presented that compares the three designs, including evaluation of within-subgroup type I and II error probabilities under a variety of scenarios including different combinations of treatment-subgroup interactions. © 2018, The International Biometric Society.

  1. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  2. Bayes factor design analysis: Planning for compelling evidence.

    PubMed

    Schönbrodt, Felix D; Wagenmakers, Eric-Jan

    2018-02-01

    A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either [Formula: see text] or [Formula: see text], and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
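A hedged sketch of an open-ended SBF design in the simplest conjugate setting (normal data with known variance, so the Bayes factor is analytic; the paper's designs use default Bayes factors for t tests instead):

```python
import math, random

def bf10_normal_mean(xs, tau2=1.0):
    """Bayes factor for H1: mu ~ N(0, tau2) vs H0: mu = 0, with data
    x_i ~ N(mu, 1). The marginal likelihood of the sample mean is normal
    under both hypotheses, so BF10 is a ratio of two normal densities."""
    n = len(xs)
    xbar = sum(xs) / n
    v0, v1 = 1.0 / n, tau2 + 1.0 / n   # sampling variances of xbar under H0, H1
    def logpdf(x, v):
        return -0.5 * (math.log(2 * math.pi * v) + x * x / v)
    return math.exp(logpdf(xbar, v1) - logpdf(xbar, v0))

def sbf_trial(mu, threshold=10.0, nmax=500, seed=1):
    """Open-ended SBF design: add one observation at a time and stop once
    BF10 > threshold or BF10 < 1/threshold (or at the truncation nmax)."""
    rng = random.Random(seed)
    xs = []
    while len(xs) < nmax:
        xs.append(rng.gauss(mu, 1.0))
        if len(xs) >= 2:
            bf = bf10_normal_mean(xs)
            if bf > threshold or bf < 1.0 / threshold:
                return bf, len(xs)
    return bf10_normal_mean(xs), len(xs)
```

Running `sbf_trial` repeatedly over different seeds is exactly the Monte Carlo evaluation the paper describes: the distribution of stopping sample sizes and of terminal Bayes factors gives the expected sample size and the probabilities of strong, weak, and misleading evidence for a given true effect `mu`.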

  3. Variable selection for marginal longitudinal generalized linear models.

    PubMed

    Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio

    2005-06-01

    Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast the results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GC(p).

  4. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of [Formula: see text] , the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  5. A computational theory for the classification of natural biosonar targets based on a spike code.

    PubMed

    Müller, Rolf

    2003-08-01

    A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.
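The sequential probability ratio test at the heart of the classification step above can be sketched generically. The toy below runs Wald's SPRT for two simple Gaussian hypotheses with error targets alpha and beta; it is an illustrative assumption, not the paper's echo-train classifier:

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT deciding N(mu1, sigma^2) vs N(mu0, sigma^2)."""
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment contributed by one observation
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Observations sitting exactly at mu1 add 0.5 to the LLR each; the upper
# boundary log(99) ~ 4.595 is first crossed on the 10th sample.
print(sprt([1.0] * 20))  # -> ('H1', 10)
```

The appeal for echo trains is the same as in the record: evidence accumulates echo by echo, and the test stops as soon as either boundary is crossed, typically well before a fixed-sample test would.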

  6. Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis

    DOE PAGES

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.

  7. Resolution of deep eudicot phylogeny and their temporal diversification using nuclear genes from transcriptomic and genomic datasets.

    PubMed

    Zeng, Liping; Zhang, Ning; Zhang, Qiang; Endress, Peter K; Huang, Jie; Ma, Hong

    2017-05-01

    Explosive diversification is widespread in eukaryotes, making it difficult to resolve phylogenetic relationships. Eudicots contain c. 75% of extant flowering plants, are important for human livelihood and terrestrial ecosystems, and have probably experienced explosive diversifications. The eudicot phylogenetic relationships, especially among those of the Pentapetalae, remain unresolved. Here, we present a highly supported eudicot phylogeny and diversification rate shifts using 31 newly generated transcriptomes and 88 other datasets covering 70% of eudicot orders. A highly supported eudicot phylogeny divided Pentapetalae into two groups: one with rosids, Saxifragales, Vitales and Santalales; the other containing asterids, Caryophyllales and Dilleniaceae, with uncertainty for Berberidopsidales. Molecular clock analysis estimated that crown eudicots originated c. 146 Ma, considerably earlier than earliest tricolpate pollen fossils and most other molecular clock estimates, and Pentapetalae sequentially diverged into eight major lineages within c. 15 Myr. Two identified increases of diversification rate are located in the stems leading to Pentapetalae and asterids, and lagged behind the gamma hexaploidization. The nuclear genes from newly generated transcriptomes revealed a well-resolved eudicot phylogeny, sequential separation of major core eudicot lineages and temporal mode of diversifications, providing new insights into the evolutionary trend of morphologies and contributions to the diversification of eudicots. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  8. The role of short-term memory impairment in nonword repetition, real word repetition, and nonword decoding: A case study.

    PubMed

    Peter, Beate

    2018-01-01

    In a companion study, adults with dyslexia and adults with a probable history of childhood apraxia of speech showed evidence of difficulty with processing sequential information during nonword repetition, multisyllabic real word repetition and nonword decoding. Results suggested that some errors arose in visual encoding during nonword reading, all levels of processing but especially short-term memory storage/retrieval during nonword repetition, and motor planning and programming during complex real word repetition. To further investigate the role of short-term memory, a participant with short-term memory impairment (MI) was recruited. MI was confirmed with poor performance during a sentence repetition and three nonword repetition tasks, all of which have a high short-term memory load, whereas typical performance was observed during tests of reading, spelling, and static verbal knowledge, all with low short-term memory loads. Experimental results show error-free performance during multisyllabic real word repetition but high counts of sequence errors, especially migrations and assimilations, during nonword repetition, supporting short-term memory as a locus of sequential processing deficit during nonword repetition. Results are also consistent with the hypothesis that during complex real word repetition, short-term memory is bypassed as the word is recognized and retrieved from long-term memory prior to producing the word.

  9. Hidden Markov model approach for identifying the modular framework of the protein backbone.

    PubMed

    Camproux, A C; Tuffery, P; Chevrolat, J P; Boisvieux, J F; Hazout, S

    1999-12-01

    The hidden Markov model (HMM) was used to identify recurrent short 3D structural building blocks (SBBs) describing protein backbones, independently of any a priori knowledge. Polypeptide chains are decomposed into a series of short segments defined by their inter-alpha-carbon distances. Basically, the model takes into account the sequentiality of the observed segments and assumes that each one corresponds to one of several possible SBBs. Fitting the model to a database of non-redundant proteins allowed us to decode proteins in terms of 12 distinct SBBs with different roles in protein structure. Some SBBs correspond to classical regular secondary structures. Others correspond to a significant subdivision of their bounding regions previously considered to be a single pattern. The major contribution of the HMM is that this model implicitly takes into account the sequential connections between SBBs and thus describes the most probable pathways by which the blocks are connected to form the framework of the protein structures. Validation of the SBBs code was performed by extracting SBB series repeated in recoding proteins and examining their structural similarities. Preliminary results on the sequence specificity of SBBs suggest promising perspectives for the prediction of SBBs or series of SBBs from the protein sequences.
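The decoding step an HMM contributes here, recovering the most probable sequence of hidden blocks from a series of observed segments, is standard Viterbi decoding. The two-state toy below is an assumed illustration, not the paper's 12-SBB model:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden-state path for observation sequence `obs`.
    pi: initial probs (S,), A: transition probs (S, S), B: emission probs (S, O)."""
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # log delta at t = 0
    back = np.zeros((T, S), dtype=int)             # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)         # scores[i, j]: from state i to j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two sticky states emitting mostly-distinct symbols: decoding recovers the blocks.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
print(viterbi([0, 0, 0, 1, 1, 1], pi, A, B))  # -> [0, 0, 0, 1, 1, 1]
```

The sticky transition matrix plays the role the record describes: it encodes the sequential connections between blocks, so the decoded path reflects probable pathways rather than per-segment best guesses.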

  10. Random covering of the circle: the configuration-space of the free deposition process

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2003-12-01

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.

  11. Spatial distribution and sequential sampling plans for Tuta absoluta (Lepidoptera: Gelechiidae) in greenhouse tomato crops.

    PubMed

    Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino

    2015-09-01

    The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.

  12. Sequential treatment of icotinib after first-line pemetrexed in advanced lung adenocarcinoma with unknown EGFR gene status.

    PubMed

    Zheng, Yulong; Fang, Weijia; Deng, Jing; Zhao, Peng; Xu, Nong; Zhou, Jianying

    2014-07-01

    In non-small cell lung cancer (NSCLC), the well-developed epidermal growth factor receptor (EGFR) is an important therapeutic target. EGFR activating gene mutations have been proved strongly predictive of response to EGFR-tyrosine kinase inhibitors (TKI) in NSCLC. However, both in daily clinical practice and clinical trials, patients with unknown EGFR gene status (UN-EGFR-GS) are very common. In this study, we assessed efficacy and tolerability of sequential treatment of first-line pemetrexed followed by icotinib in Chinese advanced lung adenocarcinoma with UN-EGFR-GS. We analyzed 38 patients with advanced lung adenocarcinoma with UN-EGFR-GS treated with first-line pemetrexed-based chemotherapy followed by icotinib as maintenance or second-line therapy. The response rates to pemetrexed and icotinib were 21.1% and 42.1%, respectively. The median overall survival was 27.0 months (95% CI, 19.7-34.2 months). The 12-month overall survival probability was 68.4%. The most common toxicities observed in icotinib phase were rashes, diarrheas, and elevated aminotransferase. Subgroup analysis indicated that the overall survival is correlated with response to icotinib. The sequence of first-line pemetrexed-based chemotherapy followed by icotinib treatment is a promising option for advanced lung adenocarcinoma with UN-EGFR-GS in China.

  13. A new strategy for snow-cover mapping using remote sensing data and ensemble based systems techniques

    NASA Astrophysics Data System (ADS)

    Roberge, S.; Chokmani, K.; De Sève, D.

    2012-04-01

    The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based systems techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability that a pixel is snow covered, together with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of individual classifiers that make up the ensemble in such a way that correct decisions are amplified, and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one is specific for autumn (from October 1st to December 31st) and the other for spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters. One hundred versions are included in the ensemble. The probability of a pixel being snow, no-snow or cloud covered corresponds to the number of votes the pixel received for that class from all classifiers. 
The overall performance of ensemble based mapping is compared to the overall performance of the chosen classifier, and also with ground observations at meteorological stations.

  14. Long-term volcanic hazard assessment on El Hierro (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Becerril, L.; Bartolini, S.; Sobradelo, R.; Martí, J.; Morales, J. M.; Galindo, I.

    2014-07-01

    Long-term hazard assessment, one of the bastions of risk-mitigation programs, is required for land-use planning and for developing emergency plans. To ensure quality and representative results, long-term volcanic hazard assessment requires several sequential steps to be completed, which include the compilation of geological and volcanological information, the characterisation of past eruptions, spatial and temporal probabilistic studies, and the simulation of different eruptive scenarios. Despite being a densely populated active volcanic region that receives millions of visitors per year, no systematic hazard assessment has ever been conducted on the Canary Islands. In this paper we focus our attention on El Hierro, the youngest of the Canary Islands and the most recently affected by an eruption. We analyse the past eruptive activity to determine the spatial and temporal probability, and likely style of a future eruption on the island, i.e. the where, when and how. By studying the past eruptive behaviour of the island and assuming that future eruptive patterns will be similar, we aim to identify the most likely volcanic scenarios and corresponding hazards, which include lava flows, pyroclastic fallout and pyroclastic density currents (PDCs). Finally, we estimate their probability of occurrence. The end result, through the combination of the most probable scenarios (lava flows, pyroclastic density currents and ashfall), is the first qualitative integrated volcanic hazard map of the island.

  15. Effect of feedback mode and task difficulty on quality of timing decisions in a zero-sum game.

    PubMed

    Tikuisis, Peter; Vartanian, Oshin; Mandel, David R

    2014-09-01

    The objective was to investigate how the mode of performance outcome feedback and task difficulty interact to affect timing decisions (i.e., when to act). Feedback is widely acknowledged to affect task performance. However, the extent to which the impact of feedback display mode on timing decisions is moderated by task difficulty remains largely unknown. Participants repeatedly engaged in a zero-sum game involving silent duels with a computerized opponent and were given visual performance feedback after each engagement. They were sequentially tested on three different levels of task difficulty (low, intermediate, and high) in counterbalanced order. Half received relatively simple "inside view" binary outcome feedback, and the other half received complex "outside view" hit rate probability feedback. The key dependent variables were response time (i.e., time taken to make a decision) and survival outcome. When task difficulty was low to moderate, participants were more likely to learn and perform better from hit rate probability feedback than binary outcome feedback. However, better performance with hit rate feedback exacted a higher cognitive cost manifested by higher decision response time. The beneficial effect of hit rate probability feedback on timing decisions is partially moderated by task difficulty. Performance feedback mode should be judiciously chosen in relation to task difficulty for optimal performance in tasks involving timing decisions.

  16. The first law of black hole mechanics for fields with internal gauge freedom

    NASA Astrophysics Data System (ADS)

    Prabhu, Kartik

    2017-02-01

    We derive the first law of black hole mechanics for physical theories based on a local, covariant and gauge-invariant Lagrangian where the dynamical fields transform non-trivially under the action of some internal gauge transformations. The theories of interest include General Relativity formulated in terms of tetrads, Einstein-Yang-Mills theory and Einstein-Dirac theory. Since the dynamical fields of these theories have some internal gauge freedom, we argue that there is no natural group action of diffeomorphisms of spacetime on such dynamical fields. In general, such fields cannot even be represented as smooth, globally well-defined tensor fields on spacetime. Consequently the derivation of the first law by Iyer and Wald cannot be used directly. Nevertheless, we show how such theories can be formulated on a principal bundle and that there is a natural action of automorphisms of the bundle on the fields. These bundle automorphisms encode both spacetime diffeomorphisms and internal gauge transformations. Using this reformulation we define the Noether charge associated to an infinitesimal automorphism and the corresponding notion of stationarity and axisymmetry of the dynamical fields. We first show that we can define certain potentials and charges at the horizon of a black hole so that the potentials are constant on the bifurcate Killing horizon, giving a generalised zeroth law for bifurcate Killing horizons. We further identify the gravitational potential and perturbed charge as the temperature and perturbed entropy of the black hole which gives an explicit formula for the perturbed entropy analogous to the Wald entropy formula. We then obtain a general first law of black hole mechanics for such theories. The first law relates the perturbed Hamiltonians at spatial infinity and the horizon, and the horizon contributions take the form of a ‘potential times perturbed charge’ term. 
We also comment on the ambiguities in defining a prescription for the total entropy for black holes.

  17. The two-year incidence of depression and anxiety disorders in spousal caregivers of persons with dementia: who is at the greatest risk?

    PubMed

    Joling, Karlijn J; van Marwijk, Harm W J; Veldhuijzen, Aaltje E; van der Horst, Henriëtte E; Scheltens, Philip; Smit, Filip; van Hout, Hein P J

    2015-03-01

    Caregivers of persons with dementia play an important and economically valuable role within society, but many may do so at a considerable cost to themselves. Knowing which caregivers have the highest risk of developing a mental disorder may contribute to better support of ultra-high-risk groups with preventive interventions. This study aims to describe the incidence of depression and anxiety disorders in caregivers and to identify its significant predictors. Prospective cohort study with a follow-up of 24 months. 181 spousal caregivers of persons with dementia without a clinical depression or anxiety disorder at baseline. Memory clinics, case management services, and primary care settings in the Netherlands. The onset of depression and anxiety was measured every 3 months with the MINI International Neuropsychiatric Interview, a structured diagnostic instrument for DSM-IV mental disorders. Potential predictors were assessed at baseline. 60% of the caregivers developed a depressive and/or anxiety disorder within 24 months: 37% a depression, 55% an anxiety disorder, and 32% both disorders. Sub-threshold depressive symptoms (Wald χ2=6.20, df=1, OR: 3.2, 95% CI: 1.28-8.03, p=0.013) and poor self-reported health of the caregiver (Wald χ2=5.56, df=1, OR: 1.17, 95% CI: 1.03-1.34, p=0.018) at baseline were significant predictors of disorder onset. Spousal caregivers of persons with dementia are at high risk of developing a mental disorder. Indicators related to the caregiver's (mental) health rather than environmental stressors such as patient characteristics or interruption of caregivers' daily activities predict disorder onset and can be used to identify caregivers for whom supporting preventive interventions are indicated. Copyright © 2015 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
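The Wald statistics quoted above fit together in a checkable way: for a fitted log-odds coefficient β with standard error SE, the Wald χ² is (β/SE)², the odds ratio is exp(β), and the 95% CI is exp(β ± 1.96·SE). A small sanity-check sketch, using only the OR and CI reported in the record:

```python
import math

# Reported for sub-threshold depressive symptoms: OR 3.2, 95% CI 1.28-8.03.
or_hat, lo, hi = 3.2, 1.28, 8.03
beta = math.log(or_hat)                          # log-odds coefficient
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from the CI width
wald_chi2 = (beta / se) ** 2
print(round(wald_chi2, 2))  # close to the reported Wald chi-square of 6.20
```

The recovered value differs from 6.20 only through rounding of the published OR and CI, confirming the three reported numbers are mutually consistent.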

  18. Ground-motion modeling of the 1906 San Francisco earthquake, part I: Validation using the 1989 Loma Prieta earthquake

    USGS Publications Warehouse

    Aagaard, Brad T.; Brocher, T.M.; Dolenc, D.; Dreger, D.; Graves, R.W.; Harmsen, S.; Hartzell, S.; Larsen, S.; Zoback, M.L.

    2008-01-01

    We compute ground motions for the Beroza (1991) and Wald et al. (1991) source models of the 1989 magnitude 6.9 Loma Prieta earthquake using four different wave-propagation codes and recently developed 3D geologic and seismic velocity models. In preparation for modeling the 1906 San Francisco earthquake, we use this well-recorded earthquake to characterize how well our ground-motion simulations reproduce the observed shaking intensities and amplitude and durations of recorded motions throughout the San Francisco Bay Area. All of the simulations generate ground motions consistent with the large-scale spatial variations in shaking associated with rupture directivity and the geologic structure. We attribute the small variations among the synthetics to the minimum shear-wave speed permitted in the simulations and how they accommodate topography. Our long-period simulations, on average, underpredict shaking intensities by about one-half modified Mercalli intensity (MMI) units (25%-35% in peak velocity), while our broadband simulations, on average, underpredict the shaking intensities by one-fourth MMI units (16% in peak velocity). Discrepancies with observations arise due to errors in the source models and geologic structure. The consistency in the synthetic waveforms across the wave-propagation codes for a given source model suggests the uncertainty in the source parameters tends to exceed the uncertainty in the seismic velocity structure. In agreement with earlier studies, we find that a source model with slip more evenly distributed northwest and southeast of the hypocenter would be preferable to both the Beroza and Wald source models. Although the new 3D seismic velocity model improves upon previous velocity models, we identify two areas needing improvement. Nevertheless, we find that the seismic velocity model and the wave-propagation codes are suitable for modeling the 1906 earthquake and scenario events in the San Francisco Bay Area.

  19. Efficacy of a Multi-level Intervention to Reduce Injecting and Sexual Risk Behaviors among HIV-Infected People Who Inject Drugs in Vietnam: A Four-Arm Randomized Controlled Trial

    PubMed Central

    Go, Vivian F.; Frangakis, Constantine; Minh, Nguyen Le; Latkin, Carl; Ha, Tran Viet; Mo, Tran Thi; Sripaipan, Teerada; Davis, Wendy W.; Zelaya, Carla; Vu, Pham The; Celentano, David D.; Quan, Vu Minh

    2015-01-01

    Introduction Injecting drug use is a primary driver of HIV epidemics in many countries. People who inject drugs (PWID) and are HIV infected are often doubly stigmatized and many encounter difficulties reducing risk behaviors. Prevention interventions for HIV-infected PWID that provide enhanced support at the individual, family, and community level to facilitate risk-reduction are needed. Methods 455 HIV-infected PWID and 355 of their HIV negative injecting network members living in 32 sub-districts in Thai Nguyen Province were enrolled. We conducted a two-stage randomization: First, sub-districts were randomized to either a community video screening and house-to-house visits or standard of care educational pamphlets. Second, within each sub-district, participants were randomized to receive either enhanced individual level post-test counseling and group support sessions or standard of care HIV testing and counseling. This resulted in four arms: 1) standard of care; 2) community level intervention; 3) individual level intervention; and 4) community plus individual intervention. Follow-up was conducted at 6, 12, 18, and 24 months. Primary outcomes were self-reported HIV injecting and sexual risk behaviors. Secondary outcomes included HIV incidence among HIV negative network members. Results Fewer participants reported sharing injecting equipment and unprotected sex from baseline to 24 months in all arms (77% to 4% and 24% to 5% respectively). There were no significant differences at the 24-month visit among the 4 arms (Wald = 3.40 (3 df); p = 0.33; Wald = 6.73 (3 df); p = 0.08). There were a total of 4 HIV seroconversions over 24 months with no significant difference between intervention and control arms. Discussion Understanding the mechanisms through which all arms, particularly the control arm, demonstrated both low risk behaviors and low HIV incidence has important implications for policy and prevention programming. 
Trial Registration ClinicalTrials.gov NCT01689545 PMID:26011427

  20. Effects of an HIV peer prevention intervention on sexual and injecting risk behaviors among injecting drug users and their risk partners in Thai Nguyen, Vietnam: a randomized controlled trial.

    PubMed

    Go, Vivian F; Frangakis, Constantine; Le Minh, Nguyen; Latkin, Carl A; Ha, Tran Viet; Mo, Tran Thi; Sripaipan, Teerada; Davis, Wendy; Zelaya, Carla; Vu, Pham The; Chen, Yong; Celentano, David D; Quan, Vu Minh

    2013-11-01

    Globally, 30% of new HIV infections outside sub-Saharan Africa involve injecting drug users (IDU) and in many countries, including Vietnam, HIV epidemics are concentrated among IDU. We conducted a randomized controlled trial in Thai Nguyen, Vietnam, to evaluate whether a peer oriented behavioral intervention could reduce injecting and sexual HIV risk behaviors among IDU and their network members. 419 HIV-negative index IDU aged 18 years or older and 516 injecting and sexual network members were enrolled. Each index participant was randomly assigned to receive a series of six small group peer educator-training sessions and three booster sessions in addition to HIV testing and counseling (HTC) (intervention; n = 210) or HTC only (control; n = 209). Follow-up, including HTC, was conducted at 3, 6, 9 and 12 months post-intervention. The proportion of unprotected sex dropped significantly from 49% to 27% (SE (difference) = 3%, p < 0.01) between baseline and the 3-month visit among all index-network member pairs. However, at 12 months, post-intervention, intervention participants had a 14% greater decline in unprotected sex relative to control participants (Wald test = 10.8, df = 4, p = 0.03). This intervention effect is explained by trial participants assigned to the control arm who missed at least one standardized HTC session during follow-up and subsequently reported increased unprotected sex. The proportion of observed needle/syringe sharing dropped significantly between baseline and the 3-month visit (14% vs. 3%, SE (difference) = 2%, p < 0.01) and persisted until 12 months, but there was no difference across trial arms (Wald test = 3.74, df = 3, p = 0.44). Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Efficacy of a Multi-level Intervention to Reduce Injecting and Sexual Risk Behaviors among HIV-Infected People Who Inject Drugs in Vietnam: A Four-Arm Randomized Controlled Trial.

    PubMed

    Go, Vivian F; Frangakis, Constantine; Minh, Nguyen Le; Latkin, Carl; Ha, Tran Viet; Mo, Tran Thi; Sripaipan, Teerada; Davis, Wendy W; Zelaya, Carla; Vu, Pham The; Celentano, David D; Quan, Vu Minh

    2015-01-01

    Injecting drug use is a primary driver of HIV epidemics in many countries. People who inject drugs (PWID) and are HIV infected are often doubly stigmatized and many encounter difficulties reducing risk behaviors. Prevention interventions for HIV-infected PWID that provide enhanced support at the individual, family, and community level to facilitate risk-reduction are needed. 455 HIV-infected PWID and 355 of their HIV negative injecting network members living in 32 sub-districts in Thai Nguyen Province were enrolled. We conducted a two-stage randomization: First, sub-districts were randomized to either a community video screening and house-to-house visits or standard of care educational pamphlets. Second, within each sub-district, participants were randomized to receive either enhanced individual level post-test counseling and group support sessions or standard of care HIV testing and counseling. This resulted in four arms: 1) standard of care; 2) community level intervention; 3) individual level intervention; and 4) community plus individual intervention. Follow-up was conducted at 6, 12, 18, and 24 months. Primary outcomes were self-reported HIV injecting and sexual risk behaviors. Secondary outcomes included HIV incidence among HIV negative network members. Fewer participants reported sharing injecting equipment and unprotected sex from baseline to 24 months in all arms (77% to 4% and 24% to 5% respectively). There were no significant differences at the 24-month visit among the 4 arms (Wald = 3.40 (3 df); p = 0.33; Wald = 6.73 (3 df); p = 0.08). There were a total of 4 HIV seroconversions over 24 months with no significant difference between intervention and control arms. Understanding the mechanisms through which all arms, particularly the control arm, demonstrated both low risk behaviors and low HIV incidence has important implications for policy and prevention programming. ClinicalTrials.gov NCT01689545.

  2. The effect of a sequential structure of practice for the training of perceptual-cognitive skills in tennis

    PubMed Central

    2017-01-01

    Objective Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest that this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a random order (non-sequential group). Results In the video-based retention test, the sequential group was significantly more accurate in their anticipatory judgments than the non-sequential group when the retention condition replicated the sequential structure. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group compared to the non-sequential group. Conclusion Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263

  3. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  4. Near infrared observations of S155: evidence of induced star formation?

    NASA Astrophysics Data System (ADS)

    Hunt, L. K.; Lisi, F.; Felli, M.; Tofani, G.

    At the interface of the giant molecular cloud Cepheus OB3, S155 represents one of the most interesting examples of a bright rim produced by the ionization of a nearby O-star. The interaction between the ionized HII region S155 and the hot molecular core Cepheus B may constitute the ideal site for new stars, according to the sequential star-formation theory. Past observations of molecular lines have shown evidence of a hot spot in the cloud core, probably a compact region associated with a young stellar object. New J,H,K images recently obtained with the ARNICA array at the TIRGO telescope give evidence of stars with strong near-infrared excess, which must represent the newest generation of young stars.

  5. Social anxiety in Cornelia de Lange syndrome.

    PubMed

    Richards, Caroline; Moss, Jo; O'Farrell, Laura; Kaur, Gurmeash; Oliver, Chris

    2009-08-01

    In this study we assessed the behavioral presentation of social anxiety in Cornelia de Lange syndrome (CdLS) using a contrast group of Cri du Chat syndrome (CdCS). Behaviors indicative of social anxiety were recorded in twelve children with CdLS (mean age = 11.00; SD = 5.15) and twelve children with CdCS (mean age = 8.20; SD = 2.86) during social interaction. Lag sequential analysis revealed that participants with CdLS were significantly more likely to evidence behavior indicative of anxiety in close temporal proximity to the point at which they maintained eye contact or spoke. Individuals with CdLS demonstrate a heightened probability of anxiety related behavior during social interaction but only at the point at which social demand is high.
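
    Lag sequential analysis of the kind used in this study estimates the probability that one coded behavior follows another within an observation stream. A minimal sketch of the lag-1 computation (the event codes and data below are hypothetical illustrations, not the study's coding scheme):

```python
from collections import Counter

def lag1_transition_probs(events):
    """Estimate lag-1 transition probabilities P(next = b | current = a)
    from a coded behavior stream."""
    pairs = Counter(zip(events, events[1:]))   # count adjacent (a, b) pairs
    totals = Counter(events[:-1])              # count occurrences of each antecedent
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

# Hypothetical coded stream: E = eye contact, A = anxiety-related behavior,
# N = neutral behavior.
stream = ["N", "E", "A", "N", "E", "A", "E", "A", "N", "N"]
probs = lag1_transition_probs(stream)
print(probs[("E", "A")])  # proportion of eye-contact events followed by anxiety
```

    In practice, observed transition probabilities are compared against chance expectations (e.g., via z-scores) before a sequential association is claimed.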

  6. Effect of glutathione on phytochelatin synthesis in tomato cells.

    PubMed

    Mendum, M L; Gupta, S C; Goldsbrough, P B

    1990-06-01

    Growth of cell suspension cultures of tomato, Lycopersicon esculentum Mill. cv VFNT-Cherry, in the presence of cadmium is inhibited by buthionine sulfoximine, an inhibitor of glutathione synthesis. Cell growth and phytochelatin synthesis are restored to cells treated with buthionine sulfoximine by the addition of glutathione to the medium. Glutathione stimulates the accumulation of phytochelatins in cadmium treated cells, indicating that availability of glutathione can limit synthesis of these peptides. Exogenous glutathione causes a disproportionate increase in the level of smaller phytochelatins, notably [gamma-Glu-Cys](2)-Gly. In the presence of buthionine sulfoximine and glutathione, phytochelatins that are produced upon exposure to cadmium incorporate little [(35)S]cysteine, indicating that these peptides are probably not synthesized by sequential addition of cysteine and glutamate to glutathione.

  7. The human brain processes repeated auditory feature conjunctions of low sequential probability.

    PubMed

    Ruusuvirta, Timo; Huotilainen, Minna

    2004-01-23

    The human brain is known to preattentively trace repeated sounds as holistic entities. It is not clear, however, whether the same holds true if these sounds are rare among other repeated sounds. Adult humans passively listened to a repeated tone with frequent (standard) and rare (deviant) conjunctions of its three features. Six equiprobable variants per conjunction type were assigned from a space built from these features so that the standard variants (P=0.15 each) were not inseparably traceable by means of their linear alignment in this space. Differential scalp-recorded event-related potentials to deviants indicate that the standard variants were traced as repeated wholes despite their preperceptual distinctiveness and resulting rarity among one another.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemmer, D.E.; Kumar, N.V.; Metrione, R.M.

    Toxin II from Radianthus paumotensis (Rp II) has been investigated by high-resolution NMR and chemical sequencing methods. Resonance assignments have been obtained for this protein by the sequential approach. NMR assignments could not be made consistent with the previously reported primary sequence for this protein, and chemical methods have been used to determine a sequence with which the NMR data are consistent. Analysis of the 2D NOE spectra shows that the protein secondary structure is comprised of two sequences of β-sheet, probably joined into a distorted continuous sheet, connected by turns and extended loops, without any regular α-helical segments. The residues previously implicated in activity in this class of proteins, D8 and R13, occur in a loop region.

  9. Multiple forms of statherin in human salivary secretions.

    PubMed

    Jensen, J L; Lamkin, M S; Troxler, R F; Oppenheim, F G

    1991-01-01

    Sequential chromatography of hydroxyapatite-adsorbed salivary proteins from submandibular/sublingual secretions on Sephadex G-50 and reversed-phase HPLC resulted in the purification of statherin and several statherin variants. Amino acid analysis, Edman degradation and carboxypeptidase digestion of the obtained protein fractions led to the determination of the complete primary structures of statherin SV1, statherin SV2, and statherin SV3. SV1 is identical to statherin but lacks the carboxyl-terminal phenylalanine residue. SV2, lacking residues 6-15, is otherwise identical to statherin. SV3 is identical to SV2 but lacks the carboxyl-terminal phenylalanine. These results provide the first evidence for multiple forms of statherin which are probably derived both by post-translational modification and alternative splicing of the statherin gene.

  10. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I.

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.
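
    The generalized Lotka-Volterra competition dynamics discussed above can be explored numerically in a few lines. The growth rates and competition matrix below are illustrative placeholders, not the paper's parameters:

```python
import numpy as np

def glv_step(x, r, A, dt):
    """One Euler step of the generalized Lotka-Volterra model
    dx_i/dt = x_i * (r_i - sum_j A_ij x_j)."""
    return x + dt * x * (r - A @ x)

r = np.array([1.0, 1.0, 1.0])
# Asymmetric competition matrix of the kind used in winnerless-competition
# setups (values are illustrative, not from the paper).
A = np.array([[1.0, 1.5, 0.5],
              [0.5, 1.0, 1.5],
              [1.5, 0.5, 1.0]])
x = np.array([0.6, 0.3, 0.1])
for _ in range(2000):
    x = glv_step(x, r, A, dt=0.01)
print(x)  # state after the transient (values depend on dt and A)
```

    Asymmetric competition matrices of this kind can produce the long, reproducible transient sequences of near-single-species states that the paper analyzes.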

  11. Winnerless competition principle and prediction of the transient dynamics in a Lotka-Volterra model.

    PubMed

    Afraimovich, Valentin; Tristan, Irma; Huerta, Ramon; Rabinovich, Mikhail I

    2008-12-01

    Predicting the evolution of multispecies ecological systems is an intriguing problem. A sufficiently complex model with the necessary predicting power requires solutions that are structurally stable. Small variations of the system parameters should not qualitatively perturb its solutions. When one is interested in just asymptotic results of evolution (as time goes to infinity), then the problem has a straightforward mathematical image involving simple attractors (fixed points or limit cycles) of a dynamical system. However, for an accurate prediction of evolution, the analysis of transient solutions is critical. In this paper, in the framework of the traditional Lotka-Volterra model (generalized in some sense), we show that the transient solution representing multispecies sequential competition can be reproducible and predictable with high probability.

  12. System For Surveillance Of Spectral Signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2004-10-12

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
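
    The SPRT methodology named in these patents is Wald's classical sequential test. A minimal sketch for the common case of deciding between two Gaussian means with known variance (all parameters below are illustrative, not from the patent):

```python
import math

def sprt_gaussian(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT deciding between N(mu0, sigma^2) and N(mu1, sigma^2).
    Returns ('H0' | 'H1' | 'continue', cumulative log-likelihood ratio)."""
    a = math.log(beta / (1 - alpha))   # lower threshold: accept H0
    b = math.log((1 - beta) / alpha)   # upper threshold: accept H1
    llr = 0.0
    for x in samples:
        # Per-sample log-likelihood ratio for equal-variance Gaussians
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr <= a:
            return "H0", llr
        if llr >= b:
            return "H1", llr
    return "continue", llr

# Hypothetical residual signal fluctuating near zero (consistent with H0).
decision, llr = sprt_gaussian(
    [0.1, -0.2, 0.05, 0.0, -0.1, 0.1, 0.02, -0.05],
    mu0=0.0, mu1=1.0, sigma=0.5)
print(decision)
```

    The test stops as soon as the accumulated evidence crosses either threshold, which is what makes the SPRT attractive for continuous surveillance of signals.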

  13. System For Surveillance Of Spectral Signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2003-04-22

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.

  14. System for surveillance of spectral signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2006-02-14

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.

  15. System for surveillance of spectral signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2001-01-01

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.

  16. A versatile microprocessor-controlled hybrid receiver. [with firmware control for operation over large frequency uncertainty

    NASA Technical Reports Server (NTRS)

    Grant, T. L.

    1978-01-01

    A hybrid receiver has been designed for the Galileo Project. The receiver, located on the Galileo Orbiter, will autonomously acquire and track signals from the first atmospheric probe of Jupiter as well as demodulate, bit-synchronize, and buffer the telemetry data. The receiver has a conventional RF and IF front end but performs multiple functions digitally under firmware control. It will be a self-acquiring receiver that operates under a large frequency uncertainty; it can accommodate different modulation types, bit rates, and other parameter changes via reprogramming. A breadboard receiver and test set demonstrate a preliminary version of the sequential detection process and verify the hypothesis that a fading channel does not reduce the probability of detection.

  17. eHealth literacy and Web 2.0 health information seeking behaviors among baby boomers and older adults.

    PubMed

    Tennant, Bethany; Stellefson, Michael; Dodd, Virginia; Chaney, Beth; Chaney, Don; Paige, Samantha; Alber, Julia

    2015-03-17

    Baby boomers and older adults, a subset of the population at high risk for chronic disease, social isolation, and poor health outcomes, are increasingly utilizing the Internet and social media (Web 2.0) to locate and evaluate health information. However, among these older populations, little is known about what factors influence their eHealth literacy and use of Web 2.0 for health information. The intent of the study was to explore the extent to which sociodemographic, social determinants, and electronic device use influences eHealth literacy and use of Web 2.0 for health information among baby boomers and older adults. A random sample of baby boomers and older adults (n=283, mean 67.46 years, SD 9.98) participated in a cross-sectional, telephone survey that included the eHealth literacy scale (eHEALS) and items from the Health Information National Trends Survey (HINTS) assessing electronic device use and use of Web 2.0 for health information. An independent samples t test compared eHealth literacy among users and non-users of Web 2.0 for health information. Multiple linear and logistic regression analyses were conducted to determine associations between sociodemographic, social determinants, and electronic device use on self-reported eHealth literacy and use of Web 2.0 for seeking and sharing health information. Almost 90% of older Web 2.0 users (90/101, 89.1%) reported using popular Web 2.0 websites, such as Facebook and Twitter, to find and share health information. Respondents reporting use of Web 2.0 reported greater eHealth literacy (mean 30.38, SD 5.45, n=101) than those who did not use Web 2.0 (mean 28.31, SD 5.79, n=182), t217.60=-2.98, P=.003. Younger age (b=-0.10), more education (b=0.48), and use of more electronic devices (b=1.26) were significantly associated with greater eHealth literacy (R(2) =.17, R(2)adj =.14, F9,229=5.277, P<.001). 
Women were nearly three times more likely than men to use Web 2.0 for health information (OR 2.63, Wald= 8.09, df=1, P=.004). Finally, more education predicted greater use of Web 2.0 for health information, with college graduates (OR 2.57, Wald= 3.86, df =1, P=.049) and post graduates (OR 7.105, Wald= 4.278, df=1, P=.04) nearly 2 to 7 times more likely than non-high school graduates to use Web 2.0 for health information. Being younger and possessing more education was associated with greater eHealth literacy among baby boomers and older adults. Females and those highly educated, particularly at the post graduate level, reported greater use of Web 2.0 for health information. More in-depth surveys and interviews among more diverse groups of baby boomers and older adult populations will likely yield a better understanding regarding how current Web-based health information seeking and sharing behaviors influence health-related decision making.
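
    The Wald statistics reported above test whether a logistic regression coefficient differs from zero: the squared ratio of the coefficient estimate to its standard error, compared against a chi-square distribution. A minimal illustration (the standard error below is a hypothetical value back-calculated to be consistent with the reported OR and Wald statistic, not taken from the study's output):

```python
import math

def wald_chi2(beta_hat, se):
    """Wald chi-square statistic (1 df) for H0: beta = 0."""
    return (beta_hat / se) ** 2

# Coefficient for 'female' on the log-odds of using Web 2.0 for health
# information; OR = exp(beta), so beta = log(OR).
beta = math.log(2.63)   # log odds ratio from the reported OR
se = 0.34               # hypothetical standard error
w = wald_chi2(beta, se)
print(round(w, 2))  # ≈ 8.09, in line with the reported Wald = 8.09 (df = 1)
```

    A Wald chi-square of 8.09 on 1 df exceeds the .05 critical value of 3.84, matching the reported P = .004.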

  18. eHealth Literacy and Web 2.0 Health Information Seeking Behaviors Among Baby Boomers and Older Adults

    PubMed Central

    Tennant, Bethany; Dodd, Virginia; Chaney, Beth; Chaney, Don; Paige, Samantha; Alber, Julia

    2015-01-01

    Background Baby boomers and older adults, a subset of the population at high risk for chronic disease, social isolation, and poor health outcomes, are increasingly utilizing the Internet and social media (Web 2.0) to locate and evaluate health information. However, among these older populations, little is known about what factors influence their eHealth literacy and use of Web 2.0 for health information. Objective The intent of the study was to explore the extent to which sociodemographic, social determinants, and electronic device use influences eHealth literacy and use of Web 2.0 for health information among baby boomers and older adults. Methods A random sample of baby boomers and older adults (n=283, mean 67.46 years, SD 9.98) participated in a cross-sectional, telephone survey that included the eHealth literacy scale (eHEALS) and items from the Health Information National Trends Survey (HINTS) assessing electronic device use and use of Web 2.0 for health information. An independent samples t test compared eHealth literacy among users and non-users of Web 2.0 for health information. Multiple linear and logistic regression analyses were conducted to determine associations between sociodemographic, social determinants, and electronic device use on self-reported eHealth literacy and use of Web 2.0 for seeking and sharing health information. Results Almost 90% of older Web 2.0 users (90/101, 89.1%) reported using popular Web 2.0 websites, such as Facebook and Twitter, to find and share health information. Respondents reporting use of Web 2.0 reported greater eHealth literacy (mean 30.38, SD 5.45, n=101) than those who did not use Web 2.0 (mean 28.31, SD 5.79, n=182), t 217.60=−2.98, P=.003. Younger age (b=−0.10), more education (b=0.48), and use of more electronic devices (b=1.26) were significantly associated with greater eHealth literacy (R 2 =.17, R 2adj =.14, F9,229=5.277, P<.001). 
Women were nearly three times more likely than men to use Web 2.0 for health information (OR 2.63, Wald= 8.09, df=1, P=.004). Finally, more education predicted greater use of Web 2.0 for health information, with college graduates (OR 2.57, Wald= 3.86, df =1, P=.049) and post graduates (OR 7.105, Wald= 4.278, df=1, P=.04) nearly 2 to 7 times more likely than non-high school graduates to use Web 2.0 for health information. Conclusions Being younger and possessing more education was associated with greater eHealth literacy among baby boomers and older adults. Females and those highly educated, particularly at the post graduate level, reported greater use of Web 2.0 for health information. More in-depth surveys and interviews among more diverse groups of baby boomers and older adult populations will likely yield a better understanding regarding how current Web-based health information seeking and sharing behaviors influence health-related decision making. PMID:25783036

  19. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be used to sequentially find the minimum weight error sequence.

  20. Robustness of the sequential lineup advantage.

    PubMed

    Gronlund, Scott D; Carlson, Curt A; Dailey, Sarah B; Goodsell, Charles A

    2009-06-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup advantages and 3 significant simultaneous advantages. Both sequential advantages occurred when the good photograph of the guilty suspect or either innocent suspect was in the fifth position in the sequential lineup; all 3 simultaneous advantages occurred when the poorer quality photograph of the guilty suspect or either innocent suspect was in the second position. Adjusting the statistical criterion to control for the multiple tests (.05/24) revealed no significant sequential advantages. Moreover, despite finding more conservative overall choosing for the sequential lineup, no support was found for the proposal that a sequential advantage was due to that conservative criterion shift. Unless lineups with particular characteristics predominate in the real world, there appears to be no strong preference for conducting lineups in either a sequential or a simultaneous manner. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
