Whiley, Phillip J.; Parsons, Michael T.; Leary, Jennifer; Tucker, Kathy; Warwick, Linda; Dopita, Belinda; Thorne, Heather; Lakhani, Sunil R.; Goldgar, David E.; Brown, Melissa A.; Spurdle, Amanda B.
2014-01-01
Rare exonic, non-truncating variants in known cancer susceptibility genes such as BRCA1 and BRCA2 are problematic for genetic counseling and clinical management of relevant families. This study used multifactorial likelihood analysis and/or bioinformatically-directed mRNA assays to assess pathogenicity of 19 BRCA1 or BRCA2 variants identified following patient referral to clinical genetic services. Two variants were considered to be pathogenic (Class 5). BRCA1:c.4484G>C (p.Arg1495Thr) was shown to result in aberrant mRNA transcripts predicted to encode truncated proteins. The BRCA1:c.122A>G (p.His41Arg) RING-domain variant was found from multifactorial likelihood analysis to have a posterior probability of pathogenicity of 0.995, a result consistent with existing protein functional assay data indicating lost BARD1 binding and ubiquitin ligase activity. Of the remaining variants, seven were determined to be not clinically significant (Class 1), nine were likely not pathogenic (Class 2), and one was uncertain (Class 3). These results have implications for genetic counseling and medical management of families carrying these specific variants. They also provide additional multifactorial likelihood variant classifications as a reference to evaluate the sensitivity and specificity of bioinformatic prediction tools and/or functional assay data in future studies. PMID:24489791
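A worked sketch of the Bayesian update underlying multifactorial likelihood classification: a prior probability of pathogenicity is converted to odds, multiplied by likelihood ratios from independent data sources (e.g. segregation, tumour pathology, co-occurrence), and converted back to a posterior probability. The prior and likelihood-ratio values below are illustrative placeholders, not figures from the study.

```python
def posterior_probability(prior, likelihood_ratios):
    """Combine a prior probability with independent likelihood ratios
    via Bayes' theorem on the odds scale."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Illustrative values only: prior from bioinformatic prediction,
# LRs from segregation analysis and tumour immunohistochemistry.
print(posterior_probability(0.5, [30.0, 4.0]))   # ~0.992
```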
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth T S; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer.
Lovelock, Paul K; Spurdle, Amanda B; Mok, Myth TS; Farrugia, Daniel J; Lakhani, Sunil R; Healey, Sue; Arnold, Stephen; Buchanan, Daniel; Investigators, kConFab; Couch, Fergus J; Henderson, Beric R; Goldgar, David E; Tavtigian, Sean V; Chenevix-Trench, Georgia; Brown, Melissa A
2007-01-01
Introduction Many of the DNA sequence variants identified in the breast cancer susceptibility gene BRCA1 remain unclassified in terms of their potential pathogenicity. Both multifactorial likelihood analysis and functional approaches have been proposed as a means to elucidate likely clinical significance of such variants, but analysis of the comparative value of these methods for classifying all sequence variants has been limited. Methods We have compared the results from multifactorial likelihood analysis with those from several functional analyses for the four BRCA1 sequence variants A1708E, G1738R, R1699Q, and A1708V. Results Our results show that multifactorial likelihood analysis, which incorporates sequence conservation, co-inheritance, segregation, and tumour immunohistochemical analysis, may improve classification of variants. For A1708E, previously shown to be functionally compromised, analysis of oestrogen receptor, cytokeratin 5/6, and cytokeratin 14 tumour expression data significantly strengthened the prediction of pathogenicity, giving a posterior probability of pathogenicity of 99%. For G1738R, shown to be functionally defective in this study, immunohistochemistry analysis confirmed previous findings of inconsistent 'BRCA1-like' phenotypes for the two tumours studied, and the posterior probability for this variant was 96%. The posterior probabilities of R1699Q and A1708V were 54% and 69%, respectively, only moderately suggestive of increased risk. Interestingly, results from functional analyses suggest that both of these variants have only partial functional activity. R1699Q was defective in foci formation in response to DNA damage and displayed intermediate transcriptional transactivation activity but showed no evidence for centrosome amplification. In contrast, A1708V displayed an intermediate transcriptional transactivation activity and a normal foci formation response in response to DNA damage but induced centrosome amplification. Conclusion These data highlight the need for a range of functional studies to be performed in order to identify variants with partially compromised function. The results also raise the possibility that A1708V and R1699Q may be associated with a low or moderate risk of cancer. While data pooling strategies may provide more information for multifactorial analysis to improve the interpretation of the clinical significance of these variants, it is likely that the development of current multifactorial likelihood approaches and the consideration of alternative statistical approaches will be needed to determine whether these individually rare variants do confer a low or moderate risk of breast cancer. PMID:18036263
Falls prevention in the elderly: translating evidence into practice.
Luk, James K H; Chan, T Y; Chan, Daniel K Y
2015-04-01
Falls are a common problem in the elderly. A common error in their management is treating the injury from the fall without finding its cause. Thus, a proactive approach to screening for fall risk in the elderly is important. Fall assessment usually includes a focused history and a targeted examination. The timed up-and-go test can be performed quickly and is able to predict the likelihood of falling. Evidence-based fall prevention interventions include multi-component group or home-based exercises, participation in Tai Chi, environmental modifications, medication review, management of foot and footwear problems, vitamin D supplementation, and management of cardiovascular problems. If possible, these are best implemented in the form of a multifactorial intervention. Bone health enhancement for residential care home residents and appropriate community patients, and prescription of hip protectors for residential care home residents are also recommended. Multifactorial intervention may also be useful in hospital and residential care home settings. Use of physical restraints is not recommended for fall prevention.
Seligman, D A; Pullinger, A G
2006-11-01
To determine whether patients with temporomandibular joint disease or masticatory muscle pain can be usefully differentiated from asymptomatic controls using multifactorial classification tree models of attrition severity and/or rates. Measures of attrition severity and rates in patients diagnosed with disc displacement (n = 52), osteoarthrosis (n = 74), or masticatory muscle pain only (n = 43) were compared against those in asymptomatic controls (n = 132). Cross-validated classification tree models were tested for fit with sensitivity, specificity, accuracy and log likelihood accountability. The model for identifying asymptomatic controls only required the three measures of attrition severity (anterior, mediotrusive and laterotrusive posterior) to be differentiated from the patients with a 74.2 +/- 3.8% cross-validation accuracy. This compared with cross-validation accuracies of 69.7 +/- 3.7% for differentiating disc displacement using anterior and laterotrusive attrition severity, 68.7 +/- 3.9% for differentiating disc displacement using anterior and laterotrusive attrition rates, 70.9 +/- 3.3% for differentiating osteoarthrosis using anterior attrition severity and rates, 94.6 +/- 2.1% for differentiating myofascial pain using mediotrusive and laterotrusive attrition severity, and 92.0 +/- 2.1% for differentiating myofascial pain using mediotrusive and anterior attrition rates. The myofascial pain models exceeded the > or =75% sensitivity and > or =90% specificity thresholds recommended for diagnostic tests, and the asymptomatic control model approached these thresholds. Multifactorial models using attrition severity and rates may differentiate masticatory muscle pain patients from asymptomatic controls, and have some predictive value for differentiating intracapsular temporomandibular disorder patients as well.
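A minimal sketch of a cross-validated classification tree of the kind described above, using scikit-learn on synthetic "attrition severity" features; the feature construction, labels, and resulting accuracy are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical attrition-severity scores (anterior, mediotrusive, laterotrusive)
X = rng.integers(0, 5, size=(n, 3)).astype(float)
# Hypothetical labels: 1 = patient, 0 = asymptomatic control
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n) > 3).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(tree, X, y, cv=10)            # 10-fold cross-validation
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```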
Factors associated with participation in physical activity among adolescents in Malaysia.
Cheah, Yong Kang; Lim, Hock Kuang; Kee, Chee Cheong; Ghazali, Sumarni Mohd
2016-11-01
The rising prevalence of non-communicable diseases (NCDs) has become a serious public health issue. Among the multi-factorial drivers behind NCDs are modifiable health risk factors, most notably physical inactivity. In response to the nearly global policy priority of encouraging regular participation in physical activity, the objective of the present study is to examine the factors that determine participation in physical activity among Malaysian adolescents. Nationally representative data with a large sample size were used. A censored regression model was developed to estimate the likelihood of participation in, and time spent on, physical activity. There are significant relationships between physical activity and gender, ethnicity, self-rated academic performance, maternal education, household size and time spent on physical education. The present study provides new insights into the factors affecting physical activity participation among adolescents. Specifically, self-rated excellent academic performance, household size and physical education can increase the likelihood of being physically active. Evidence from the present study implies that policy makers should pay special attention to females, Chinese adolescents, adolescents with self-rated poor academic performance and adolescents who have low maternal education.
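The censored regression referred to above is typically a Tobit-type model: activity time is observed only when positive and piles up at zero otherwise. A minimal log-likelihood sketch on simulated data with an illustrative covariate (not the survey's variables) is:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                        # illustrative covariate (e.g. hours of PE classes)
y_star = 0.5 + 1.2 * x + rng.normal(0, 1, n)  # latent desired activity time
y = np.maximum(y_star, 0.0)                   # observed time, left-censored at zero

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    censored = y <= 0
    ll = np.where(
        censored,
        stats.norm.logcdf(-mu / sigma),             # P(latent time <= 0)
        stats.norm.logpdf(y, loc=mu, scale=sigma),  # density of the observed time
    )
    return -ll.sum()

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
print(fit.x)  # estimates of intercept, slope, log(sigma)
```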
Universal etiology, multifactorial diseases and the constitutive model of disease classification.
Fuller, Jonathan
2018-02-01
Infectious diseases are often said to have a universal etiology, while chronic and noncommunicable diseases are said to be multifactorial in their etiology. It has been argued that the universal etiology of an infectious disease results from its classification using a monocausal disease model. In this article, I will reconstruct the monocausal model and argue that modern 'multifactorial diseases' are not monocausal by definition. 'Multifactorial diseases' are instead defined according to a constitutive disease model. On closer analysis, infectious diseases are also defined using the constitutive model rather than the monocausal model. As a result, our classification models alone cannot explain why infectious diseases have a universal etiology while chronic and noncommunicable diseases lack one. The explanation is instead provided by the Nineteenth Century germ theorists. Copyright © 2017 Elsevier Ltd. All rights reserved.
Transgender populations and HIV: unique risks, challenges and opportunities.
Wansom, Tanyaporn; Guadamuz, Thomas E; Vasan, Sandhya
2016-04-01
Due to unique social, behavioural, structural and biological issues, transgender (TG) populations, especially TG women, are at high risk for HIV acquisition. This increased risk is multifactorial, due to differing psychosocial risk factors, poorer access to TG-specific healthcare, a higher likelihood of using exogenous hormones or fillers without direct medical supervision, interactions between hormonal therapy and antiretroviral therapy, and direct effects of hormonal therapy on HIV acquisition and immune control. Further research is needed to elucidate these mechanisms of risk and to help design interventions to reduce HIV risk among transgender populations.
Multifactorial disease risk calculator: Risk prediction for multifactorial disease pedigrees.
Campbell, Desmond D; Li, Yiming; Sham, Pak C
2018-03-01
Construction of multifactorial disease models from epidemiological findings and their application to disease pedigrees for risk prediction is nontrivial for all but the simplest of cases. Multifactorial Disease Risk Calculator is a web tool facilitating this. It provides a user-friendly interface, extending a reported methodology based on a liability-threshold model. Multifactorial disease models incorporating all the following features in combination are handled: quantitative risk factors (including polygenic scores), categorical risk factors (including major genetic risk loci), stratified age of onset curves, and the partition of the population variance in disease liability into genetic, shared, and unique environment effects. It allows the application of such models to disease pedigrees. Pedigree-related outputs are (i) individual disease risk for pedigree members, (ii) n year risk for unaffected pedigree members, and (iii) the disease pedigree's joint liability distribution. Risk prediction for each pedigree member is based on using the constructed disease model to appropriately weigh evidence on disease risk available from personal attributes and family history. Evidence is used to construct the disease pedigree's joint liability distribution. From this, lifetime and n year risk can be predicted. Example disease models and pedigrees are provided at the website and are used in accompanying tutorials to illustrate the features available. The website is built on an R package which provides the functionality for pedigree validation, disease model construction, and risk prediction. Website: http://grass.cgs.hku.hk:3838/mdrc/current. © 2017 WILEY PERIODICALS, INC.
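A minimal sketch of the liability-threshold calculation at the core of such a tool: disease liability is standard normal in the population, the threshold is set by lifetime prevalence, and an individual's risk is the probability that residual liability exceeds the threshold given the liability already explained by known risk factors. All numerical values are illustrative, and this omits the pedigree and age-of-onset machinery of the web tool.

```python
from scipy import stats

def liability_threshold_risk(prevalence, explained_liability, explained_variance):
    """Probability of disease under a liability-threshold model.

    prevalence          : lifetime population prevalence of the disease
    explained_liability : individual's liability from known factors (on the
                          standard-normal population liability scale)
    explained_variance  : proportion of liability variance those factors explain
    """
    threshold = stats.norm.isf(prevalence)            # liability threshold
    residual_sd = (1.0 - explained_variance) ** 0.5
    return stats.norm.sf(threshold, loc=explained_liability, scale=residual_sd)

# Illustrative: 1% prevalence, individual one SD above average on known factors
# that explain 30% of the liability variance.
print(liability_threshold_risk(0.01, 1.0 * 0.30 ** 0.5, 0.30))
```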
Mahmood, Eitezaz; Matyal, Robina; Mueller, Ariel; Mahmood, Feroze; Tung, Avery; Montealegre-Gallegos, Mario; Schermerhorn, Marc; Shahul, Sajid
2018-03-01
In some institutions, the current blood ordering practice does not discriminate minimally invasive endovascular aneurysm repair (EVAR) from open procedures, with a consequent increase in costs and in the likelihood of blood product wastage for EVARs. This limitation in practice can possibly be addressed with the development of a reliable prediction model for transfusion risk in EVAR patients. We used the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) database to create a model for prediction of intraoperative blood transfusion occurrence in patients undergoing EVAR. Afterward, we tested our predictive model on the Vascular Study Group of New England (VSGNE) database. We used the ACS NSQIP database for patients who underwent EVAR from 2011 to 2013 (N = 4709) as our derivation set for identifying a risk index for predicting intraoperative blood transfusion. We then developed a clinical risk score and validated this model using patients who underwent EVAR from 2003 to 2014 in the VSGNE database (N = 4478). The transfusion rates were 8.4% and 6.1% for the ACS NSQIP (derivation set) and VSGNE (validation) databases, respectively. Hemoglobin concentration, American Society of Anesthesiologists class, age, and aneurysm diameter predicted blood transfusion in the derivation set. Our risk index demonstrated good discrimination in both the derivation and validation sets (C statistic = 0.73 and 0.70, respectively) and good calibration by the Hosmer-Lemeshow test (P = .27 and .31) for both data sets. We developed and validated a risk index for predicting the likelihood of intraoperative blood transfusion in EVAR patients. Implementation of this index may facilitate blood management strategies specific to EVAR. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
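A minimal sketch of deriving and validating a transfusion-risk model of this kind: fit a logistic regression on a derivation set and report discrimination (the C statistic, i.e. ROC AUC) on a validation set. The data and coefficients below are simulated stand-ins, not the NSQIP/VSGNE variables or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 4000
hemoglobin = rng.normal(13, 2, n)              # g/dL
age = rng.normal(72, 9, n)                     # years
aneurysm_diam = rng.normal(55, 10, n)          # mm
logit = -3 - 0.4 * (hemoglobin - 13) + 0.03 * (age - 72) + 0.04 * (aneurysm_diam - 55)
transfused = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([hemoglobin, age, aneurysm_diam])
X_dev, X_val, y_dev, y_val = train_test_split(X, transfused, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)      # derivation set
c_statistic = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])  # validation set
print(f"validation C statistic: {c_statistic:.2f}")
```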
Cloninger, C R; Rice, J; Reich, T
1979-01-01
A general linear model of combined polygenic-cultural inheritance is described. The model allows for phenotypic assortative mating, common environment, maternal and paternal effects, and genic-cultural correlation. General formulae for phenotypic correlation between family members in extended pedigrees are given for both primary and secondary assortative mating. A FORTRAN program BETA, available upon request, is used to provide maximum likelihood estimates of the parameters from reported correlations. American data about IQ and Burks' culture index are analyzed. Both cultural and genetic components of phenotypic variance are observed to make significant and substantial contributions to familial resemblance in IQ. The correlation between the environments of DZ twins is found to equal that of singleton sibs, not that of MZ twins. Burks' culture index is found to be an imperfect measure of midparent IQ rather than an index of home environment as previously assumed. Conditions under which the parameters of the model may be uniquely and precisely estimated are discussed. Interpretation of variance components in the presence of assortative mating and genic-cultural covariance is reviewed. A conservative, but robust, approach to the use of environmental indices is described. PMID:453202
ERIC Educational Resources Information Center
Wallen, Eva Flygare; Mullerdorf, Maria; Christensson, Kyllike; Marcus, Claude
2013-01-01
Adolescents with intellectual disabilities (ID) have an increased prevalence of being overweight and having cardiometabolic diseases as adults, in part due to poor eating habits with an inadequate intake of vegetables. The aim of this study was to evaluate whether a multifactorial school intervention using the "Plate Model" results in…
Hamilton, Jada G; Waters, Erika A
2018-02-01
People who believe that cancer has both genetic and behavioral risk factors have more accurate mental models of cancer causation and may be more likely to engage in cancer screening behaviors than people who do not hold such multifactorial causal beliefs. This research explored possible health cognitions and emotions that might produce such differences. Using nationally representative cross-sectional data from the US Health Information National Trends Survey (N = 2719), we examined whether endorsing a multifactorial model of cancer causation was associated with perceptions of risk and other cancer-related cognitions and affect. Data were analyzed using linear regression with jackknife variance estimation and procedures to account for the complex survey design and weightings. Bivariate and multivariable analyses indicated that people who endorsed multifactorial beliefs about cancer had higher absolute risk perceptions, lower pessimism about cancer prevention, and higher worry about harm from environmental toxins that could be ingested or that emanate from consumer products (Ps < .05). Bivariate analyses indicated that multifactorial beliefs were also associated with higher feelings of risk, but multivariable analyses suggested that this effect was accounted for by the negative affect associated with reporting a family history of cancer. Multifactorial beliefs were not associated with believing that everything causes cancer or that there are too many cancer recommendations to follow (Ps > .05). Holding multifactorial causal beliefs about cancer is associated with a constellation of risk perceptions, health cognitions, and affect that may motivate cancer prevention and detection behavior. Copyright © 2017 John Wiley & Sons, Ltd.
Multifactorial modelling of high-temperature treatment of timber in the saturated water steam medium
NASA Astrophysics Data System (ADS)
Prosvirnikov, D. B.; Safin, R. G.; Ziatdinova, D. F.; Timerbaev, N. F.; Lashkov, V. A.
2016-04-01
The paper analyses experimental data obtained in studies of high-temperature treatment of softwood and hardwood in an environment of saturated water steam. Data were processed in the Curve Expert software for the purpose of statistical modelling of the processes and phenomena occurring during this treatment. The multifactorial modelling resulted in empirical dependences that allow the main parameters of this type of hydrothermal treatment to be determined with high accuracy.
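A minimal sketch of the kind of empirical curve fitting described (performed in Curve Expert in the study); here scipy.optimize.curve_fit fits an assumed exponential-approach law to simulated treatment data. The functional form, time scale, and values are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated observations: a treatment parameter approaching an asymptote over time.
rng = np.random.default_rng(3)
t = np.linspace(0, 120, 25)                       # treatment time, min (illustrative)
y = 40 * (1 - np.exp(-t / 35)) + rng.normal(0, 1.5, t.size)

def model(t, a, tau):
    # assumed empirical dependence: exponential approach to asymptote a
    return a * (1 - np.exp(-t / tau))

params, cov = curve_fit(model, t, y, p0=[30, 20])
print("fitted asymptote and time constant:", params)
```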
Iturria-Medina, Yasser; Carbonell, Félix M; Sotero, Roberto C; Chouinard-Decorte, Francois; Evans, Alan C
2017-05-15
Generative models focused on multifactorial causal mechanisms in brain disorders are scarce and generally based on limited data. Despite the biological importance of the multiple interacting processes, their effects remain poorly characterized from an integrative analytic perspective. Here, we propose a spatiotemporal multifactorial causal model (MCM) of brain (dis)organization and therapeutic intervention that accounts for local causal interactions, effects propagation via physical brain networks, cognitive alterations, and identification of optimum therapeutic interventions. In this article, we focus on describing the model and applying it at the population-based level for studying late onset Alzheimer's disease (LOAD). By interrelating six different neuroimaging modalities and cognitive measurements, this model accurately predicts spatiotemporal alterations in brain amyloid-β (Aβ) burden, glucose metabolism, vascular flow, resting state functional activity, structural properties, and cognitive integrity. The results suggest that a vascular dysregulation may be the most-likely initial pathologic event leading to LOAD. Nevertheless, they also suggest that LOAD is not caused by a unique dominant biological factor (e.g. vascular or Aβ) but by the complex interplay among multiple relevant direct interactions. Furthermore, using theoretical control analysis of the identified population-based multifactorial causal network, we show the crucial advantage of using combinatorial over single-target treatments, explain why one-target Aβ based therapies might fail to improve clinical outcomes, and propose an efficiency ranking of possible LOAD interventions. Although still requiring further validation at the individual level, this work presents the first analytic framework for dynamic multifactorial brain (dis)organization that may both explain the pathologic evolution of progressive neurological disorders and operationalize the influence of multiple interventional strategies. Copyright © 2017 Elsevier Inc. All rights reserved.
Workarounds in the Workplace: A Second Look.
Seaman, Jennifer B; Erlen, Judith A
2015-01-01
Nursing workarounds have garnered increased attention over the past 15 years, corresponding with an increased focus on patient safety and evidence-based practice and a rise in the use of health information technologies (HITs). Workarounds have typically been viewed as deviations from best practice that put patients at risk for poor outcomes. However, this narrow view fails to take into consideration the multifactorial origins of workarounds. The authors explore the ways in which evidence-based protocols and HIT, designed to improve patient safety and quality, can have an unintended consequence of increasing the likelihood of nurses engaging in workarounds. The article also examines workarounds considering the ethical obligations of both nurses and administrative leaders to optimize patient safety and quality.
Hill, Keith D; Day, Lesley; Haines, Terry P
2014-01-01
Purpose To investigate previous, current, or planned participation in, and perceptions toward, multifactorial fall prevention programs such as those delivered through a falls clinic in the community setting, and to identify factors influencing older people’s intent to undertake these interventions. Design and methods Community-dwelling people aged >70 years completed a telephone survey. Participants were randomly selected from an electronic residential telephone listing, but purposeful sampling was used to include equal numbers with and without common chronic health conditions associated with fall-related hospitalization. The survey included scenarios for fall prevention interventions, including assessment/multifactorial interventions, such as those delivered through a falls clinic. Participants were asked about previous exposure to, or intent to participate in, the interventions. A path model analysis was used to identify factors associated with intent to participate in assessment/multifactorial interventions. Results Thirty of 376 participants (8.0%) reported exposure to a multifactorial falls clinic-type intervention in the past 5 years, and 16.0% expressed intention to undertake this intervention. Of the 132 participants who reported one or more falls in the past 12 months, over one-third were undecided or disagreed that a falls clinic type of intervention would be of benefit to them. Four elements from the theoretical model positively influenced intention to participate in the intervention: personal perception of intervention effectiveness, self-perceived risk of falls, self-perceived risk of injury, and inability to walk up/down steps without a handrail (P<0.05). Conclusion Multifactorial falls clinic-type interventions are not commonly accessed or considered as intended fall prevention approaches among community-dwelling older people, even among those with falls in the past 12 months. Factors identified as influencing intention to undertake these interventions may be useful in promoting or targeting these interventions. PMID:25473276
Aldrin, Magne; Raastad, Ragnhild; Tvete, Ingunn Fride; Berild, Dag; Frigessi, Arnoldo; Leegaard, Truls; Monnet, Dominique L; Walberg, Mette; Müller, Fredrik
2013-04-15
Association between previous antibiotic use and emergence of antibiotic resistance has been reported for several microorganisms. The relationship has been extensively studied, and although the causes of antibiotic resistance are multi-factorial, clear evidence of antibiotic use as a major risk factor exists. Most studies are carried out in countries with high consumption of antibiotics and corresponding high levels of antibiotic resistance, and currently, little is known about whether, and at what level, the associations are detectable in a low antibiotic consumption environment. We conduct an ecological, retrospective study aimed at determining the impact of antibiotic consumption on antibiotic-resistant Pseudomonas aeruginosa in three hospitals in Norway, a country with low levels of antibiotic use. We construct a sophisticated statistical model to capture such low signals. To reduce noise, we conduct our study at hospital ward level. We propose a random effect Poisson or binomial regression model, with a reparametrisation that allows us to reduce the number of parameters. Inference is likelihood based. Through scenario simulation, we study the potential effects of reduced or increased antibiotic use. Results clearly indicate that the effects of consumption on resistance are present under conditions with relatively low use of antibiotic agents. This strengthens the recommendation on prudent use of antibiotics, even when consumption is relatively low. Copyright © 2012 John Wiley & Sons, Ltd.
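A simplified sketch in the spirit of the model described: counts of resistant isolates per ward-month regressed on lagged antibiotic consumption with a Poisson GLM and an offset for the number of isolates tested. This omits the random ward effects and reparametrisation of the full model; the data and coefficients are simulated assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 240                                        # ward-months (illustrative)
use_lagged = rng.gamma(2.0, 5.0, n)            # prior-period antibiotic use (DDD/100 bed-days)
tests = rng.integers(20, 60, n)                # isolates tested per ward-month
rate = np.exp(-3.5 + 0.03 * use_lagged)        # assumed per-isolate resistance rate
resistant = rng.poisson(rate * tests)          # resistant isolate counts

X = sm.add_constant(use_lagged)
fit = sm.GLM(resistant, X, family=sm.families.Poisson(),
             offset=np.log(tests)).fit()
print(fit.summary())
```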
Bomer, Ilanit; Saure, Carola; Caminiti, Carolina; Ramos, Javier Gonzales; Zuccaro, Graciela; Brea, Mercedes; Bravo, Mónica; Maza, Carmen
2015-11-01
Craniopharyngioma is a histologically benign brain malformation with a fundamental role in satiety modulation, causing obesity in up to 52% of patients. To evaluate cardiovascular risk factors, body composition, resting energy expenditure (REE), and energy intake in craniopharyngioma patients and to compare the data with those from children with multifactorial obesity. All obese children and adolescents who underwent craniopharyngioma resection and a control group of children with multifactorial obesity in follow-up between May 2012 and April 2013. Anthropometric measurements, bioelectrical impedance, indirect calorimetry, energy intake, homeostatic model assessment insulin resistance (HOMA-IR), and dyslipidemia were evaluated. Twenty-three patients with craniopharyngioma and 43 controls were included. Children with craniopharyngioma-related obesity had a lower fat-free mass percentage (62.4 vs. 67.5; p=0.01) and a higher fat mass percentage (37.5 vs. 32.5; p=0.01) compared to those with multifactorial obesity. A positive association was found between %REE and %fat-free mass in subjects with multifactorial obesity (68±1% in normal REE vs. 62.6±1% in low REE; p=0.04), but not in craniopharyngioma patients (62±2.7 in normal REE vs. 61.2±1.8% in low REE; p=0.8). No differences were found in metabolic involvement or energy intake. REE was lower in craniopharyngioma patients compared to children with multifactorial obesity regardless of the amount of fat-free mass, suggesting that other factors may be responsible for the lower REE.
Iturria-Medina, Yasser; Sotero, Roberto C; Toussaint, Paule J; Evans, Alan C
2014-11-01
Misfolded proteins (MP) are a key component in aging and associated neurodegenerative disorders. For example, misfolded Amyloid-ß (Aß) and tau proteins are two neuropathogenic hallmarks of Alzheimer's disease. Mechanisms underlying intra-brain MP propagation/deposition remain essentially uncharacterized. Here, is introduced an epidemic spreading model (ESM) for MP dynamics that considers propagation-like interactions between MP agents and the brain's clearance response across the structural connectome. The ESM reproduces advanced Aß deposition patterns in the human brain (explaining 46∼56% of the variance in regional Aß loads, in 733 subjects from the ADNI database). Furthermore, this model strongly supports a) the leading role of Aß clearance deficiency and early Aß onset age during Alzheimer's disease progression, b) that effective anatomical distance from Aß outbreak region explains regional Aß arrival time and Aß deposition likelihood, c) the multi-factorial impact of APOE e4 genotype, gender and educational level on lifetime intra-brain Aß propagation, and d) the modulatory impact of Aß propagation history on tau proteins concentrations, supporting the hypothesis of an interrelated pathway between Aß pathophysiology and tauopathy. To our knowledge, the ESM is the first computational model highlighting the direct link between structural brain networks, production/clearance of pathogenic proteins and associated intercellular transfer mechanisms, individual genetic/demographic properties and clinical states in health and disease. In sum, the proposed ESM constitutes a promising framework to clarify intra-brain region to region transference mechanisms associated with aging and neurodegenerative disorders.
Iturria-Medina, Yasser; Sotero, Roberto C.; Toussaint, Paule J.; Evans, Alan C.
2014-01-01
Misfolded proteins (MP) are a key component in aging and associated neurodegenerative disorders. For example, misfolded Amyloid-ß (Aß) and tau proteins are two neuropathogenic hallmarks of Alzheimer's disease. Mechanisms underlying intra-brain MP propagation/deposition remain essentially uncharacterized. Here, is introduced an epidemic spreading model (ESM) for MP dynamics that considers propagation-like interactions between MP agents and the brain's clearance response across the structural connectome. The ESM reproduces advanced Aß deposition patterns in the human brain (explaining 46∼56% of the variance in regional Aß loads, in 733 subjects from the ADNI database). Furthermore, this model strongly supports a) the leading role of Aß clearance deficiency and early Aß onset age during Alzheimer's disease progression, b) that effective anatomical distance from Aß outbreak region explains regional Aß arrival time and Aß deposition likelihood, c) the multi-factorial impact of APOE e4 genotype, gender and educational level on lifetime intra-brain Aß propagation, and d) the modulatory impact of Aß propagation history on tau proteins concentrations, supporting the hypothesis of an interrelated pathway between Aß pathophysiology and tauopathy. To our knowledge, the ESM is the first computational model highlighting the direct link between structural brain networks, production/clearance of pathogenic proteins and associated intercellular transfer mechanisms, individual genetic/demographic properties and clinical states in health and disease. In sum, the proposed ESM constitutes a promising framework to clarify intra-brain region to region transference mechanisms associated with aging and neurodegenerative disorders. PMID:25412207
Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Tammen, Imke; Raadsma, Herman W; Castle, Kao; Thomson, Peter C
2012-01-01
Canine Hip Dysplasia (CHD) is a common, painful and debilitating orthopaedic disorder of dogs with a partly genetic, multifactorial aetiology. Worldwide, potential breeding dogs are evaluated for CHD using radiographically based screening schemes such as the nine ordinally-scored British Veterinary Association Hip Traits (BVAHTs). The effectiveness of selective breeding based on screening results requires that a significant proportion of the phenotypic variation is caused by the presence of favourable alleles segregating in the population. This proportion, heritability, was measured in a cohort of 13,124 Australian German Shepherd Dogs born between 1976 and 2005, displaying phenotypic variation for BVAHTs, using ordinal, linear and binary mixed models fitted by a Restricted Maximum Likelihood method. Heritability estimates for the nine BVAHTs ranged from 0.14-0.24 (ordinal models), 0.14-0.25 (linear models) and 0.12-0.40 (binary models). Heritability for the summed BVAHT phenotype was 0.30 ± 0.02. The presence of heritable variation demonstrates that selection based on BVAHTs has the potential to improve BVAHT scores in the population. Assuming a genetic correlation between BVAHT scores and CHD-related pain and dysfunction, the welfare of Australian German Shepherds can be improved by continuing to consider BVAHT scores in the selection of breeding dogs; however, as heritability values are only moderate in magnitude, the accuracy and effectiveness of selection could be improved by the use of Estimated Breeding Values in preference to solely phenotype-based selection of breeding animals.
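A deliberately simplified illustration of heritability estimation: the study fitted ordinal, linear, and binary mixed models by REML, but the basic idea can be shown with a balanced paternal half-sib analysis, where h² ≈ 4 × (between-sire variance) / (total phenotypic variance). The simulated data and true heritability are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sires, offspring_per_sire = 200, 20
h2_true = 0.25
# Sires transmit half their breeding value, so the between-sire variance is h2/4.
sire_effects = rng.normal(0, np.sqrt(h2_true / 4), n_sires)
residual = rng.normal(0, np.sqrt(1 - h2_true / 4), (n_sires, offspring_per_sire))
y = sire_effects[:, None] + residual               # standardised hip-score phenotype

# One-way ANOVA variance components for a balanced half-sib design
group_means = y.mean(axis=1)
ms_between = offspring_per_sire * group_means.var(ddof=1)
ms_within = y.var(axis=1, ddof=1).mean()
var_sire = (ms_between - ms_within) / offspring_per_sire
h2_hat = 4 * var_sire / (var_sire + ms_within)
print(f"estimated heritability: {h2_hat:.2f}")
```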
Neigh, G N; Ritschel, L A; Kilpela, L S; Harrell, C S; Bourke, C H
2013-09-26
The genetic, biological, and environmental backgrounds of an organism fundamentally influence the balance between risk and resilience to stress. Sex, age, and environment transact with responses to trauma in ways that can mitigate or exacerbate the likelihood that post-traumatic stress disorder will develop. Translational approaches to modeling affective disorders in animals will ultimately provide novel treatments and a better understanding of the neurobiological underpinnings behind these debilitating disorders. The extant literature on trauma/stress has focused predominately on limbic and cortical structures that innervate the hypothalamic-pituitary-adrenal axis and influence glucocorticoid-mediated negative feedback. It is through these neuroendocrine pathways that a self-perpetuating fear memory can propagate the long-term effects of early life trauma. Recent work incorporating translational approaches has provided novel pathways that can be influenced by early life stress, such as the glucocorticoid receptor chaperones, including FKBP51. Animal models of stress have differing effects on behavior and endocrine pathways; however, complete models replicating clinical characteristics of risk and resilience have not been rigorously studied. This review discusses a four-factor model that considers the importance of studying both risk and resilience in understanding the developmental response to trauma/stress. Consideration of the multifactorial nature of clinical populations in the design of preclinical models and the application of preclinical findings to clinical treatment approaches comprise the core of translational reciprocity, which is discussed in the context of the four-factor model. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Waters, Erika A.; Wheeler, Courtney; Hamilton, Jada G.
2016-01-01
Background Understanding that cancer is caused by both genetic and behavioral risk factors is an important component of genomic literacy. However, a considerable percentage of people in the U.S. do not endorse such multifactorial beliefs. Methods Using nationally representative cross-sectional data from the U.S. Health Information National Trends Survey (N=2,529), we examined how information seeking, information scanning, and key information processing characteristics were associated with endorsing a multifactorial model of cancer causation. Results Multifactorial beliefs about cancer were more common among respondents who engaged in cancer information scanning (p=.001), were motivated to process health information (p=.005), and who reported a family history of cancer (p=.0002). Respondents who reported having previous negative information seeking experiences had lower odds of endorsing multifactorial beliefs (p=.01). Multifactorial beliefs were not associated with cancer information seeking, trusting cancer information obtained from the Internet, trusting cancer information from a physician, self-efficacy for obtaining cancer information, numeracy, or being aware of direct-to-consumer genetic testing (ps>.05). Conclusion Gaining additional understanding of how people access, process, and use health information will be critical for the continued development and dissemination of effective health communication interventions and for the further translation of genomics research to public health and clinical practice. PMID:27661291
Waters, Erika A; Wheeler, Courtney; Hamilton, Jada G
2016-01-01
Understanding that cancer is caused by both genetic and behavioral risk factors is an important component of genomic literacy. However, a considerable percentage of people in the United States do not endorse such multifactorial beliefs. Using nationally representative cross-sectional data from the U.S. Health Information National Trends Survey (N = 2,529), we examined how information seeking, information scanning, and key information-processing characteristics were associated with endorsing a multifactorial model of cancer causation. Multifactorial beliefs about cancer were more common among respondents who engaged in cancer information scanning (p = .001), were motivated to process health information (p = .005), and reported a family history of cancer (p = .0002). Respondents who reported having previous negative information-seeking experiences had lower odds of endorsing multifactorial beliefs (p = .01). Multifactorial beliefs were not associated with cancer information seeking, trusting cancer information obtained from the Internet, trusting cancer information from a physician, self-efficacy for obtaining cancer information, numeracy, or being aware of direct-to-consumer genetic testing (ps > .05). Gaining additional understanding of how people access, process, and use health information will be critical for the continued development and dissemination of effective health communication interventions and for the further translation of genomics research to public health and clinical practice.
Biological adaptive control model: a mechanical analogue of multi-factorial bone density adaptation.
Davidson, Peter L; Milburn, Peter D; Wilson, Barry D
2004-03-21
The mechanism of how bone adapts to every day demands needs to be better understood to gain insight into situations in which the musculoskeletal system is perturbed. This paper offers a novel multi-factorial mathematical model of bone density adaptation which combines previous single-factor models in a single adaptation system as a means of gaining this insight. Unique aspects of the model include provision for interaction between factors and an estimation of the relative contribution of each factor. This interacting system is considered analogous to a Newtonian mechanical system and the governing response equation is derived as a linear version of the adaptation process. The transient solution to sudden environmental change is found to be exponential or oscillatory depending on the balance between cellular activation and deactivation frequencies.
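The "mechanical analogue" above can be illustrated with the standard second-order linear response: depending on damping relative to stiffness, adaptation to a sudden change in the loading environment is exponential (overdamped) or oscillatory (underdamped). This is a generic numerical sketch with illustrative coefficients, not the paper's parameterisation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def response(m, c, k, forcing=1.0, t_end=20.0):
    """Step response of m*x'' + c*x' + k*x = forcing (x = deviation in bone density)."""
    def rhs(t, state):
        x, v = state
        return [v, (forcing - c * v - k * x) / m]
    t = np.linspace(0, t_end, 400)
    sol = solve_ivp(rhs, (0, t_end), [0.0, 0.0], t_eval=t)
    return t, sol.y[0]

t, x_over = response(m=1.0, c=4.0, k=1.0)    # overdamped: exponential approach
t, x_under = response(m=1.0, c=0.4, k=1.0)   # underdamped: oscillatory approach
print(x_over[-1], x_under[-1])               # both settle near forcing/k = 1.0
```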
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan
2017-01-01
Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" in combinatorial design theory, a novel method for designing multifactorial bio-chemical experiments is proposed, where balanced templates in combinatorial design are used to select the conditions for observation. Balanced experimental data that cover all the influencing factors of the experiments can be obtained for further processing, for example as a training set for machine learning models. Finally, software based on the proposed method is developed for designing experiments in which each influencing factor is covered a certain number of times.
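A small illustration of "balance" in the combinatorial-design sense: a Latin square guarantees that each treatment level appears exactly once in every row and every column, so a small fraction of the full factorial still covers each factor level the same number of times. This is a generic construction for illustration, not the balanced templates used in the paper.

```python
def latin_square(levels):
    """n x n Latin square: cell (i, j) gets treatment (i + j) mod n."""
    n = len(levels)
    return [[levels[(i + j) % n] for j in range(n)] for i in range(n)]

# Illustrative: 4 concentration levels assigned across 4 runs x 4 batches.
for row in latin_square(["c1", "c2", "c3", "c4"]):
    print(row)
```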
Spurdle, Amanda B
2010-06-01
Multifactorial models developed for BRCA1/2 variant classification have proved very useful for delineating BRCA1/2 variants associated with very high risk of cancer, or with little clinical significance. Recent linkage of this quantitative assessment of risk to clinical management guidelines has provided a basis to standardize variant reporting, variant classification and management of families with such variants, and can theoretically be applied to any disease gene. As proof of principle, the multifactorial approach already shows great promise for application to the evaluation of mismatch repair gene variants identified in families with suspected Lynch syndrome. However there is need to be cautious of the noted limitations and caveats of the current model, some of which may be exacerbated by differences in ascertainment and biological pathways to disease for different cancer syndromes.
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the models' marginal likelihoods and prior probabilities. The heavy computation burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computation burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: conceptual-model marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
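A compact sketch of two of the marginal likelihood estimators compared above, computed for a conjugate toy model, and of the resulting BMA weights. The arithmetic mean estimator (AME) averages the likelihood over prior draws; the harmonic mean estimator (HME) uses posterior draws. The thermodynamic integration estimator requires tempered chains and is omitted; the toy model, sample sizes, and the rival model's marginal likelihood are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = rng.normal(0.5, 1.0, 30)                 # toy observations

def loglik(theta):
    # log-likelihood of all data for each candidate parameter value
    return stats.norm.logpdf(data[:, None], loc=theta, scale=1.0).sum(axis=0)

# AME: average likelihood over draws from the prior N(0, 2^2)
prior_draws = rng.normal(0.0, 2.0, 20000)
ame = np.exp(loglik(prior_draws)).mean()

# HME: harmonic mean of the likelihood over posterior draws
# (posterior is available in closed form for this conjugate toy model)
post_var = 1.0 / (1.0 / 2.0**2 + len(data) / 1.0)
post_mean = post_var * data.sum()
post_draws = rng.normal(post_mean, np.sqrt(post_var), 20000)
hme = 1.0 / np.mean(1.0 / np.exp(loglik(post_draws)))

print(f"log marginal likelihood: AME {np.log(ame):.2f}, HME {np.log(hme):.2f}")

# BMA weights from marginal likelihoods under equal prior model probabilities;
# the second value stands in for a hypothetical rival conceptual model.
log_ml = np.array([np.log(ame), np.log(ame) - 3.0])
weights = np.exp(log_ml - log_ml.max())
print("BMA weights:", weights / weights.sum())
```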
Theories of Memory and Aging: A Look at the Past and a Glimpse of the Future
Festini, Sara B.
2017-01-01
The present article reviews theories of memory and aging over the past 50 years. Particularly notable is a progression from early single-mechanism perspectives to complex multifactorial models proposed to account for commonly observed age deficits in memory function. The seminal mechanistic theories of processing speed, limited resources, and inhibitory deficits are discussed and viewed as especially important theories for understanding age-related memory decline. Additionally, advances in multivariate techniques including structural equation modeling provided new tools that led to the development of more complex multifactorial theories than existed earlier. The important role of neuroimaging is considered, along with the current prevalence of intervention studies. We close with predictions about new directions that future research on memory and aging will take. PMID:27257229
Moderator's view: Predictive models: a prelude to precision nephrology.
Zoccali, Carmine
2017-05-01
Appropriate diagnosis is fundamental in medicine because it sets the basis for the prediction of disease outcome at the single patient level (prognosis) and decisions regarding the most appropriate therapy. However, given the large series of social, clinical and biological factors that determine the likelihood of an individual's future outcome, prognosis only partly depends on diagnosis and aetiology, and treatment is not decided solely on the basis of the underlying diagnosis. This issue is crucial in multifactorial diseases like atherosclerosis, where the use of statins has now shifted from 'treating hypercholesterolaemia' to 'treating the risk of adverse cardiovascular events'. Approaches that take due account of prognosis limit the lingering risk of over-diagnosis and maximize the value of prognostic information in the clinical decision process. In the nephrology realm, the application of a well-validated risk equation for kidney failure in Canada led to a 35% reduction in new referrals. Prognostic models based on simple clinical data extractable from clinical files have recently been developed to predict all-cause and cardiovascular mortality in end-stage kidney disease patients. However, research on predictive models in renal diseases remains suboptimal; failure to account for competing events and measurement errors, and a lack of calibration analyses and external validation, are common flaws in currently available studies. More focus on this blossoming research area is desirable. The nephrology community may now start to apply the best validated risk scores and further test their potential usefulness in chronic kidney disease patients in diverse clinical situations and geographical areas. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Pineda, M; González-Acosta, M; Thompson, B A; Sánchez, R; Gómez, C; Martínez-López, J; Perea, J; Caldés, T; Rodríguez, Y; Landolfi, S; Balmaña, J; Lázaro, C; Robles, L; Capellá, G; Rueda, D
2015-06-01
Lynch syndrome (LS) is an autosomal dominant cancer-susceptibility disease caused by inactivating germline mutations in mismatch repair (MMR) genes. Variants of unknown significance (VUS) are often detected in mutational analysis of MMR genes. Here we describe a large family fulfilling Amsterdam I criteria carrying two rare VUS in the MLH1 gene: c.121G > C (p.D41H) and c.2128A > G (p.N710D). Collection of clinico-pathological data, multifactorial analysis, in silico predictions, and functional analyses were used to elucidate the clinical significance of the identified MLH1 VUS. Only the c.121G > C variant cosegregated with LS-associated tumors in the family. Diagnosed colorectal tumors were microsatellite unstable although immunohistochemical staining revealed no loss of MMR proteins expression. Multifactorial likelihood analysis classified c.2128A > G as a non-pathogenic variant and c.121G > C as pathogenic. In vitro functional tests revealed impaired MMR activity and diminished expression of c.121G > C. Accordingly, the N710 residue is located in the unconserved MLH1 C-terminal domain, whereas D41 is highly conserved and located in the ATPase domain. The obtained results will enable adequate genetic counseling of c.121G > C and c.2128A > G variant carriers and their families. Furthermore, they exemplify how cumulative data and comprehensive analyses are mandatory to refine the classification of MMR variants. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Salahuddin, Nawal; Mohamed, Alaa; Alharbi, Nadia; Ansari, Hamad; Zaza, Khaled J; Marashly, Qussay; Hussain, Iqbal; Solaiman, Othman; Wetterberg, Torbjorn V; Maghrabi, Khalid
2016-10-25
Unexplained coma after critical illness can be multifactorial. We evaluated the diagnostic ability of bedside Optic Nerve Sheath Diameter [ONSD] as a screening test for non-traumatic radiographic cerebral edema. In a prospective study, patients in mixed medical-surgical intensive care units [ICUs] with non-traumatic coma [GCS < 9] underwent bedside ultrasonographic ONSD measurements. Non-traumatic radiographic cerebral edema [NTRCE] was defined as > 5 mm midline shift, cisternal, sulcal effacement, or hydrocephalus on CT. NTRCE was identified in 31 of 102 patients [30.4 %]. The area under the ROC curve for detecting radiographic edema by ONSD was 0.785 [95 % CI 0.695-0.874, p <0.001]. An ONSD of 0.57 cm was found to be the best cutoff threshold, with a sensitivity of 84 %, specificity of 71 %, and AUC 0.785 [95 % CI 0.695-0.874, p <0.001]. Using ONSD as a bedside test increased the post-test odds ratio [OR] for NTRCE by 2.89 times [positive likelihood ratio], whereas the post-test OR for NTRCE decreased markedly given a negative ONSD test [ONSD measurement less than 0.57 cm]; negative likelihood ratio 0.22. The use of ONSD as a bedside test in patients with non-traumatic coma has diagnostic value in identifying patients with non-traumatic radiographic cerebral edema.
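The reported likelihood ratios follow directly from the sensitivity and specificity: LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity, and they update pre-test odds to post-test odds. A worked check using the figures quoted above; the pre-test probability is taken from the observed prevalence for illustration.

```python
sensitivity, specificity = 0.84, 0.71

lr_pos = sensitivity / (1 - specificity)        # ~2.9, matching the reported positive LR
lr_neg = (1 - sensitivity) / specificity        # ~0.22, matching the reported negative LR

def post_test_probability(pre_test_prob, lr):
    odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Pre-test probability near the observed prevalence of 30.4%
print(lr_pos, lr_neg)
print(post_test_probability(0.304, lr_pos))   # probability of edema after ONSD >= 0.57 cm
print(post_test_probability(0.304, lr_neg))   # probability of edema after ONSD < 0.57 cm
```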
González-Acosta, Maribel; Del Valle, Jesús; Navarro, Matilde; Thompson, Bryony A; Iglesias, Sílvia; Sanjuan, Xavier; Paúles, María José; Padilla, Natàlia; Fernández, Anna; Cuesta, Raquel; Teulé, Àlex; Plotz, Guido; Cadiñanos, Juan; de la Cruz, Xavier; Balaguer, Francesc; Lázaro, Conxi; Pineda, Marta; Capellá, Gabriel
2017-10-01
The clinical spectrum of germline mismatch repair (MMR) gene variants continues to increase, encompassing Lynch syndrome, Constitutional MMR Deficiency (CMMRD), and the recently reported MSH3-associated polyposis. Genetic diagnosis of these hereditary cancer syndromes is often hampered by the presence of variants of unknown significance (VUS) and overlapping phenotypes. Two PMS2 VUS, c.2149G>A (p.V717M) and c.2444C>T (p.S815L), were identified in trans in one individual diagnosed with early-onset colorectal cancer (CRC) who belonged to a family fulfilling clinical criteria for hereditary cancer. Clinico-pathological data, multifactorial likelihood calculations and functional analyses were used to refine their clinical significance. Likelihood analysis based on cosegregation and tumor data classified the c.2444C>T variant as pathogenic, which was supported by impaired MMR activity associated with diminished protein expression in functional assays. Conversely, the c.2149G>A variant displayed MMR proficiency and protein stability. These results, in addition to the conserved PMS2 expression in normal tissues and the absence of germline microsatellite instability (gMSI) in the biallelic carrier, ruled out a CMMRD diagnosis. The use of comprehensive strategies, including functional and clinico-pathological information, is mandatory to improve the clinical interpretation of naturally occurring MMR variants. This is critical for appropriate clinical management of cancer syndromes associated with MMR gene mutations.
Assessment of parametric uncertainty for groundwater reactive transport modeling.
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
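A stripped-down sketch of Bayesian parameter estimation with a Gaussian residual likelihood via random-walk Metropolis sampling. The study used the DREAM(zs) sampler and the formal generalized likelihood of Schoups and Vrugt (2010), which relax the independence, homoscedasticity, and normality assumptions made explicit in this toy version; the model, data, priors, and tuning values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 40)
obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)   # toy "transport" observations

def log_posterior(theta):
    a, k, log_sigma = theta
    if not (0 < a < 10 and 0 < k < 5):            # flat priors on a bounded box
        return -np.inf
    sigma = np.exp(log_sigma)
    resid = obs - a * np.exp(-k * t)
    # Gaussian, independent, homoscedastic residual likelihood
    return -0.5 * np.sum(resid**2) / sigma**2 - t.size * np.log(sigma)

theta = np.array([1.0, 0.5, np.log(0.1)])
samples, logp = [], log_posterior(theta)
for _ in range(20000):
    proposal = theta + rng.normal(0, [0.05, 0.02, 0.05])
    logp_new = log_posterior(proposal)
    if np.log(rng.random()) < logp_new - logp:    # Metropolis acceptance
        theta, logp = proposal, logp_new
    samples.append(theta.copy())
samples = np.array(samples[5000:])                # discard burn-in
print(samples.mean(axis=0))                       # posterior means for a, k, log(sigma)
```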
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
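The mechanics of profiling can be shown outside any mixed-model software: fix the parameter of interest at a grid of constants, maximize the likelihood over the remaining parameters at each value, and read the estimate off the profile. The sketch below profiles the error standard deviation of a simple normal model; the data and names are hypothetical and the inner problem is deliberately trivial.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.5, size=50)          # hypothetical data

def neg_loglik(mu, sigma):
    return 0.5 * len(y) * np.log(2 * np.pi * sigma**2) + 0.5 * np.sum((y - mu)**2) / sigma**2

def profile_loglik(sigma):
    """Fix sigma at a known constant and maximize the likelihood over the remaining
    parameter mu, which here reduces to a standard one-parameter problem."""
    inner = minimize_scalar(lambda mu: neg_loglik(mu, sigma))
    return -inner.fun

sigmas = np.linspace(0.8, 2.5, 40)
profile = [profile_loglik(s) for s in sigmas]
print("profile-likelihood estimate of sigma:", sigmas[int(np.argmax(profile))])
```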
Horikawa, Yukio
2018-02-06
Maturity-onset diabetes of the young (MODY) is a form of diabetes classically characterized by autosomal dominant inheritance, onset before the age of 25 years in at least one family member, and partly preserved pancreatic β-cell function. The 14 responsible genes are reported as MODY types 1-14, of which MODY 2 and 3 may be the most common forms. Although MODY is currently classified as diabetes of a single gene defect, it has become clear that mutations in rare MODYs, such as MODY 5 and MODY 6, have small effects and low penetrance. In addition, as there are differences in the clinical phenotypes caused by the same mutation even within the same family, other phenotype-modifying factors are thought to exist; MODY could well have characteristics of type 2 diabetes mellitus, which is of multifactorial origin. Here, we outline the effects of genetic and environmental factors on the known phenotypes of MODY, focusing mainly on the examples of MODY 5 and 6, which have low penetrance, as suggestive models for elucidating the multifactorial origin of type 2 diabetes mellitus. © 2018 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.
Daack-Hirsch, Sandra; Shah, Lisa L; Cady, Alyssa D
2018-03-01
Using the familial risk perception (FRP) model as a framework, we elicited causal and inheritance explanations for type 2 diabetes (T2D) from people who do not have T2D but have a family history for it. We identified four composite mental models for cause of T2D: (a) purely genetic; (b) purely behavioral/environmental; (c) direct multifactorial, in which risk factors interact and over time directly lead to T2D; and (d) indirect multifactorial, in which risk factors interact and over time cause a precursor health condition (such as obesity or metabolic syndrome) that leads to T2D. Interestingly, participants described specific risk factors such as genetics, food habits, lifestyle, weight, and culture as "running in the family." Our findings provide insight into lay beliefs about T2D that can be used by clinicians to anticipate or make sense of responses to questions they pose to patients about mental models for T2D.
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
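The construction of the hierarchical likelihood can be sketched for a simple shared frailty model with a constant baseline hazard and log-normal frailties: the criterion is the log-likelihood of the survival data given the frailties plus the log-density of the frailties, maximized jointly. The data layout and parameter names below are illustrative, and the paper's bias-correction step is not reproduced.

```python
import numpy as np

def h_loglik(beta, log_u, times, events, x, cluster, sigma2):
    """Joint (hierarchical) log-likelihood: data given frailties + frailty prior.
    Constant baseline hazard exp(beta[0]); covariate effect beta[1];
    log-frailties log_u ~ N(0, sigma2), shared within clusters."""
    lam0 = np.exp(beta[0])
    eta = beta[1] * x + log_u[cluster]                 # per-subject log hazard multiplier
    hazard = lam0 * np.exp(eta)
    ll_data = np.sum(events * np.log(hazard) - hazard * times)   # exponential survival terms
    ll_frail = -0.5 * np.sum(log_u**2 / sigma2 + np.log(2 * np.pi * sigma2))
    return ll_data + ll_frail

rng = np.random.default_rng(2)
n, k = 200, 20
cluster = rng.integers(0, k, n)
x = rng.normal(size=n)
true_logu = rng.normal(scale=0.5, size=k)
times = rng.exponential(1.0 / (0.1 * np.exp(0.7 * x + true_logu[cluster])))
events = np.ones(n, dtype=int)                         # no censoring in this toy example
print(h_loglik(np.array([np.log(0.1), 0.7]), true_logu, times, events, x, cluster, 0.25))
```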
Illness in the Family: Old Myths and New Truths! Unit for Child Studies. Selected Papers Number 19.
ERIC Educational Resources Information Center
Perkins, Richard; Oldenburg, Brian
A multifactorial model of phases in the development and progress of physical illness is described, and the model's utility is illustrated. The model consists of antecedent and concurrent conditions and consequences related to physical, psychological, and social factors and their interaction. The application of the model is illustrated by a…
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts; some are "true zeros," indicating that the drug-adverse event pair cannot occur, and these are distinguished from the remaining zero counts, which simply indicate that the drug-adverse event pair has not occurred or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g., gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
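As a generic illustration of the ingredients (not the authors' exact formulation, and using direct numerical optimization rather than the EM algorithm for brevity), the sketch below writes down a zero-inflated Poisson log-likelihood and forms a likelihood ratio statistic comparing a fit with common parameters for all cells against a fit that allows a flagged drug-adverse event group its own parameters. All counts and names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, chi2

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson with mixing probability pi and rate lam."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))            # logit-parametrized zero-inflation probability
    lam = np.exp(params[1])                           # log-parametrized Poisson rate
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def fit(y):
    """Maximized log-likelihood of the ZIP model for a set of counts."""
    res = minimize(zip_negloglik, x0=np.array([0.0, 0.0]), args=(y,), method="Nelder-Mead")
    return -res.fun

rng = np.random.default_rng(3)
background = rng.poisson(1.0, 500) * rng.binomial(1, 0.7, 500)   # hypothetical cells, ~30% structural zeros
target = rng.poisson(3.0, 20) * rng.binomial(1, 0.7, 20)         # hypothetical flagged drug-event group

ll_null = fit(np.concatenate([background, target]))              # common (pi, lam) for all cells
ll_alt = fit(background) + fit(target)                           # separate (pi, lam) for the flagged group
lrt = 2 * (ll_alt - ll_null)
print("LRT statistic:", lrt, "p-value:", chi2.sf(lrt, df=2))     # 2 extra parameters in the alternative
```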
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
The public health disaster trust scale: validation of a brief measure.
Eisenman, David P; Williams, Malcolm V; Glik, Deborah; Long, Anna; Plough, Alonzo L; Ong, Michael
2012-01-01
Trust contributes to community resilience through the critical influence it has on the community's responses to public health recommendations before, during, and after disasters. However, trust in public health is a multifactorial concept that has rarely been defined and measured empirically in public health jurisdictional risk assessment surveys. Measuring trust helps public health departments identify and ameliorate a threat to effective risk communications and increase resilience. Such a measure should be brief to be incorporated into assessments conducted by public health departments. We report on a brief scale of public health disaster-related trust, its psychometric properties, and its validity. On the basis of a literature review, our conceptual model of public health disaster-related trust, and previously conducted focus groups, we postulated that public health disaster-related trust includes 4 major domains: competency, honesty, fairness, and confidentiality. The data come from a random-digit-dialed telephone survey of the Los Angeles County population, conducted in 2004-2005 in 6 languages, of 2588 adults aged 18 years and older, including oversamples of African Americans and Asian Americans. Trust was measured by 4 items scored on a 4-point Likert scale, and a summary score from 4 to 16 was constructed. Scores ranged from 4 to 16 and were normally distributed with a mean of 8.5 (SD 2.7). Cronbach α = 0.79. As hypothesized, scores were lower among racial/ethnic minority populations than whites. Also, trust was associated with lower likelihood of following public health recommendations in a hypothetical disaster and lower likelihood of household disaster preparedness. The Public Health Disaster Trust scale may facilitate identifying communities where trust is low and prioritizing them for inclusion in community partnership building efforts under Function 2 of the Centers for Disease Control and Prevention's Public Health Preparedness Capability 1. The scale is brief, reliable, and validated in multiple ethnic populations and languages.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
A multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents the plausibility of that model. In a Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE searches the parameter space gradually from low-likelihood areas to high-likelihood areas, and this evolution is accomplished iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, a more efficient and elaborate sampling algorithm, DREAM(ZS), can be integrated into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
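A bare-bones version of the nested sampling loop is sketched below for a one-dimensional toy problem with a uniform prior, where the evidence is known in closed form. Plain rejection sampling from the constrained prior stands in for the local sampling step that the abstract proposes to replace with M-H or DREAM(ZS); all names and settings are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(4)

def loglike(theta):
    # toy model: one observation y = 1.2 with unit Gaussian noise; prior is theta ~ U(-5, 5)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (1.2 - theta) ** 2

n_live, n_iter = 200, 1000
live = rng.uniform(-5, 5, n_live)              # live points drawn from the prior
live_ll = loglike(live)
dead_ll, log_w = [], []

for i in range(n_iter):
    worst = int(np.argmin(live_ll))
    # prior volume shrinks geometrically: X_i ~ exp(-i / n_live)
    log_w.append(np.log(np.exp(-i / n_live) - np.exp(-(i + 1) / n_live)))
    dead_ll.append(live_ll[worst])
    # local sampling step: rejection sampling from the prior subject to L > L_min
    # (an MCMC move such as M-H or DREAM(ZS) would replace this in practice)
    while True:
        cand = rng.uniform(-5, 5)
        if loglike(cand) > live_ll[worst]:
            live[worst], live_ll[worst] = cand, loglike(cand)
            break

log_z = logsumexp(np.array(dead_ll) + np.array(log_w))                               # accumulated shells
log_z = np.logaddexp(log_z, logsumexp(live_ll) - np.log(n_live) - n_iter / n_live)   # remaining live points
print("nested-sampling log-evidence:", log_z, "(about log(0.1) for this toy problem)")
```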
A Developmental-Genetic Model of Alcoholism: Implications for Genetic Research.
ERIC Educational Resources Information Center
Devor, Eric J.
1994-01-01
Research on biological-genetic markers of alcoholism is discussed in the context of a multifactorial, heterogeneous, developmental model. Suggested that strategies used in linkage and association studies will require modification. Also suggested several extant associations of genetic markers represent true secondary interactive phenomena that alter…
Likelihood Ratio Tests for Special Rasch Models
ERIC Educational Resources Information Center
Hessen, David J.
2010-01-01
In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits to the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…
Schulz, Claudia; Lindlbauer, Ivonne; Rapp, Kilian; Becker, Clemens; König, Hans-Helmut
2017-06-01
Femoral fractures are frequently consequences of falls in nursing homes and are associated with considerable costs and unfavorable outcomes such as immobility and mortality. The purpose of this study was to examine the long-term effectiveness of a multifactorial fall and fracture prevention program in nursing homes in terms of reducing femoral fractures. Retrospective cohort study. Nursing homes. Health insurance claims data for 2005-2013 including 85,148 insurants of a sickness fund (Allgemeine Ortskrankenkasse Bayern), aged 65 years or older and living in 802 nursing homes in Bavaria, Germany. The fall prevention program was implemented stepwise in 4 time-lagged waves in almost 1,000 nursing homes in Bavaria, Germany, and was financially supported by a Bavarian statutory health insurance for the initial period of 3 years after implementation. The components of the Bavarian Fall and Fracture Prevention Program were related to the staff (education) and to the residents (progressive strength and balance training, medication, hip protectors), and included suggested environmental adaptations as well as fall documentation and feedback on fall statistics. Data were used to create an unbalanced panel data set with observations per resident and quarterly period. We designed each wave to have 9 quarters (2.25 years) before implementation and 15 quarters (3.75 years) of follow-up. Time trend-adjusted logistic generalized estimating equations were used to examine the impact of implementation of the fall prevention program on the likelihood of femoral fractures, controlling for resident and nursing home characteristics. The analysis took into account that the fall prevention program was implemented in 4 time-lagged waves. The implementation of the fall prevention program was not associated with a significant reduction in femoral fractures. Only a transient reduction of femoral fractures in the first wave was observed. Patient characteristics were significantly associated with the likelihood of femoral fractures (P < .001); sex (women compared to men, odds ratio [OR] = 0.877), age categories 2 (OR = 1.486) and 3 (OR = 1.973) compared to category 1, care level 1 compared to levels 2 (OR = 0.897) and 3 (OR = 0.426), and a prior fracture (OR = 2.230) were all significantly associated with the likelihood of a femoral fracture. There was no evidence for the long-term effectiveness of the fall prevention program in nursing homes. The restriction of the transient reduction to the first implementation wave may be explained by higher motivation among the nursing homes that started the fall prevention program first. Efforts should be directed to further identify factors that determine the long-term effectiveness of fall prevention programs in nursing homes. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Criss, Amy H.; McClelland, James L.
2006-01-01
The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…
NASA Astrophysics Data System (ADS)
Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran
2016-09-01
In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty of the parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions, L1-L4, the Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) are formal. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. The posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, and L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost identical effect on parameter sensitivity. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor, and R-factor, results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable parameter estimates due to violation of the residual-error assumptions. Thus, likelihood function L7 provides credible posterior distributions of the model parameters and can therefore be employed for further applications.
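To illustrate the difference between an informal and a formal objective of the kind listed above, the sketch below computes a Nash-Sutcliffe efficiency and a Gaussian log-likelihood with first-order autoregressive residuals for the same simulated-versus-observed series. The mapping to the paper's L1 and L7 is only approximate, and the series and parameter values are hypothetical.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Informal objective (L1-style): Nash-Sutcliffe efficiency of the simulated series."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ar1_loglik(obs, sim, sigma, phi):
    """Formal likelihood (L7-style): Gaussian residuals with first-order autoregressive
    serial correlation; the first residual uses the stationary AR(1) distribution."""
    res = obs - sim
    innov = res[1:] - phi * res[:-1]
    ll = -0.5 * np.log(2 * np.pi * sigma**2 / (1 - phi**2)) - 0.5 * res[0]**2 * (1 - phi**2) / sigma**2
    ll += np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * innov**2 / sigma**2)
    return ll

rng = np.random.default_rng(5)
sim = np.sin(np.linspace(0, 6, 200)) + 2.0             # hypothetical simulated hydrograph
obs = sim + 0.3 * rng.standard_normal(200)             # hypothetical observations
print(nash_sutcliffe(obs, sim), ar1_loglik(obs, sim, sigma=0.3, phi=0.2))
```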
Feldman, Jonathan M.; Serebrisky, Denise; Spray, Amanda
2012-01-01
Background Causes of children's asthma health disparities are complex. Parents' asthma illness representations may play a role. Purpose The study aims to test a theoretically based, multi-factorial model for ethnic disparities in children's acute asthma visits through parental illness representations. Methods Structural equation modeling was used to investigate the association of parental asthma illness representations, sociodemographic characteristics, health care provider factors, and social–environmental context with children's acute asthma visits among 309 White, Puerto Rican, and African American families. Results Forty-five percent of the variance in illness representations and 30% of the variance in acute visits were accounted for. Statistically significant differences in illness representations were observed by ethnic group. Approximately 30% of the variance in illness representations was explained for Whites, 23% for African Americans, and 26% for Puerto Ricans. The model accounted for >30% of the variance in acute visits for African Americans and Puerto Ricans but only 19% for Whites. Conclusion The model provides preliminary support that ethnic heterogeneity in asthma illness representations affects children's health outcomes. PMID:22160799
How Stuttering Develops: The Multifactorial Dynamic Pathways Theory
ERIC Educational Resources Information Center
Smith, Anne; Weber, Christine
2017-01-01
Purpose: We advanced a multifactorial, dynamic account of the complex, nonlinear interactions of motor, linguistic, and emotional factors contributing to the development of stuttering. Our purpose here is to update our account as the multifactorial dynamic pathways theory. Method: We review evidence related to how stuttering develops, including…
How much to trust the senses: Likelihood learning
Sato, Yoshiyuki; Kording, Konrad P.
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
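The prior-likelihood integration the abstract refers to can be written down for the simplest Gaussian case: the posterior mean is a reliability-weighted average of the prior mean and the sensory cue, so a narrower (better-learned) likelihood pulls the estimate toward the cue. The numbers below are hypothetical.

```python
def combine(prior_mean, prior_var, cue, likelihood_var):
    """Posterior of a Gaussian prior combined with a Gaussian likelihood for a single cue.
    The posterior mean is a reliability-weighted average of the prior mean and the cue."""
    w = prior_var / (prior_var + likelihood_var)          # weight given to the sensory cue
    post_mean = (1 - w) * prior_mean + w * cue
    post_var = prior_var * likelihood_var / (prior_var + likelihood_var)
    return post_mean, post_var

# a narrow (reliable) likelihood pulls the estimate toward the cue,
# a wide (unreliable) likelihood leaves the estimate near the prior
print(combine(0.0, 1.0, cue=2.0, likelihood_var=0.1))
print(combine(0.0, 1.0, cue=2.0, likelihood_var=10.0))
```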
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is therefore used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines, and Indonesia.
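A standard way to obtain the maximum likelihood fit of a two-component normal mixture is the EM algorithm; a minimal sketch on synthetic data is shown below. The initialization and iteration count are arbitrary choices, and the example is not tied to the stock-price and rubber-price data of the paper.

```python
import numpy as np
from scipy.stats import norm

def fit_two_component_mixture(y, n_iter=200):
    """Maximum likelihood fit of a two-component normal mixture via the EM algorithm."""
    w, mu1, mu2, s1, s2 = 0.5, y.min(), y.max(), y.std(), y.std()
    for _ in range(n_iter):
        # E-step: posterior probability that each point belongs to component 1
        p1 = w * norm.pdf(y, mu1, s1)
        p2 = (1 - w) * norm.pdf(y, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step: weighted updates of the mixing proportion, means, and standard deviations
        w = r.mean()
        mu1, mu2 = np.average(y, weights=r), np.average(y, weights=1 - r)
        s1 = np.sqrt(np.average((y - mu1) ** 2, weights=r))
        s2 = np.sqrt(np.average((y - mu2) ** 2, weights=1 - r))
    return w, mu1, mu2, s1, s2

rng = np.random.default_rng(6)
y = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 0.8, 200)])   # synthetic data
print(fit_two_component_mixture(y))
```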
The Robust Learning Model (RLM): A Comprehensive Approach to a New Online University
ERIC Educational Resources Information Center
Neumann, Yoram; Neumann, Edith F.
2010-01-01
This paper outlines the components of the Robust Learning Model (RLM) as a conceptual framework for creating a new online university offering numerous degree programs at all degree levels. The RLM is a multi-factorial model based on the basic belief that successful learning outcomes depend on multiple factors employed together in a holistic…
Acts of God and/or Rites of Families: Accidental Versus Inflicted Child Disabilities.
ERIC Educational Resources Information Center
Meier, John H.; Sloan, Michael P.
A multifactorial model is presented that depicts a representative set of dimensions involved in child abuse and neglect. The model includes parental, ecological, and child factors linked to precipitating situations and/or events that result in child abuse or neglect. An excerpt from the records of an abused child illustrates the model. Also…
Orozco-Beltran, Domingo; Ruescas-Escolano, Esther; Navarro-Palazón, Ana Isabel; Cordero, Alberto; Gaubert-Tortosa, María; Navarro-Perez, Jorge; Carratalá-Munuera, Concepción; Pertusa-Martínez, Salvador; Soler-Bahilo, Enrique; Brotons-Muntó, Francisco; Bort-Cubero, Jose; Nuñez-Martinez, Miguel Angel; Bertomeu-Martinez, Vicente; Gil-Guillen, Vicente Francisco
2013-08-02
To evaluate the effectiveness of a new multifactorial intervention to improve health care for chronic ischemic heart disease patients in primary care. The strategy has two components: a) organizational, for the patient/professional relationship, and b) training for professionals. Experimental study: randomized clinical trial with a one-year follow-up period, conducted in a multicenter primary care setting (15 health centers). For the intervention group, 15 health centers were selected from those participating in the ESCARVAL study. Once a center agreed to participate, patients were randomly selected from the total number of patients with ischemic heart disease registered in the electronic health records. For the control group, a random sample of patients with ischemic heart disease was selected from the electronic records of all 72 health centers. This study aims to evaluate the efficacy of a multifactorial intervention strategy involving patients with ischemic heart disease for the improvement of the degree of control of the cardiovascular risk factors and of the quality of life, number of visits, and number of hospitalizations. NCT01826929.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model carries a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood areas to high-likelihood areas, and this evolution is accomplished iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, the robust and efficient sampling algorithm DREAM(ZS) is incorporated into the local sampling of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from heavy instability. In addition, the heavy computational cost of a large number of model executions is reduced by using adaptive sparse grid surrogates.
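To make the role of the marginal likelihood in BMA concrete, the short sketch below converts prior model weights and already-estimated log marginal likelihoods into posterior model weights and an ensemble prediction. The three log-evidence values and predictions are hypothetical placeholders for values that would come from NSE or another estimator.

```python
import numpy as np
from scipy.special import logsumexp

def bma_weights(log_evidences, prior_weights):
    """Posterior model weights: prior weight times marginal likelihood, normalized."""
    log_post = np.log(prior_weights) + np.array(log_evidences)
    return np.exp(log_post - logsumexp(log_post))

def bma_prediction(predictions, weights):
    """Ensemble prediction as the weighted average of the plausible models' predictions."""
    return np.average(predictions, axis=0, weights=weights)

# hypothetical: three conceptual models with equal prior weights and estimated log-evidences
w = bma_weights([-120.3, -118.9, -125.4], prior_weights=[1 / 3, 1 / 3, 1 / 3])
print(w, bma_prediction(np.array([[1.0, 2.0], [1.2, 1.8], [0.8, 2.4]]), w))
```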
A developmental, biopsychosocial model for the treatment of children with gender identity disorder.
Zucker, Kenneth J; Wood, Hayley; Singh, Devita; Bradley, Susan J
2012-01-01
This article provides a summary of the therapeutic model and approach used in the Gender Identity Service at the Centre for Addiction and Mental Health in Toronto. The authors describe their assessment protocol, describe their current multifactorial case formulation model, including a strong emphasis on developmental factors, and provide clinical examples of how the model is used in the treatment.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
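The contrast between the maximum likelihood and restricted maximum likelihood criteria can be sketched for the simplest version of this setting: several methods measure the same quantity, each with a known within-method variance, and a between-method variance is to be estimated. The sketch below is a generic illustration of the two objective functions, not the Groebner-basis solution of the paper; the data values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical: measurements of the same quantity by four methods,
# each with a known within-method variance s2
x = np.array([10.2, 9.8, 10.9, 10.4])
s2 = np.array([0.04, 0.09, 0.16, 0.06])

def neg_ml(tau2):
    """Profile ML criterion in the between-method variance tau2 (consensus mean profiled out)."""
    v = tau2 + s2
    mu = np.sum(x / v) / np.sum(1 / v)                 # generalized least squares consensus mean
    return 0.5 * np.sum(np.log(v) + (x - mu) ** 2 / v)

def neg_reml(tau2):
    """REML criterion: ML plus the penalty for having estimated the consensus mean."""
    v = tau2 + s2
    return neg_ml(tau2) + 0.5 * np.log(np.sum(1 / v))

ml = minimize_scalar(neg_ml, bounds=(0.0, 10.0), method="bounded").x
reml = minimize_scalar(neg_reml, bounds=(0.0, 10.0), method="bounded").x
print("ML between-method variance:", ml, " REML:", reml)
```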
Viewing a Phonological Deficit within a Multifactorial Model of Dyslexia
ERIC Educational Resources Information Center
Catts, Hugh W.; McIlraith, Autumn; Bridges, Mindy Sittner; Nielsen, Diane Corcoran
2017-01-01
Participants were administered multiple measures of phonological awareness, oral language, and rapid automatized naming at the beginning of kindergarten and multiple measures of word reading at the end of second grade. A structural equation model was fit to the data and latent scores were used to identify children with a deficit in phonological…
Hurdle models for multilevel zero-inflated data via h-likelihood.
Molas, Marek; Lesaffre, Emmanuel
2010-12-30
Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
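Stripped of the random effects and the h-likelihood machinery, the hurdle model's likelihood has two parts: a Bernoulli model for zero versus positive counts and a zero-truncated Poisson for the positive counts. A minimal sketch of that log-likelihood is given below with hypothetical counts and parameter values.

```python
import numpy as np
from scipy.stats import poisson

def hurdle_loglik(y, p_zero, lam):
    """Log-likelihood of a hurdle Poisson model: P(y=0) = p_zero, and positive
    counts follow a zero-truncated Poisson with rate lam."""
    ll_zero = np.log(p_zero)
    ll_pos = (np.log(1 - p_zero)
              + poisson.logpmf(y, lam)
              - np.log(1 - np.exp(-lam)))              # truncation: condition on y > 0
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

rng = np.random.default_rng(7)
counts = np.where(rng.random(500) < 0.4, 0, rng.poisson(2.5, 500) + 1)   # hypothetical zero-heavy counts
print(hurdle_loglik(counts, p_zero=0.4, lam=2.5))
```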
Early-onset schizophrenia: Symptoms and social class of origin.
Gallagher, Bernard J; Jones, Brian J
2017-09-01
The genesis of schizophrenia is multifactorial, including biological and environmental risk factors. We tested for an interactive effect between early-onset schizophrenia (EOS) and social class of origin (socioeconomic status, SES). Data were further analyzed for a possible connection to type of schizophrenic symptoms. Sampling/Methods: Data for the study are taken from the medical records of 642 patients from a large state hospital in the northeastern United States. Clinical assessments were divided into positive and negative symptomatology through application of the Scale for the Assessment of Negative Symptoms (SANS), the Scale for the Assessment of Positive Symptoms (SAPS), and the Positive and Negative Syndrome Scale (PANSS). Detailed information about age of onset and SES of origin was obtained through Social Service Assessment interviews. We uncovered a significant impact of EOS among the poor that elevates risk for negative symptomatology. Poor SES alone does not increase the likelihood of EOS, but it magnifies the deleterious effect of EOS on negative symptoms. Future research on these variables may inform the relative contribution of each.
The influence of violent media on children and adolescents: a public-health approach.
Browne, Kevin D; Hamilton-Giachritsis, Catherine
There is continuing debate on the extent of the effects of media violence on children and young people, and how to investigate these effects. The aim of this review is to consider the research evidence from a public-health perspective. A search of published work revealed five meta-analytic reviews and one quasi-systematic review, all of which were from North America. There is consistent evidence that violent imagery in television, film and video, and computer games has substantial short-term effects on arousal, thoughts, and emotions, increasing the likelihood of aggressive or fearful behaviour in younger children, especially in boys. The evidence becomes inconsistent when considering older children and teenagers, and long-term outcomes for all ages. The multifactorial nature of aggression is emphasised, together with the methodological difficulties of showing causation. Nevertheless, a small but significant association is shown in the research, with an effect size that has a substantial effect on public health. By contrast, only weak evidence from correlation studies links media violence directly to crime.
MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS
Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-07-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference, and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
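A toy version of the density-estimation idea is sketched below: compress each simulated dataset to one summary per parameter (here simply the sample mean), estimate the joint density of parameter and summary from forward simulations (here with a kernel density estimate rather than a neural density estimator), and slice it at the observed summary to obtain an approximate posterior. Everything about the simulator and settings is illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)

def simulate(theta, n=50):
    """Forward simulator: n noisy observations around theta (no likelihood is evaluated)."""
    return rng.normal(theta, 1.0, size=n)

def compress(data):
    """Compress the dataset to one number per parameter (here the sample mean)."""
    return data.mean()

# forward simulations: draw parameters from the prior, simulate, compress
thetas = rng.uniform(-3, 3, 5000)
summaries = np.array([compress(simulate(t)) for t in thetas])

# density-estimation LFI: fit the joint density of (theta, summary) ...
joint = gaussian_kde(np.vstack([thetas, summaries]))

# ... and slice it at the observed summary to get an (unnormalized) posterior over theta
observed = compress(simulate(1.0))
grid = np.linspace(-3, 3, 200)
posterior = joint(np.vstack([grid, np.full_like(grid, observed)]))
step = grid[1] - grid[0]
posterior /= posterior.sum() * step
print("approximate posterior mean:", np.sum(grid * posterior) * step)
```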
Arlen, Angela M; Alexander, Siobhan E; Wald, Moshe; Cooper, Christopher S
2016-10-01
Factors influencing the decision to surgically correct vesicoureteral reflux (VUR) include risk of breakthrough febrile urinary tract infection (fUTI) or renal scarring, and decreased likelihood of spontaneous resolution. Improved identification of children at risk for recurrent fUTI may impact management decisions, and allow for more individualized VUR management. We have developed and investigated the accuracy of a multivariable computational model to predict probability of breakthrough fUTI in children with primary VUR. Children with primary VUR and detailed clinical and voiding cystourethrogram (VCUG) data were identified. Patient demographics, VCUG findings including grade, laterality, and bladder volume at onset of VUR, UTI history, presence of bladder-bowel dysfunction (BBD), and breakthrough fUTI were assessed. The VCUG dataset was randomized into a training set of 288 with a separate representational cross-validation set of 96. Various model types and architectures were investigated using neUROn++, a set of C++ programs. Two hundred fifty-five children (208 girls, 47 boys) diagnosed with primary VUR at a mean age of 3.1 years (±2.6) met all inclusion criteria. A total of 384 VCUGs were analyzed. Median follow-up was 24 months (interquartile range 12-52 months). Sixty-eight children (26.7%) experienced 90 breakthrough fUTI events. Dilating VUR, reflux occurring at low bladder volumes, BBD, and history of multiple infections/fUTI were associated with breakthrough fUTI (Table). A 2-hidden node neural network model had the best fit with a receiver operating characteristic curve area of 0.755 for predicting breakthrough fUTI. The risk of recurrent febrile infections, renal parenchymal scarring, and likelihood of spontaneous resolution, as well as parental preference, all influence management of primary VUR. The genesis of UTI is multifactorial, making precise prediction of an individual child's risk of breakthrough fUTI challenging. Demonstrated risk factors for UTI include age, gender, VUR grade, reflux at low bladder volume, BBD, and UTI history. We developed a prognostic calculator using a multivariable model with 76% accuracy that can be deployed for availability on the Internet, allowing input variables to be entered to calculate the odds of an individual child developing a breakthrough fUTI. A computational model using multiple variables including bladder volume at onset of VUR provides individualized prediction of children at risk for breakthrough fUTI. A web-based prognostic calculator based on this model will provide a useful tool for assessing personalized risk of breakthrough fUTI in children with primary VUR. Copyright © 2016 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
Factors related to choosing an academic career track among spine fellowship applicants.
Park, Daniel K; Rhee, John M; Wu, Baohua; Easley, Kirk
2013-03-01
Retrospective review. To identify factors associated with the likelihood of spine surgery fellowship applicants choosing an academic job upon fellowship completion. Training academic spine surgeons is an important goal of many spine fellowships. However, there are no established criteria associated with academic job choice to guide selection committees. Two hundred three consecutive applications of candidates who were granted an interview to a single spine surgical fellowship from 2005 to 2010 were analyzed. Factors investigated included the following: membership in honor societies; number of publications, presentations, and book chapters; age; completion of an additional degree; completion of a research fellowship; teaching experience; marital status; graduation from a top-20 school; attendance in a residency with a spine fellowship; and comments made in personal statements and letters of recommendation. The job taken upon graduation from fellowship was determined. The χ2 test or Fisher exact test was used to estimate the strength of the association between the covariates and response. Significant variables were selected for further multivariate analysis. The following were significantly associated in a univariable analysis with academia: 5 or more national presentations; completion of a research fellowship; attendance in a top-20 medical school; stated desire in the personal statement to become an academic surgeon; and letters of reference stating likelihood of pursuing academics on hiring the applicant. When significant variables were selected for multivariable analysis, completion of a research fellowship, graduation from a top-20 medical school, and stated desire in the personal statement to become an academic surgeon were most strongly associated with choice of academia. Although job choice is multifactorial, the present study demonstrates that there are objective factors listed on spine fellowship applications associated with a significantly higher likelihood of academic job choice. Analyzing these factors may help selection committees evaluate spine fellowship applicants consistent with the academic missions of their programs.
Risk prediction and aversion by anterior cingulate cortex.
Brown, Joshua W; Braver, Todd S
2007-12-01
The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. The maximum likelihood estimator is consistent as the sample size increases to infinity and is therefore asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood have the smallest asymptotic variance compared with other estimation methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia
2009-06-30
Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to easily explore them with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models (in absolute terms, against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
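The simulate-and-compare logic underlying likelihood-free inference can be shown with a bare-bones rejection ABC sampler for a toy Poisson model; the error-term augmentation and model-diagnostic machinery of ABCμ are not reproduced here, and all names and tolerances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
observed = rng.poisson(4.0, 100)                       # hypothetical observed counts
obs_summary = observed.mean()

def abc_rejection(n_draws=20000, tol=0.1):
    """Rejection ABC: keep prior draws whose simulated summary lies within tol of the data."""
    accepted = []
    for _ in range(n_draws):
        lam = rng.uniform(0.0, 10.0)                   # draw the rate from the prior
        sim = rng.poisson(lam, 100)                    # forward-simulate a dataset (no likelihood)
        if abs(sim.mean() - obs_summary) < tol:        # compare in summary-statistic space
            accepted.append(lam)
    return np.array(accepted)

post = abc_rejection()
print("ABC posterior mean and sd:", post.mean(), post.std(), "acceptances:", len(post))
```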
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
Gilligan, S B; Borecki, I B; Mathew, S; Vijaykumar, M; Malhotra, K C; Rao, D C
1987-09-01
Accessory triradii and the atd angle were examined via complex segregation analysis in order to evaluate possible genetic effects on these dermatoglyphic traits, measured in an endogamous Brahmin caste of peninsular India. The phenotypes considered included: presence of accessory palmar triradii a' and d', associated with the interdigital areas II and IV, respectively; presence of an accessory axial triradius tt' associated with the proximal margin of the palm; and an arctanh-transformation of the atd angle measurement. For all accessory triradii considered in the present investigation familial resemblance was evident. The most parsimonious model which could account for the observed resemblance was a multifactorial model that includes polygenic effects as well as transmissible environmental effects that are inherited in the same pattern as polygenes. Evidence of familial resemblance was also found for the arctanh-transformed atd angle, which could be attributed, initially, to both a major effect and a multifactorial component. Tests of transmission of a putative major gene were performed which yielded results consistent with Mendelian transmission, although an alternative test of no transmission of the major effect also fit the data. In light of these contrasting results we are precluded from accepting with confidence the notion of a major gene influence on the atd angle. We have concluded that the accessory triradii a', d', and tt', and the atd angle are influenced by multifactorial effects, including additive polygenes and possible environmental factors, such as intrauterine effects.
Unified framework to evaluate panmixia and migration direction among multiple sampling locations.
Beerli, Peter; Palczewski, Michal
2010-05-01
For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
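The two marginal-likelihood estimators compared in the abstract can be contrasted on a conjugate toy model in which the power posteriors are Gaussian and the exact evidence is available in closed form. The sketch below only illustrates thermodynamic integration versus the harmonic mean estimator, not the MIGRATE implementation; the data are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(10)
y = rng.normal(0.5, 1.0, 30)              # hypothetical data; model: y ~ N(theta, 1), prior theta ~ N(0, 1)
n, s = len(y), y.sum()

def loglike(theta):
    """Log-likelihood evaluated at an array of theta values."""
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)

def sample_power_posterior(beta, size=4000):
    """The power posterior L(theta)^beta * prior is Gaussian in this conjugate toy model."""
    prec = 1.0 + beta * n
    return rng.normal(beta * s / prec, 1.0 / np.sqrt(prec), size)

# thermodynamic integration: integrate E_beta[log L] over a temperature ladder (trapezoidal rule)
betas = np.linspace(0.0, 1.0, 21)
means = np.array([loglike(sample_power_posterior(b)).mean() for b in betas])
log_z_ti = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2)

# harmonic mean estimator: posterior samples only (known to be unstable)
post_ll = loglike(sample_power_posterior(1.0))
log_z_hm = -(logsumexp(-post_ll) - np.log(post_ll.size))

# exact marginal likelihood for this conjugate model
log_z_exact = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1) - 0.5 * np.sum(y**2) + s**2 / (2 * (n + 1))
print("TI:", log_z_ti, " harmonic mean:", log_z_hm, " exact:", log_z_exact)
```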
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
Ferrer, Assumpta; Formiga, Francesc; Sanz, Héctor; de Vries, Oscar J; Badia, Teresa; Pujol, Ramón
2014-01-01
Background The purpose of this study was to assess the effectiveness of a multifactorial intervention to reduce falls among the oldest-old people, including individuals with cognitive impairment or comorbidities. Methods A randomized, single-blind, parallel-group clinical trial was conducted from January 2009 to December 2010 in seven primary health care centers in Baix Llobregat (Barcelona). Of 696 referred people who were born in 1924, 328 were randomized to an intervention group or a control group. The intervention model used an algorithm and was multifaceted for both patients and their primary care providers. Primary outcomes were risk of falling and time until falls. Data analyses were by intention-to-treat. Results Sixty-five (39.6%) subjects in the intervention group and 48 (29.3%) in the control group fell during follow-up. The difference in the risk of falls was not significant (relative risk 1.28, 95% confidence interval [CI] 0.94–1.75). Cox regression models with time from randomization to the first fall were not significant. Cox models for recurrent falls showed that intervention had a negative effect (hazard ratio [HR] 1.46, 95% CI 1.03–2.09) and that functional impairment (HR 1.42, 95% CI 0.97–2.12), previous falls (HR 1.09, 95% CI 0.74–1.60), and cognitive impairment (HR 1.08, 95% CI 0.72–1.60) had no effect on the assessment. Conclusion This multifactorial intervention among octogenarians, including individuals with cognitive impairment or comorbidities, did not result in a reduction in falls. A history of previous falls, disability, and cognitive impairment had no effect on the program among the community-dwelling subjects in this study. PMID:24596458
Ansell, Emily B; Pinto, Anthony; Edelen, Maria Orlando; Grilo, Carlos M
2013-01-01
Objective To examine 1-, 2-, and 3-factor model structures through confirmatory analytic procedures for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) obsessive–compulsive personality disorder (OCPD) criteria in patients with binge eating disorder (BED). Method Participants were consecutive outpatients (n = 263) with binge eating disorder and were assessed with semi-structured interviews. The 8 OCPD criteria were submitted to confirmatory factor analyses in Mplus Version 4.2 (Los Angeles, CA) in which previously identified factor models of OCPD were compared for fit, theoretical relevance, and parsimony. Nested models were compared for significant improvements in model fit. Results Evaluation of indices of fit in combination with theoretical considerations suggest a multifactorial model is a significant improvement in fit over the current DSM-IV single-factor model of OCPD. Though the data support both 2- and 3-factor models, the 3-factor model is hindered by an underspecified third factor. Conclusion A multifactorial model of OCPD incorporating the factors perfectionism and rigidity represents the best compromise of fit and theory in modelling the structure of OCPD in patients with BED. A third factor representing miserliness may be relevant in BED populations but needs further development. The perfectionism and rigidity factors may represent distinct intrapersonal and interpersonal attempts at control and may have implications for the assessment of OCPD. PMID:19087485
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Spatial gradients caused by diffusion can now be assessed in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
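A hedged sketch of the two ingredients highlighted above, a log-normal measurement likelihood and a profile likelihood for one parameter, using a simple one-dimensional exponential gradient in place of the PDE model of the paper; parameter names, starting values and the data are illustrative.

```python
# Hedged sketch: log-normal measurement likelihood and a profile likelihood
# for one parameter, with a 1-D exponential gradient standing in for the PDE.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 60)
true_A, true_lam, sigma = 2.0, 1.5, 0.2
y = true_A * np.exp(-x / true_lam) * rng.lognormal(sigma=sigma, size=x.size)

def neg_loglik(params, lam=None):
    if lam is None:                          # full fit: A, lambda, sigma
        logA, loglam, logsig = params
        lam = np.exp(loglam)
    else:                                    # profile fit: lambda held fixed
        logA, logsig = params
    mu = logA - x / lam                      # log of the noise-free gradient
    sig = np.exp(logsig)
    r = np.log(y) - mu
    return np.sum(np.log(y) + np.log(sig) + 0.5 * np.log(2 * np.pi)
                  + 0.5 * (r / sig) ** 2)

mle = minimize(neg_loglik, x0=[0.7, 0.4, -1.6], method="Nelder-Mead")

# Profile likelihood for lambda: re-optimise the remaining parameters on a grid.
lam_grid = np.linspace(0.8, 2.5, 40)
profile = [minimize(neg_loglik, x0=[0.7, -1.6], args=(lam,),
                    method="Nelder-Mead").fun for lam in lam_grid]
ci = lam_grid[np.array(profile) <= mle.fun + 1.92]   # ~95% profile interval
print(f"lambda MLE ~ {np.exp(mle.x[1]):.2f}, profile CI [{ci.min():.2f}, {ci.max():.2f}]")
```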
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimize the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2 × 10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
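The pseudoprospective L-test rests on a cell-wise Poisson likelihood for a gridded rate forecast; the sketch below (with made-up rates and counts) shows the joint log-likelihood that such tests compare across competing forecasts.

```python
# Hedged sketch of the joint likelihood used in L-tests: a gridded forecast of
# expected earthquake counts is scored against observed counts with a
# cell-wise Poisson likelihood. All rates and counts here are invented.

import numpy as np
from scipy.stats import poisson

def joint_log_likelihood(forecast_rates, observed_counts):
    """Sum of log Poisson probabilities over all grid cells."""
    forecast_rates = np.asarray(forecast_rates, dtype=float)
    observed_counts = np.asarray(observed_counts)
    return poisson.logpmf(observed_counts, forecast_rates).sum()

# Two toy forecasts over a 4-cell grid: one concentrated, one uniform.
observed = np.array([3, 0, 1, 0])
adaptive = np.array([2.5, 0.3, 1.0, 0.2])   # concentrated near past seismicity
uniform = np.full(4, observed.sum() / 4)    # same total rate, spread evenly

print(joint_log_likelihood(adaptive, observed),
      joint_log_likelihood(uniform, observed))  # higher value = better forecast
```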
Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas
Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles
2016-01-01
Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
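For context, the following sketch shows a fixed-bandwidth Gaussian smoothed seismicity forecast of the kind whose smoothing distance (10-20 km above) the likelihood tests optimize; the coordinates, grid and bandwidth are illustrative.

```python
# Hedged sketch of a fixed-bandwidth smoothed seismicity forecast: epicentres
# are smoothed with an isotropic Gaussian kernel onto a grid, and the kernel
# width is the tuning parameter that likelihood testing selects.

import numpy as np

def smoothed_rate_grid(epicenters_km, grid_x, grid_y, smoothing_km=15.0):
    """Return expected-event density on a grid from past epicentres (x, y in km)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx, dtype=float)
    for ex, ey in epicenters_km:
        d2 = (gx - ex) ** 2 + (gy - ey) ** 2
        rate += np.exp(-d2 / (2 * smoothing_km ** 2)) / (2 * np.pi * smoothing_km ** 2)
    return rate  # integrates (approximately) to the number of past events

# Toy catalogue of three events; grid spans 100 km x 100 km with 2-km cells.
events = [(20.0, 30.0), (22.0, 31.0), (70.0, 80.0)]
grid = np.linspace(0.0, 100.0, 51)
rates = smoothed_rate_grid(events, grid, grid, smoothing_km=15.0)
print(rates.shape, rates.sum() * (2.0 ** 2))  # total ~ number of past events
```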
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
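As a point of reference for the score tests discussed above, the sketch below implements a standard score test for overdispersion in a Poisson regression without any measurement-error correction (the correction is the paper's contribution and is not reproduced here); the simulated covariate and dispersion are illustrative.

```python
# Hedged sketch of a standard score test for overdispersion in a Poisson
# regression (no measurement-error correction): under the null of no
# overdispersion, the statistic is approximately standard normal.

import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(n=5, p=5 / (5 + mu))   # overdispersed counts, mean mu

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = fit.fittedvalues

# Score statistic against Var(Y) = mu * (1 + alpha * mu), testing alpha = 0.
T = np.sum((y - mu_hat) ** 2 - y) / np.sqrt(2 * np.sum(mu_hat ** 2))
print(f"score statistic = {T:.2f}, one-sided p = {1 - norm.cdf(T):.3g}")
```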
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
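A hedged sketch of joint maximum likelihood estimation for the plain dichotomous Rasch model, not the mixture partial credit model of the report, using alternating gradient steps; sample sizes, learning rate and iteration count are illustrative.

```python
# Hedged sketch of joint maximum likelihood (JML) for a dichotomous Rasch
# model: person abilities and item difficulties are updated by alternating
# gradient steps. Extreme (all-0 / all-1) response patterns get no special
# handling here, and the usual JML bias in item estimates is not corrected.

import numpy as np

def rasch_jml(responses, n_iter=500, lr=0.05):
    """responses: persons x items matrix of 0/1 scores."""
    n_persons, n_items = responses.shape
    theta = np.zeros(n_persons)        # person abilities
    beta = np.zeros(n_items)           # item difficulties
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        resid = responses - p
        theta += lr * resid.sum(axis=1)     # d loglik / d theta_i
        beta -= lr * resid.sum(axis=0)      # d loglik / d beta_j
        beta -= beta.mean()                 # identification constraint
    return theta, beta

# Toy data generated from known parameters.
rng = np.random.default_rng(2)
true_theta = rng.normal(size=200)
true_beta = np.linspace(-1.5, 1.5, 10)
probs = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
data = (rng.uniform(size=probs.shape) < probs).astype(int)

theta_hat, beta_hat = rasch_jml(data)
print(np.round(beta_hat, 2))   # should roughly track true_beta (JML bias aside)
```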
The Epigenetic Landscape of Alcoholism
Krishnan, Harish R.; Sakharkar, Amul J.; Teppen, Tara L.; Berkel, Tiffani D.M.; Pandey, Subhash C.
2015-01-01
Alcoholism is a complex psychiatric disorder that has a multifactorial etiology. Epigenetic mechanisms are uniquely capable of accounting for the multifactorial nature of the disease in that they are highly stable and are affected by environmental factors, including alcohol itself. Chromatin remodeling causes changes in gene expression in specific brain regions contributing to the endophenotypes of alcoholism such as tolerance and dependence. The epigenetic mechanisms that regulate changes in gene expression observed in addictive behaviors respond not only to alcohol exposure, but also to comorbid psychopathology such as the presence of anxiety and stress. This review summarizes recent developments in epigenetic research that may play a role in alcoholism. We propose that pharmacologically manipulating epigenetic targets, as demonstrated in various preclinical models, holds great therapeutic potential in the treatment and prevention of alcoholism. PMID:25131543
Oellgaard, Jens; Gæde, Peter; Rossing, Peter; Rørth, Rasmus; Køber, Lars; Parving, Hans-Henrik; Pedersen, Oluf
2018-05-30
In type 2 diabetes mellitus, heart failure is a frequent, potentially fatal and often forgotten complication. Glucose-lowering agents and adjuvant therapies modify the risk of heart failure. We recently reported that 7.8 years of intensified compared with conventional multifactorial intervention in individuals with type 2 diabetes and microalbuminuria in the Steno-2 study reduced the risk of cardiovascular disease and prolonged life over 21.2 years of follow-up. In this post hoc analysis, we examine the impact of intensified multifactorial intervention on the risk of hospitalisation for heart failure. One hundred and sixty individuals were randomised to conventional or intensified multifactorial intervention, using sealed envelopes. The trial was conducted using the Prospective, Randomised, Open, Blinded Endpoints (PROBE) design. After 7.8 years, all individuals were offered intensified therapy and the study continued as an observational follow-up study for an additional 13.4 years. Heart-failure hospitalisations were adjudicated from patient records by an external expert committee blinded for treatment allocation. Event rates were compared using a Cox regression model adjusted for age and sex. Eighty patients were assigned to each treatment group. Ten patients undergoing intensive therapy vs 24 undergoing conventional therapy were hospitalised for heart failure during follow-up. The HR (95% CI) was 0.30 (0.14, 0.64), p = 0.002 in the intensive-therapy group compared with the conventional-therapy group. Including death in the endpoint did not lead to an alternate overall outcome; HR 0.51 (0.34, 0.76), p = 0.001. In a pooled cohort analysis, an increase in plasma N-terminal pro-B-type natriuretic peptide (NT-proBNP) during the first two years of the trial was associated with incident heart failure. Intensified, multifactorial intervention for 7.8 years in type 2 diabetic individuals with microalbuminuria reduced the risk of hospitalisation for heart failure by 70% during a total of 21.2 years of observation. ClinicalTrials.gov NCT00320008.
Multifactorial discrimination as a fundamental cause of mental health inequities.
Khan, Mariam; Ilcisin, Misja; Saxton, Katherine
2017-03-04
The theory of fundamental causes explains why health disparities persist over time, even as risk factors, mechanisms, and diseases change. Using an intersectional framework, we evaluated multifactorial discrimination as a fundamental cause of mental health disparities. Using baseline data from the Project STRIDE: Stress, Identity, and Mental Health study, we examined the health effects of discrimination among individuals who self-identified as lesbian, gay, or bisexual. We used logistic and linear regression to assess whether multifactorial discrimination met the four criteria designating a fundamental cause, namely that the cause: 1) influences multiple health outcomes, 2) affects multiple risk factors, 3) involves access to resources that can be leveraged to reduce consequences of disease, and 4) reproduces itself in varied contexts through changing mechanisms. Multifactorial discrimination predicted high depression scores, psychological well-being, and substance use disorder diagnosis. Discrimination was positively associated with risk factors for high depression scores: chronic strain and total number of stressful life events. Discrimination was associated with significantly lower levels of mastery and self-esteem, protective factors for depressive symptomatology. Even after controlling for risk factors, discrimination remained a significant predictor for high depression scores. Among subjects with low depression scores, multifactorial discrimination also predicted anxiety and aggregate mental health scores. Multifactorial discrimination should be considered a fundamental cause of mental health inequities and may be an important cause of broad health disparities among populations with intersecting social identities.
A comparison of cosegregation analysis methods for the clinical setting.
Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H
2018-04-01
Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which is unable to generate evidence in favour of a benign classification. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important, as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website (http://www.analyze.myvariant.org) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
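The simplest of the three methods, counting meioses, reduces to a power-of-two likelihood ratio; the sketch below shows that calculation and why, as noted above, it cannot generate evidence for a benign classification. The FLB and CSLR computations are not reproduced here.

```python
# Hedged sketch of the "counting meioses" approach to cosegregation: each
# informative meiosis in which the variant tracks with disease is roughly
# twice as likely under pathogenicity as under chance, so the evidence is
# approximately 2**N. Phenocopies, penetrance and sporadic cases are ignored,
# which is why the FLB and CSLR methods in the paper are preferred.

def counting_meioses_lr(informative_meioses: int) -> float:
    """Likelihood ratio in favour of cosegregation after N informative meioses."""
    return 2.0 ** informative_meioses

for n in (3, 5, 10):
    print(n, counting_meioses_lr(n))
# Note the asymmetry highlighted in the abstract: observed cosegregation can
# only accumulate evidence *for* pathogenicity; this method cannot generate
# evidence that a variant is benign.
```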
Multiple robustness in factorized likelihood models.
Molina, J; Rotnitzky, A; Sued, M; Robins, J M
2017-09-01
We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.
Kaczmarek, Maria; Stawińska-Witoszyńska, Barbara; Krzyżaniak, Alicja; Krzywińska-Wiewiorowska, Małgorzata; Siwińska, Aldona
2015-11-01
In Poland, there are no data on parental socioeconomic status (SES) as a potent risk factor for adolescent elevated blood pressure, although social differences in somatic growth and maturation of children and adolescents have been recorded since the 1980s. This study aimed to evaluate the association between parental SES and the blood pressure levels of their adolescent offspring. A cross-sectional survey was carried out between 2009 and 2010 on a sample of 4941 students (2451 boys and 2490 girls) aged 10-18, participants in the ADOPOLNOR study. The dependent outcome variable was the level of blood pressure (optimal, prehypertension and hypertension) and explanatory variables included place of residence and indicators of parental SES: family size, parental educational attainment and occupation status, income adequacy and family wealth. The final model selected by multiple multinomial logistic regression analysis (MLRA) with a backward elimination procedure revealed a multifactorial dependency of blood pressure levels on maternal educational attainment, paternal occupation and income adequacy, interrelated with the urbanization category of the place of residence, after controlling for family history of hypertension and an adolescent's sex, age and weight status. Consistent rural-to-urban and socioeconomic gradients were found in the prevalence of elevated blood pressure, which increased continuously from large cities through small- and medium-sized cities to villages, and from high-SES to low-SES familial environments. The adjusted likelihood of developing systolic and diastolic hypertension decreased with each step increase in maternal educational attainment and with increasing urbanization category. The likelihood of developing prehypertension decreased with increasing urbanization category, maternal education, paternal employment status and income adequacy. Weight status appeared to be the strongest confounder of adolescent blood pressure level and, at the same time, a mediator between blood pressure and parental SES. The findings of the present study confirmed socioeconomic disparities in blood pressure levels among adolescents. This calls for regularly performed blood pressure assessment and monitoring in the adolescent population. It is recommended to focus on obesity prevention and socioeconomic health inequalities by further trying to improve living and working conditions in adverse rural environments.
Cordell, H J; Todd, J A; Bennett, S T; Kawaguchi, Y; Farrall, M
1995-01-01
To investigate the genetic component of multifactorial diseases such as type 1 (insulin-dependent) diabetes mellitus (IDDM), models involving the joint action of several disease loci are important. These models can give increased power to detect an effect and a greater understanding of etiological mechanisms. Here, we present an extension of the maximum lod score method of N. Risch, which allows the simultaneous detection and modeling of two unlinked disease loci. Genetic constraints on the identical-by-descent sharing probabilities, analogous to the "triangle" restrictions in the single-locus method, are derived, and the size and power of the test statistics are investigated. The method is applied to affected-sib-pair data, and the joint effects of IDDM1 (HLA) and IDDM2 (the INS VNTR) and of IDDM1 and IDDM4 (FGF3-linked) are assessed with relation to the development of IDDM. In the presence of genetic heterogeneity, there is seen to be a significant advantage in analyzing more than one locus simultaneously. Analysis of these families indicates that the effects at IDDM1 and IDDM2 are well described by a multiplicative genetic model, while those at IDDM1 and IDDM4 follow a heterogeneity model. PMID:7573054
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, estimating functions have been widely used for model fitting when likelihood-based approaches are not desirable. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible, however, because of the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied in more general settings than the original quasi-likelihood method.
Bayesian logistic regression approaches to predict incorrect DRG assignment.
Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural
2018-05-07
Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped into the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and with classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best-performing Bayesian model improved overall classification performance by 6% over maximum likelihood and by 34% over random classification. We found that the original DRG, the coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
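A hedged sketch of the central idea, a weakly informative prior stabilizing logistic-regression coefficients, implemented here as a MAP (penalized likelihood) fit rather than the full Bayesian model of the paper; the predictors and prior scale are illustrative.

```python
# Hedged sketch: a Normal(0, 2.5^2) prior on logistic-regression coefficients
# turns ML estimation into penalised (MAP) estimation, which stabilises the
# coefficients when predictors of DRG error are sparse. This is a MAP
# approximation, not the full Bayesian model of the paper.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n, p = 300, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
true_beta = np.array([-1.0, 0.8, 0.0, 1.5])
y = rng.binomial(1, expit(X @ true_beta))

def neg_log_posterior(beta, prior_sd=2.5):
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0, eta))
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return -(loglik + logprior)

map_fit = minimize(neg_log_posterior, x0=np.zeros(p), method="BFGS")
mle_fit = minimize(lambda b: neg_log_posterior(b, prior_sd=1e6),   # ~flat prior
                   x0=np.zeros(p), method="BFGS")
print("MAP:", np.round(map_fit.x, 2))
print("MLE:", np.round(mle_fit.x, 2))   # MAP estimates are shrunk toward zero
```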
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
Valachi, Bethany; Valachi, Keith
2003-12-01
The authors reviewed studies to identify methods for dental operators to use to prevent the development of musculoskeletal disorders, or MSDs. The authors reviewed studies that related to the prevention of MSDs among dental operators. Some studies investigated the relationship between the biomechanics of seated working postures and physiological damage or pain. Other studies suggested that repeated unidirectional twisting of the trunk can lead to low back pain, while yet other studies examined the detrimental effects of working in one position for prolonged periods. Additional studies confirmed the roles that operators' flexibility and core strength can play in balanced musculoskeletal health and the need for operators to know how to properly adjust ergonomic equipment. This review indicates that strategies to prevent the multifactorial problem of dental operators' developing MSDs exist. These strategies address deficiencies in operator position, posture, flexibility, strength and ergonomics. Education and additional research are needed to promote an understanding of the complexity of the problem and to address the problem's multifactorial nature. A comprehensive approach to address the problem of MSDs in dentistry represents a paradigm shift in how operators work. New educational models that incorporate a multifactorial approach can be developed to help dental operators manage and prevent MSDs effectively.
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
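A hedged sketch of the computational trick described above: a high-dimensional integral over correlated normal variables is evaluated as a multivariate normal CDF with a dedicated routine and checked against crude Monte Carlo; the covariance structure and bounds are illustrative, not the paper's model.

```python
# Hedged sketch: a high-dimensional integral over correlated normals is
# rewritten as a multivariate normal CDF, which dedicated (quasi-Monte Carlo)
# routines evaluate far more efficiently than naive simulation.

import numpy as np
from scipy.stats import multivariate_normal

d = 8
rho = 0.5
cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # exchangeable correlation
upper = np.full(d, 0.5)

mvn = multivariate_normal(mean=np.zeros(d), cov=cov)
cdf_value = mvn.cdf(upper)                 # efficient dedicated integration

rng = np.random.default_rng(4)
draws = rng.multivariate_normal(np.zeros(d), cov, size=200_000)
mc_value = np.mean(np.all(draws <= upper, axis=1))    # crude Monte Carlo check

print(f"MVN cdf: {cdf_value:.4f}, Monte Carlo: {mc_value:.4f}")
```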
Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty
Baele, Guy; Lemey, Philippe; Suchard, Marc A.
2016-01-01
Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428
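For orientation, the sketch below shows ordinary (non-generalized) stepping-stone sampling on a toy conjugate normal model where the exact marginal likelihood is available as a check; the "working distribution" refinement of GSS and the genealogical aspects are not shown, and all settings are illustrative.

```python
# Hedged sketch of stepping-stone sampling for a toy conjugate model:
# data y_i ~ N(theta, sigma^2), prior theta ~ N(0, tau^2). Each power
# posterior p(theta) * L(theta)^beta is conjugate, so we can sample it exactly
# and compare the stepping-stone estimate with the analytic marginal likelihood.

import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(5)
sigma, tau, n = 1.0, 2.0, 20
y = rng.normal(0.7, sigma, size=n)

def log_lik(theta):
    return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

betas = np.linspace(0.0, 1.0, 33) ** 3      # concentrate rungs near the prior
M = 5000
log_ml = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    prec = 1.0 / tau**2 + n * b0 / sigma**2             # power-posterior precision
    mean = (b0 * y.sum() / sigma**2) / prec
    theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=M)
    log_w = (b1 - b0) * log_lik(theta)                   # one "stone" per rung
    log_ml += np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

exact = multivariate_normal.logpdf(
    y, mean=np.zeros(n), cov=sigma**2 * np.eye(n) + tau**2 * np.ones((n, n)))
print(f"stepping-stone: {log_ml:.3f}, exact: {exact:.3f}")
```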
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of tests errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
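A hedged sketch of the general idea of model-based "adjusted" likelihood ratios: because post-test odds equal pre-test odds times the LR, a fitted logistic regression (here with an interaction term to capture dependence between findings) yields an adjusted LR for a combination of findings. This illustrates the principle rather than Albert's exact offset formulation; all variables and coefficients are invented.

```python
# Hedged sketch: derive an "adjusted" likelihood ratio for a combination of
# findings from a logistic regression, using posterior odds = prior odds * LR.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
wheeze = rng.binomial(1, 0.4, n).astype(float)
smoker = rng.binomial(1, 0.3, n).astype(float)
logit = -2.0 + 1.2 * wheeze + 0.9 * smoker + 0.6 * wheeze * smoker  # dependent tests
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([wheeze, smoker, wheeze * smoker]))
fit = sm.Logit(disease, X).fit(disp=0)

pretest = disease.mean()
pre_odds = pretest / (1 - pretest)
post = fit.predict(np.array([[1.0, 1.0, 1.0, 1.0]]))[0]   # both findings present
adjusted_lr = (post / (1 - post)) / pre_odds
print(f"adjusted LR for wheeze + smoker = {adjusted_lr:.2f}")
```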
Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.
ERIC Educational Resources Information Center
McNeill, Brian W.; Stoltenberg, Cal D.
1989-01-01
Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
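A hedged and highly stylized sketch of the missing-class-membership device described above: individuals of unknown sex contribute a mixture over sexes weighted by the sex ratio psi, which is estimated jointly with the sex-specific detection parameters. The full SCR encounter model (activity centres, distance effects, behavioural response) is omitted, and all quantities are simulated.

```python
# Hedged, stylised sketch: likelihood with missing sex handled as a mixture.
# All marked individuals are retained regardless of detection in this toy
# setup, so no conditioning on detection is applied (unlike a real SCR model).

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom

rng = np.random.default_rng(7)
K = 10                                   # sampling occasions
true_p = {"F": 0.15, "M": 0.35}          # sex-specific detection probabilities
true_psi = 0.6                           # proportion female
sex = rng.choice(["F", "M"], size=200, p=[true_psi, 1 - true_psi])
counts = np.array([rng.binomial(K, true_p[s]) for s in sex])
known = rng.uniform(size=200) < 0.4      # sex identified for ~40% of individuals

def neg_loglik(params):
    pF, pM, psi = expit(params)          # keep all parameters in (0, 1)
    lf = binom.pmf(counts, K, pF)        # likelihood if female
    lm = binom.pmf(counts, K, pM)        # likelihood if male
    ll_known = np.where(sex == "F", np.log(psi * lf), np.log((1 - psi) * lm))
    ll_unknown = np.log(psi * lf + (1 - psi) * lm)   # mixture for unknown sex
    return -np.sum(np.where(known, ll_known, ll_unknown))

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print("pF, pM, psi =", np.round(expit(fit.x), 2))
```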
2000-06-01
A multifactorial model was used to identify child, sociodemographic, paternal, and maternal characteristics associated with 2 aspects of fathers' parenting. Fathers were interviewed about their caregiving responsibilities at 6, 15, 24, and 36 months, and a subset was videotaped during father-child play at 6 and 36 months. Caregiving activities and sensitivity during play interactions were predicted by different factors. Fathers were more involved in caregiving when fathers worked fewer hours and mothers worked more hours, when fathers and mothers were younger, when fathers had more positive personalities, when mothers reported greater marital intimacy, and when children were boys. Fathers who had less traditional child-rearing beliefs, were older, and reported more marital intimacy were more sensitive during play. These findings are consistent with a multifactorial and multidimensional view of fathering.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio, the within-source distribution is assumed to be normally distributed and constant among different sources and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
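A hedged sketch of the modelling choice at issue: the between-source distribution of per-source mean feature vectors is fitted once with a kernel density estimate and once with a Gaussian mixture, and the two background densities are compared at a new trace. The full two-level likelihood-ratio computation and Cllr calibration are not reproduced; the data are synthetic.

```python
# Hedged sketch: compare a KDE and a GMM as models of the between-source
# distribution of feature vectors. A better-fitting background model is the
# ingredient that, per the paper, yields better-calibrated likelihood ratios.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(8)
# Per-source mean feature vectors drawn from two clusters (e.g. two ink types).
source_means = np.vstack([rng.normal([0, 0], 0.5, size=(60, 2)),
                          rng.normal([3, 1], 0.4, size=(40, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(source_means)
kde = KernelDensity(bandwidth=0.5).fit(source_means)

trace = np.array([[2.8, 1.1]])           # features measured on the questioned item
print("GMM log-density:", gmm.score_samples(trace)[0])
print("KDE log-density:", kde.score_samples(trace)[0])
```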
Alimohammadi, Mona; Pichardo-Almarza, Cesar; Agu, Obiekezie; Díaz-Zuccarini, Vanessa
2017-01-01
Atherogenesis, the formation of plaques in the wall of blood vessels, starts as a result of lipid accumulation (low-density lipoprotein cholesterol) in the vessel wall. Such accumulation is related to the site of endothelial mechanotransduction, the endothelial response to mechanical stimuli and haemodynamics, which determines biochemical processes regulating the vessel wall permeability. This interaction between biomechanical and biochemical phenomena is complex, spanning different biological scales and is patient-specific, requiring tools able to capture such mathematical and biological complexity in a unified framework. Mathematical models offer an elegant and efficient way of doing this, by taking into account multifactorial and multiscale processes and mechanisms, in order to capture the fundamentals of plaque formation in individual patients. In this study, a mathematical model to understand plaque and calcification locations is presented: this model provides a strong interpretability and physical meaning through a multiscale, complex index or metric (the penetration site of low-density lipoprotein cholesterol, expressed as volumetric flux). Computed tomography scans of the aortic bifurcation and iliac arteries are analysed and compared with the results of the multifactorial model. The results indicate that the model shows potential to predict the majority of the plaque locations, also not predicting regions where plaques are absent. The promising results from this case study provide a proof of concept that can be applied to a larger patient population. PMID:28427316
The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.
ERIC Educational Resources Information Center
Blackwood, Larry G.; Bradley, Edwin L.
1989-01-01
Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the nonlinear least squares estimator (NLSE), the maximum likelihood estimator (MLE) and a linear pseudo-model for a nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo-model for a nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting a nonlinear regression function in 2006. In Jae Myung [13] provided a good conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
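A hedged numerical sketch (not the matrix-calculus derivation of the paper) of the familiar fact that, under Gaussian errors, the NLSE coincides with the MLE for the regression parameters, using scipy's curve_fit; the model and data are illustrative.

```python
# Hedged sketch: for Gaussian errors, nonlinear least squares and maximum
# likelihood give the same regression-parameter estimates.

import numpy as np
from scipy.optimize import curve_fit, minimize

def model(x, a, b):
    return a * np.exp(b * x)             # example nonlinear regression function

rng = np.random.default_rng(9)
x = np.linspace(0, 2, 50)
y = model(x, 2.0, 1.3) + rng.normal(0, 0.5, size=x.size)

nlse, _ = curve_fit(model, x, y, p0=[1.0, 1.0])

def neg_loglik(params):                  # Gaussian likelihood, unknown variance
    a, b, log_s = params
    r = y - model(x, a, b)
    return 0.5 * np.sum(r ** 2) / np.exp(2 * log_s) + x.size * log_s

mle = minimize(neg_loglik, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
print("NLSE:", np.round(nlse, 3), " MLE:", np.round(mle.x[:2], 3))   # (a, b) agree
```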
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, the existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language, and also examined how the subjective choices of likelihood formulation and acceptability threshold in GLUE influence model outputs. The results indicated the following. (1) Parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better able to accentuate high-likelihood simulations than the exponential function (L2). (3) The combined likelihood, integrating the criteria of multiple outputs, performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths while assuring the goodness of fit of all model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance the competing interests of high modeling efficiency and high bracketing efficiency. The results of this study could provide (1) an option for conducting NPS modeling within a single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for applying the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
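A generic sketch of the GLUE procedure, not the NPS P indicator itself: sample parameter sets, score them with a likelihood measure such as Nash-Sutcliffe efficiency, keep "behavioural" sets above an acceptability threshold (0.55 below, echoing the paper), and summarize output uncertainty across the retained runs; the toy model and observations are invented.

```python
# Hedged sketch of GLUE with a trivial linear stand-in model. The full method
# weights behavioural runs by their likelihood; simple percentiles across
# behavioural runs are shown here for brevity.

import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(0, 10, 50)
obs = 2.0 * x + 5.0 + rng.normal(0, 2.0, size=x.size)    # stand-in "observations"

def simulate(a, b):
    return a * x + b                                      # stand-in model

samples = rng.uniform([0.0, 0.0], [5.0, 10.0], size=(20_000, 2))
sims = np.array([simulate(a, b) for a, b in samples])
nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

threshold = 0.55                                          # acceptability threshold
behavioural = nse > threshold

lower, upper = np.percentile(sims[behavioural], [5, 95], axis=0)
print(f"{behavioural.sum()} behavioural sets, mean 5-95% band width "
      f"{np.mean(upper - lower):.2f}")
print(f"behavioural 'a' range: {samples[behavioural, 0].min():.2f}"
      f"-{samples[behavioural, 0].max():.2f}")            # reduced parameter range
```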
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
A Bayesian Alternative for Multi-objective Ecohydrological Model Specification
NASA Astrophysics Data System (ADS)
Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.
2015-12-01
Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the dis(similarity) between different priors and corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
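A hedged sketch of the empirical likelihood-ratio idea for a single categorical predictor (here a synthetic bedrock lithology): the ratio of a class's frequency among landslide cells to its overall frequency; under conditional independence, such per-variable ratios are combined cell by cell. The logistic discriminant alternative is not shown, and all data are simulated.

```python
# Hedged sketch: empirical likelihood ratios for one categorical predictor.
# Under conditional independence, per-variable ratios like this are multiplied
# cell by cell to obtain a relative hazard map.

import numpy as np

rng = np.random.default_rng(11)
lithology = rng.choice(["shale", "limestone", "sandstone"], size=10_000,
                       p=[0.3, 0.5, 0.2])
# Synthetic landslide occurrence, more likely on shale.
p_slide = np.where(lithology == "shale", 0.10, 0.02)
landslide = rng.uniform(size=lithology.size) < p_slide

def likelihood_ratios(classes, values, events):
    ratios = {}
    for c in classes:
        in_class = values == c
        # frequency of the class among landslide cells / overall frequency
        ratios[c] = in_class[events].mean() / in_class.mean()
    return ratios

lrs = likelihood_ratios(["shale", "limestone", "sandstone"], lithology, landslide)
print({k: round(v, 2) for k, v in lrs.items()})   # shale should be well above 1
```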
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Models of comorbidity for multifactorial disorders.
Neale, M C; Kendler, K S
1995-01-01
We develop several formal models for comorbidity between multifactorial disorders. Based on the work of D. N. Klein and L. P. Riso, the models include (i) alternate forms, where the two disorders have the same underlying continuum of liability; (ii) random multiformity, in which affection status on one disorder abruptly increases risk for the second; (iii) extreme multiformity, where only extreme cases have an abruptly increased risk for the second disorder; (iv) three independent disorders, in which excess comorbid cases are due to a separate, third disorder; (v) correlated liabilities, where the risk factors for the two disorders correlate; and (vi) direct causal models, where the liability for one disorder is a cause of the other disorder. These models are used to make quantitative predictions about the relative proportions of pairs of relatives who are classified according to whether each relative has neither disorder, disorder A but not B, disorder B but not A, or both A and B. For illustration, we analyze data on major depression (MD) and generalized anxiety disorder (GAD) assessed in adult female MZ and DZ twins, which enable estimation of the relative impact of genetic and environmental factors. Several models are rejected--that comorbid cases are due to chance; multiformity of GAD; a third independent disorder; and GAD being a cause of MD. Of the models that fit the data, correlated liabilities, MD causes GAD, and reciprocal causation seem best. MD appears to be a source of liability for GAD. Possible extensions to the models are discussed. PMID:7573055
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participants’ personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
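A hedged, in-process sketch of the distributed-likelihood idea: each participant's "device" evaluates the log-likelihood of its own private data for a proposed parameter vector and returns only that scalar, which the central optimizer sums and iterates on; the simple normal model and simulated data are illustrative.

```python
# Hedged sketch of distributed likelihood estimation: only scalar likelihood
# values are "transmitted" back to the central optimiser, never the raw data.
# Here the participant devices are simulated in-process.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(12)
# Private data: each participant holds a few observations that never leave the device.
private_data = [rng.normal(1.8, 0.9, size=rng.integers(5, 15)) for _ in range(50)]

def local_log_likelihood(data, params):
    mu, log_sd = params
    return norm.logpdf(data, loc=mu, scale=np.exp(log_sd)).sum()

def central_objective(params):
    # The optimiser only ever sees the sum of locally computed log-likelihoods.
    return -sum(local_log_likelihood(d, params) for d in private_data)

fit = minimize(central_objective, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print("estimated mean and sd:", round(fit.x[0], 2), round(float(np.exp(fit.x[1])), 2))
```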
USDA-ARS's Scientific Manuscript database
Bovine mastitis is an inflammation-driven disease of the bovine mammary gland that costs the global dairy industry several billion dollars per annum. Because disease susceptibility is a multi-factorial complex phenotype, a multi-omic integrative biology approach is required to dissect the multilayer...
Møller, Jens K S; Jakobsen, Marianne; Weber, Claus J; Martinussen, Torben; Skibsted, Leif H; Bertelsen, Grete
2003-02-01
A multifactorial design, including (1) percent residual oxygen, (2) oxygen transmission rate of packaging film (OTR), (3) product to headspace volume ratio, (4) illuminance level and (5) nitrite level during curing, was established to investigate factors affecting light-induced oxidative discoloration of cured ham (packaged in modified atmosphere of 20% carbon dioxide and balanced with nitrogen) during 14 days of chill storage. Univariate statistical analysis found significant effects of all main factors on the redness (tristimulus a-value) of the ham. Subsequently, Response Surface Modelling of the data further proved that the interactions between packaging and storage conditions are important when optimising colour stability. The measured content of oxygen in the headspace was incorporated in the model and the interaction between measured oxygen content in the headspace and the product to headspace volume ratio was found to be crucial. Thus, it is not enough to keep the headspace oxygen level low; if the headspace volume is large at the same time, there will still be sufficient oxygen for colour-deteriorating processes to take place.
Kamysheva, Ekaterina; Skouteris, Helen; Wertheim, Eleanor H; Paxton, Susan J; Milgrom, Jeannette
2008-06-01
The aim of this cross-sectional study was to investigate relationships among women's body attitudes, physical symptoms, self-esteem, depression, and sleep quality during pregnancy. Pregnant women (N=215) at 15-25 weeks gestation completed a questionnaire including four body image subscales assessing self-reported feeling fat, attractiveness, strength/fitness, and salience of weight and shape. Women reported on 29 pregnancy-related physical complaints, and completed the Beck Depression Inventory, Rosenberg Self-esteem Scale, and Pittsburgh Sleep Quality Index. In regressions, controlling for retrospective reports of body image, more frequent and intense physical symptoms were related to viewing the self as less strong/fit, and to poorer sleep quality and more depressive symptoms. In a multi-factorial model extending previous research, paths were found from sleep quality to depressive symptoms to self-esteem; self-esteem was found to be a mediator associated with lower scores on feeling fat and salience of weight and shape, and with higher perceived attractiveness.
The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.
ERIC Educational Resources Information Center
Baldwin, Beatrice
The robustness of maximum likelihood estimates produced by the LISREL computer program under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identically distributed data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap-time dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer generated data.
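As a rough illustration of the second, two-stage procedure described above (estimate the margins first, then the dependence parameter with the margins held fixed), the sketch below uses a Clayton copula with exponential margins on complete, uncensored simulated data. The copula family, margins, and parameter values are illustrative assumptions; the dissertation itself also treats censoring and other copula families, which this toy example omits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, theta_true = 2000, 2.0

# Simulate Clayton-dependent uniforms by conditional inversion,
# then transform to exponential margins with rates 0.5 and 1.5.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
x, y = -np.log(1 - u) / 0.5, -np.log(1 - v) / 1.5

# Stage 1: marginal MLEs (exponential rate = 1 / sample mean).
rate_x, rate_y = 1 / x.mean(), 1 / y.mean()
u_hat, v_hat = 1 - np.exp(-rate_x * x), 1 - np.exp(-rate_y * y)

# Stage 2: maximize the Clayton copula log-density with margins held fixed.
def neg_copula_loglik(theta):
    s = u_hat ** (-theta) + v_hat ** (-theta) - 1
    logc = (np.log1p(theta)
            - (theta + 1) * (np.log(u_hat) + np.log(v_hat))
            - (2 + 1 / theta) * np.log(s))
    return -logc.sum()

theta_hat = minimize_scalar(neg_copula_loglik, bounds=(1e-3, 20), method="bounded").x
print(f"rate_x={rate_x:.2f}, rate_y={rate_y:.2f}, theta={theta_hat:.2f}")
```

The full maximum likelihood procedure would instead maximize the joint likelihood over the marginal and dependence parameters simultaneously, and the partially parametric variant would replace stage 1 with Kaplan-Meier estimates of the margins.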
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
ERIC Educational Resources Information Center
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.
2013-01-01
An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.
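The modelling strategy in this study (random forests mapping human-disturbance variables to the probability that a hydrologic metric is altered) can be sketched as follows. All variable names and the synthetic data below are hypothetical; they only mimic the structure of the analysis, not the authors' dataset or fitted models.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1500

# Hypothetical human-disturbance predictors for gauged basins.
X = pd.DataFrame({
    "dam_storage_days": rng.gamma(2.0, 30.0, n),   # residence time of runoff in reservoirs
    "road_density":     rng.gamma(2.0, 1.0, n),
    "pct_agriculture":  rng.uniform(0, 80, n),
    "pct_urban":        rng.uniform(0, 60, n),
})
# Hypothetical label: 1 if the observed hydrologic metric is 'diminished'
# relative to the expected natural condition at that gauge.
logit = 0.02 * X.dam_storage_days + 0.4 * X.road_density + 0.02 * X.pct_agriculture - 3
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, random_state=0)
print("CV ROC-AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())

# Fitted on gauged sites, the model yields a likelihood of alteration for ungauged ones.
rf.fit(X, y)
ungauged = pd.DataFrame({"dam_storage_days": [5.0, 200.0], "road_density": [0.3, 4.0],
                         "pct_agriculture": [5.0, 55.0], "pct_urban": [1.0, 30.0]})
print("P(diminished):", rf.predict_proba(ungauged)[:, 1])
```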
Clavel, Julien; Aristide, Leandro; Morlon, Hélène
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p small n scenario but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in New World monkeys. We find clear support for an Early-burst model suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
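The penalized-likelihood idea for trait covariance matrices can be illustrated, without the phylogenetic machinery of RPANDA or mvMORPH, by ridge-regularizing a Gaussian log-likelihood and choosing the penalty by cross-validation. The sketch below is a simplified analogue under the assumption of independent observations (it ignores phylogenetic correlation entirely), so it only conveys why shrinkage helps when p approaches n, not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n, p = 40, 30                                  # few "species", many traits
A = rng.normal(size=(p, p))
true_cov = A @ A.T / p + np.eye(p)
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

def ridge_cov(X, lam):
    """Shrink the sample covariance toward its diagonal (a simple ridge-type penalty)."""
    S = np.cov(X, rowvar=False, bias=True)
    return (1 - lam) * S + lam * np.diag(np.diag(S))

folds = np.array_split(rng.permutation(n), 5)

def heldout_loglik(lam):
    """Cross-validated Gaussian log-likelihood used to tune the penalty intensity."""
    total = 0.0
    for f in folds:
        train = np.setdiff1d(np.arange(n), f)
        Sigma = ridge_cov(X[train], lam)
        total += multivariate_normal(np.zeros(p), Sigma).logpdf(X[f]).sum()
    return total

lams = np.linspace(0.05, 0.95, 19)
best = max(lams, key=heldout_loglik)
print("selected shrinkage intensity:", best)
```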
2016-10-01
prediction models will vary by age and sex. Hypothesis 3: A multi-factorial prediction model that accurately predicts risk of new and recurring injuries... members for injury risk after they have been cleared to return to duty from an injury is of great importance. The purpose of this project is to determine... It turns out that many patients are not formally discharged from rehabilitation. Many of them “self-discharge” and just stop coming back, either
Gittner, LisaAnn S; Kilbourne, Barbara J; Vadapalli, Ravi; Khan, Hafiz M K; Langston, Michael A
Obesity is both multifactorial and multimodal, making it difficult to identify, unravel and distinguish causative and contributing factors. The lack of a clear model of aetiology hampers the design and evaluation of interventions to prevent and reduce obesity. Using modern graph-theoretical algorithms, we are able to coalesce and analyse thousands of inter-dependent variables and interpret their putative relationships to obesity. Our modelling is different from traditional approaches; we make no a priori assumptions about the population, and model instead based on the actual characteristics of a population. Paracliques, noise-resistant collections of highly-correlated variables, are differentially distilled from data taken over counties associated with low versus high obesity rates. Factor analysis is then applied and a model is developed. Latent variables concentrated around social deprivation, community infrastructure and climate, and especially heat stress were connected to obesity. Infrastructure, environment and community organisation differed in counties with low versus high obesity rates. Clear connections of community infrastructure with obesity in our results lead us to conclude that community level interventions are critical. This effort suggests that it might be useful to study and plan interventions around community organisation and structure, rather than just the individual, to combat the nation's obesity epidemic.
2013-01-01
Background: To evaluate the effectiveness of a new multifactorial intervention to improve health care for chronic ischemic heart disease patients in primary care. The strategy has two components: a) organizational for the patient/professional relationship and b) training for professionals. Methods/design: Experimental study. Randomized clinical trial. Follow-up period: one year. Study setting: primary care, multicenter (15 health centers). For the intervention group, 15 health centers are selected from those participating in the ESCARVAL study. Once a center agrees to participate, patients are randomly selected from the total number of patients with ischemic heart disease registered in the electronic health records. For the control group, a random sample of patients with ischemic heart disease is selected from the electronic records of all 72 health centers. Intervention components: a) Organizational intervention on the patient/professional relationship, centered on the Chronic Care Model, the Stanford Expert Patient Program and the Kaiser Permanente model: teamwork, informed and active patient, decision making shared with the patient, recommendations based on clinical guidelines, single electronic medical history per patient that allows the use of indicators for risk monitoring and stratification. b) Formative strategy for professionals: 4 face-to-face training workshops (one every 3 months), monthly update clinical sessions, online tutorial by a cardiologist, availability through the intranet of the action protocol and related documents. Measurements: blood pressure, blood glucose, HbA1c, lipid profile and smoking; frequent health care visits; number of hospitalizations related to vascular disease; therapeutic compliance; drug use. Discussion: This study aims to evaluate the efficacy of a multifactorial intervention strategy involving patients with ischemic heart disease for the improvement of the degree of control of the cardiovascular risk factors and of the quality of life, number of visits, and number of hospitalizations. Trial registration: NCT01826929 PMID:23915267
Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key
Batchelder, William H.
2014-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
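For the criterion-based branch of these techniques, the bookkeeping is simple: each model's information criterion is converted into a weight via exp(-ΔIC/2) and normalized across the candidate models. The sketch below uses made-up criterion values purely to show the computation; it does not reproduce the groundwater case studies in the paper.

```python
import numpy as np

def ic_weights(ic_values):
    """Convert information-criterion values (AIC, BIC, or KIC) into model weights."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()                 # differences from the best-scoring model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical criterion values for four alternative conceptual models.
aic = [212.4, 214.9, 219.1, 213.0]
weights = ic_weights(aic)
print(dict(zip(["M1", "M2", "M3", "M4"], weights.round(3))))

# A model-averaged prediction is the weight-weighted sum of the individual predictions.
predictions = np.array([3.1, 2.8, 3.6, 3.0])    # hypothetical per-model predictions
print("averaged prediction:", float(weights @ predictions))
```

Monte Carlo-based schemes such as GLUE replace these criterion-derived weights with likelihood measures evaluated over sampled parameter sets, but the averaging step is analogous.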
Likelihood ratio-based integrated personal risk assessment of type 2 diabetes.
Sato, Noriko; Htun, Nay Chi; Daimon, Makoto; Tamiya, Gen; Kato, Takeo; Kubota, Isao; Ueno, Yoshiyuki; Yamashita, Hidetoshi; Fukao, Akira; Kayama, Takamasa; Muramatsu, Masaaki
2014-01-01
To facilitate personalized health care for multifactorial diseases, risks of genetic and clinical/environmental factors should be assessed together for each individual in an integrated fashion. This approach is possible with the likelihood ratio (LR)-based risk assessment system, as this system can incorporate manifold tests. We examined the usefulness of this system for assessing type 2 diabetes (T2D). Our system employed 29 genetic susceptibility variants, body mass index (BMI), and hypertension as risk factors whose LRs can be estimated from openly available T2D association data for the Japanese population. The pretest probability was set at a sex- and age-appropriate population average of diabetes prevalence. The classification performance of our LR-based risk assessment was compared to that of a non-invasive screening test for diabetes called TOPICS (with a score based on age, sex, family history, smoking, BMI, and hypertension) using receiver operating characteristic analysis with a community cohort (n = 1263). The area under the receiver operating characteristic curve (AUC) for the LR-based assessment and TOPICS was 0.707 (95% CI 0.665-0.750) and 0.719 (0.675-0.762), respectively. These AUCs were much higher than that of a genetic risk score constructed using the same genetic susceptibility variants, 0.624 (0.574-0.674). The use of ethnically matched LRs is necessary for proper personal risk assessment. In conclusion, although LR-based integrated risk assessment for T2D still requires additional tests that evaluate other factors, such as risks involved in missing heritability, our results indicate the potential usability of the LR-based assessment system and stress the importance of stratified epidemiological investigations in personalized medicine.
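The arithmetic behind an LR-based integrated assessment is compact: convert the pretest probability to odds, multiply by the likelihood ratio of each (assumed conditionally independent) risk factor, and convert back to a probability. The numbers below are invented solely to show the mechanics; they are not the LRs estimated in the study.

```python
def posttest_probability(pretest_prob, likelihood_ratios):
    """Combine a pretest probability with a set of likelihood ratios."""
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:          # assumes conditionally independent factors
        odds *= lr
    return odds / (1 + odds)

# Hypothetical example: population prevalence 8%, one risk genotype (LR 1.3),
# elevated BMI (LR 1.8), hypertension (LR 1.5).
p = posttest_probability(0.08, [1.3, 1.8, 1.5])
print(f"integrated risk: {p:.1%}")        # roughly 23% under these made-up LRs
```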
ERIC Educational Resources Information Center
Levy, Roy
2010-01-01
SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling, is described. The package is written in R and freely available for download or on request.
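SEMModComp itself is an R package, but the likelihood ratio test it automates is easy to state in any language: twice the difference in maximized log-likelihoods of two nested models is referred to a chi-square distribution with degrees of freedom equal to the difference in the number of free parameters. A minimal Python sketch with hypothetical log-likelihood values:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_restricted, loglik_full, df_diff):
    """Classic chi-square likelihood ratio test for nested models."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, chi2.sf(stat, df_diff)

# Hypothetical fitted values: the full covariance-structure model has 4 more free parameters.
stat, p = likelihood_ratio_test(loglik_restricted=-1520.7, loglik_full=-1512.3, df_diff=4)
print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")
```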
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
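The comparison can be reproduced in miniature for a single Gaussian predictive distribution: the closed-form CRPS of a normal forecast is minimized and contrasted with the maximum likelihood (minimum negative log-likelihood) fit. This is a sketch under the assumption of correctly specified Gaussian data, where the two criteria should roughly agree; it omits the regression structure and covariates of a real post-processing model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.5, size=300)          # observations; assumed Gaussian here

def mean_crps_normal(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (y - mu) / sigma
    # Closed-form CRPS of a N(mu, sigma^2) forecast (Gneiting & Raftery 2007).
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

def neg_loglik(params):
    mu, log_sigma = params
    return -norm.logpdf(y, mu, np.exp(log_sigma)).sum()

fit_crps = minimize(mean_crps_normal, x0=[0.0, 0.0], method="Nelder-Mead")
fit_ml = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("min-CRPS fit:", fit_crps.x[0], np.exp(fit_crps.x[1]))
print("max-lik  fit:", fit_ml.x[0], np.exp(fit_ml.x[1]))
```

Under a misspecified distributional assumption the two fits diverge, which is the effect the study investigates.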
Load Carriage Capacity of the Dismounted Combatant - A Commanders’ Guide
2012-10-01
predictive model has been used throughout this document to predict the physiological burden (i.e. energy cost) of representative load carriage... scenarios. As a general guide this model indicates that a 10 kg increase in external load is metabolically equivalent (i.e. energy cost) to an increase... larger increases in energy cost for a load carriage task. The multi-factorial nature of human load carriage capacity makes it difficult to set
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
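The key ingredient, a parametric likelihood approximation built directly from stochastic model runs, can be sketched independently of FORMIND. Below, a hypothetical stand-in simulator replaces the forest model; for a proposed parameter vector, repeated simulations yield summary statistics to which a multivariate Gaussian is fitted, and the observed summaries are evaluated under that Gaussian. This is a generic sketch in the spirit of simulation-based (synthetic) likelihoods, not the paper's exact formulation; the simulator, summaries, and parameter values are all assumptions for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

def simulator(theta, n_steps=200):
    """Hypothetical stand-in for a stochastic process-based model (noisy logistic growth)."""
    growth, noise = theta
    x = np.empty(n_steps)
    x[0] = 50.0
    for t in range(1, n_steps):
        x[t] = x[t - 1] + growth * x[t - 1] * (1 - x[t - 1] / 100) + rng.normal(0, noise)
    return x

def summaries(x):
    return np.array([x.mean(), x.std(), x[-1]])

def approx_log_likelihood(theta, obs_summaries, n_sims=200):
    """Fit a Gaussian to simulated summaries and evaluate the observed ones under it."""
    sims = np.array([summaries(simulator(theta)) for _ in range(n_sims)])
    mean, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])          # numerical jitter
    return multivariate_normal(mean, cov).logpdf(obs_summaries)

obs = summaries(simulator((0.08, 2.0)))          # pretend these are field data
for theta in [(0.05, 2.0), (0.08, 2.0), (0.12, 2.0)]:
    print(theta, approx_log_likelihood(theta, obs))
```

An approximate log-likelihood of this kind can then be placed inside a conventional MCMC sampler to obtain posterior parameter distributions, which is the role it plays in the study.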
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability that observations deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences between this approach and approximate Bayesian computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2005-01-01
In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…
Does the Use of Multifactorial Training Methods Increase Practitioners' Competence?
ERIC Educational Resources Information Center
Pittman, Corinthus Omari; Lawdis, Katina
2017-01-01
Skilled therapy practitioners are required by their governing associations to seek professional development per licensure requirements. These requirements facilitate clinical reasoning and confidence during patient care. There are limited online professional development workshops, especially ones that offer multifactorial training as an…
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein
2012-01-01
Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
Likelihoods for fixed rank nomination networks
HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE
2014-01-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Effects of smoking abstinence on impulsive behavior among smokers high and low in ADHD-like symptoms
Hawk, Larry W.
2011-01-01
Rationale Impulsivity, a multifaceted construct that includes inhibitory control and heightened preference for immediate reward, is central to models of drug use and abuse. Within a self-medication framework, abstinence from smoking may lead to an increase in impulsive behavior and the likelihood of relapse, particularly among persons with disorders (e.g., attention-deficit/hyperactivity disorder, ADHD) and personality traits (e.g., impulsivity) linked to impulsive behavior. Objectives This study aimed to examine the effects of smoking abstinence on multiple measures of impulsivity among a non-clinical sample of adult smokers selected for high and low levels of ADHD symptoms. Methods In a within-subjects design, participants selected for high or low levels of self-reported ADHD symptoms (N=56) completed sessions following overnight abstinence and when smoking as usual (order counterbalanced). Measures of impulsive behavior included response inhibition (i.e., stop signal task), interference control (i.e., attentional modification of prepulse inhibition (PPI) of startle), and impulsive choice (i.e., hypothetical delay discounting). Results As hypothesized, abstinence decreased response inhibition and PPI. Although ADHD symptoms moderated abstinence effects on impulsive choice and response inhibition, the pattern was opposite to our predictions: the low-ADHD group responded more impulsively when abstinent, whereas the high-ADHD group was relatively unaffected by abstinence. Conclusions These findings highlight the importance of utilizing multiple laboratory measures to examine a multifactorial construct such as impulsive behavior and raise questions about how best to assess symptoms of ADHD and impulsivity among non-abstinent smokers. PMID:21559802
USDA-ARS's Scientific Manuscript database
In recent years, increased awareness of the potential interactions between rising atmospheric CO2 concentrations ([CO2]) and temperature has illustrated the importance of multi-factorial ecosystem manipulation experiments for validating Earth System models. To address the urgent need for increased u...
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
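A rough reading of the propagator-likelihood idea, as described in the abstract, is that the empirical distribution over sampled configurations is propagated one step forward under the candidate dynamics, and the samples are then scored against the propagated distribution. The toy sketch below applies that reading to a two-state Markov chain with a single unknown parameter; it is an illustration of the general idea under simplifying assumptions, not the authors' estimator for the ASEP, kinetic Ising model, or replicator dynamics.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

def transition_matrix(a):
    # Two-state chain: the switching probability from state 0 is the unknown parameter a.
    return np.array([[1 - a, a],
                     [0.3,   0.7]])

# Draw independent samples from the true steady state.
a_true = 0.2
T = transition_matrix(a_true)
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
samples = rng.choice(2, size=5000, p=pi)

def neg_propagator_loglik(a):
    T = transition_matrix(a)
    p_emp = np.bincount(samples, minlength=2) / samples.size
    propagated = p_emp @ T                       # empirical distribution pushed one step forward
    return -np.log(propagated[samples]).sum()    # score each sample under the propagated distribution

a_hat = minimize_scalar(neg_propagator_loglik, bounds=(0.01, 0.99), method="bounded").x
print(f"true a = {a_true}, estimated a = {a_hat:.3f}")
```

The maximum is attained where propagating the empirical distribution leaves it (approximately) unchanged, i.e. where the candidate dynamics have the observed samples as their steady state.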
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. 2) Based on the results from a didactical example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
ERIC Educational Resources Information Center
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
ERIC Educational Resources Information Center
Petty, Richard E.; And Others
1987-01-01
Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…
Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key
ERIC Educational Resources Information Center
France, Stephen L.; Batchelder, William H.
2015-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
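Outside of any particular IRT software, the profile-likelihood construction itself is short: fix the parameter of interest on a grid, maximize the likelihood over the remaining (nuisance) parameters at each grid value, and keep the values whose log-likelihood drop from the overall maximum is below half the chi-square critical value. The sketch below applies this to the shape parameter of a gamma model, chosen only to keep the code tiny; it is not an IRT example.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma, chi2

rng = np.random.default_rng(7)
x = rng.gamma(shape=2.5, scale=1.2, size=200)

def profile_negloglik(shape):
    """Maximize over the nuisance scale parameter for a fixed shape."""
    res = minimize_scalar(lambda s: -gamma.logpdf(x, a=shape, scale=s).sum(),
                          bounds=(1e-3, 50), method="bounded")
    return res.fun

grid = np.linspace(1.2, 4.5, 200)
prof = np.array([profile_negloglik(a) for a in grid])
mle_negloglik = prof.min()

# 95% profile-likelihood CI: shapes whose log-likelihood drop stays below chi2(1)/2.
cutoff = mle_negloglik + 0.5 * chi2.ppf(0.95, df=1)
inside = grid[prof <= cutoff]
print(f"shape MLE = {grid[prof.argmin()]:.2f}, "
      f"95% PL CI = ({inside.min():.2f}, {inside.max():.2f})")
```

Because the interval follows the curvature and asymmetry of the likelihood itself, it need not be symmetric around the point estimate, which is the main practical distinction from Wald-type CIs discussed in the article.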
Floros, J; Wang, G
2001-05-01
The high degree of similarity at the molecular level between humans and other species has provided the rationale for the use of a variety of species as model systems in research, resulting in enormous advances in biological sciences and medicine. In contrast, the individual variability observed among humans, for example, in external physique, organ functionality and others, is accounted for by only a fraction of 1% of differences at the DNA level. These small differences, which are essential for understanding disease pathogenesis, have posed enormous challenges in medicine, as we try to understand why patients may respond differently to drugs or why one patient has complications and another does not. Differences in outcome are most likely the result of interactions among genetic components themselves and/or the environment at the molecular, cellular, organ, or organismal level, or the macroenvironment. In this paper: (1) we consider some issues for multifactorial disease pathogenesis; (2) we provide a review of human SP-A and how the knowledge gained and the characteristics of the hSP-A system may serve as a model in the study of disease with multifactorial etiology; and (3) we describe examples where hSP-A has been used in the study of disease.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML)... accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a... to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratios (SNRs) increase. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
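For a conjugate toy problem, the thermodynamic-integration (path sampling) identity, log evidence = ∫₀¹ E_β[log likelihood] dβ with the expectation taken under the power posterior ∝ likelihood^β × prior, can be checked against the exact answer. The sketch below assumes a normal model with known variance and a normal prior, so each power posterior can be sampled directly; realistic applications would replace the direct sampling with MCMC, as in the paper. All parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(8)
sigma, mu0, tau0 = 1.0, 0.0, 2.0                 # known data sd, prior mean and sd
y = rng.normal(1.3, sigma, size=25)
n = y.size

def expected_loglik(beta, n_draws=5000):
    """E[log p(y|theta)] under the power posterior proportional to L(theta)^beta * prior."""
    prec = beta * n / sigma**2 + 1 / tau0**2
    mean = (beta * y.sum() / sigma**2 + mu0 / tau0**2) / prec
    theta = rng.normal(mean, 1 / np.sqrt(prec), size=n_draws)
    loglik = norm.logpdf(y[None, :], theta[:, None], sigma).sum(axis=1)
    return loglik.mean()

betas = np.linspace(0.0, 1.0, 21)                # the "power coefficient" path
log_evidence_ti = trapezoid([expected_loglik(b) for b in betas], betas)

# Exact log evidence: under this conjugate model, y is jointly normal with
# mean mu0 and covariance sigma^2 * I + tau0^2 * J.
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_evidence_exact = multivariate_normal(np.full(n, mu0), cov).logpdf(y)

print(f"thermodynamic integration: {log_evidence_ti:.3f}")
print(f"exact log evidence       : {log_evidence_exact:.3f}")
```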
RNAi control of aflatoxins in peanut plants, a multifactorial system
USDA-ARS?s Scientific Manuscript database
RNA-interference (RNAi)-mediated control of aflatoxin contamination in peanut plants is a multifactorial and hyper-variable system. The use of RNAi biotechnology to silence single genes in plants has inherently high variability among transgenic events. Also the level of expression of small interfe...
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones favored by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
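As a simplified stand-in for the GMIS idea, the sketch below fits a Gaussian mixture to posterior samples and uses it as an importance density for the evidence of a toy conjugate model; this is plain importance sampling rather than the full bridge-sampling estimator, and exact conjugate draws replace the DREAM step.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
n = 15
y = rng.normal(0.4, 1.0, n)                          # y_i ~ N(theta, 1), prior theta ~ N(0, 1)

def log_target(theta):                               # un-normalised posterior: prior * likelihood
    return norm.logpdf(theta, 0, 1) + norm.logpdf(y[:, None], theta, 1).sum(axis=0)

# 1) posterior samples (here exact conjugate draws; DREAM/MCMC output in general)
post_prec, post_mean = 1.0 + n, y.sum() / (1.0 + n)
draws = rng.normal(post_mean, np.sqrt(1.0 / post_prec), 4_000)

# 2) fit a Gaussian mixture to the posterior sample and use it as the importance density
gm = GaussianMixture(n_components=2, random_state=0).fit(draws[:, None])
q = gm.sample(20_000)[0].ravel()
log_w = log_target(q) - gm.score_samples(q[:, None])          # log(prior * lik / q)
log_z_is = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

log_z_exact = multivariate_normal(np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(y)
print(f"mixture importance sampling: {log_z_is:.3f}, exact: {log_z_exact:.3f}")
```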
Bayesian experimental design for models with intractable likelihoods.
Drovandi, Christopher C; Pettitt, Anthony N
2013-12-01
In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
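A minimal sketch of the ABC rejection step on which such pre-computed-simulation designs rest is shown below, using a toy count model with a tractable likelihood so the behaviour can be checked; the tolerance, prior, and summary statistic are illustrative choices, and the design-utility layer of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "intractable" model: the observed summary is the total count of events from a
# Poisson(rate) process over 10 intervals.  ABC rejection recovers the posterior of
# `rate` without ever evaluating the likelihood.
observed = rng.poisson(3.0, 10)
s_obs = observed.sum()

n_sim = 200_000
prior_draws = rng.uniform(0.0, 10.0, n_sim)                   # Uniform(0, 10) prior on the rate
simulated = rng.poisson(prior_draws[:, None], (n_sim, 10))    # one pre-computed dataset per draw
s_sim = simulated.sum(axis=1)

eps = 2                                                       # ABC tolerance on |s_sim - s_obs|
accepted = prior_draws[np.abs(s_sim - s_obs) <= eps]
print(f"accepted {accepted.size} draws; "
      f"ABC posterior mean {accepted.mean():.2f}, sd {accepted.std():.2f}")
```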
Poisson point process modeling for polyphonic music transcription.
Peeling, Paul; Li, Chung-fai; Godsill, Simon
2007-04-01
Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
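The superposition property this abstract relies on (a sum of independent Poisson processes is again Poisson, with summed intensity) can be sketched as follows; the harmonic bump shapes, widths, and strengths are made-up values, not the authors' spectral model.

```python
import numpy as np

def note_intensity(freqs, f0, n_harmonics=8, width=10.0, strength=0.04):
    """Peak-detection intensity for a single note: Gaussian bumps at the harmonics of f0.
    strength is chosen so each harmonic contributes roughly one expected peak (illustrative)."""
    harmonics = f0 * np.arange(1, n_harmonics + 1)[:, None]
    return strength * np.exp(-0.5 * ((np.asarray(freqs, float) - harmonics) / width) ** 2).sum(axis=0)

def chord_loglik(peaks, f0s, grid=np.linspace(20.0, 4000.0, 4000)):
    """Poisson point-process log-likelihood of detected spectral peaks for a set of notes.
    Superposed notes give another Poisson process whose intensity is the sum of the
    per-note intensities, so no peak-to-harmonic assignment step is needed."""
    lam_at_peaks = sum(note_intensity(peaks, f0) for f0 in f0s)
    expected_total = np.trapz(sum(note_intensity(grid, f0) for f0 in f0s), grid)
    return np.log(lam_at_peaks).sum() - expected_total

# Toy example: detected peaks roughly at the harmonics of 220 Hz and 277 Hz (A3 + C#4)
peaks = [220.0, 440.0, 660.0, 880.0, 277.0, 554.0, 831.0]
for f0s in [(220.0, 277.0), (220.0, 330.0), (247.0, 277.0)]:
    print(f0s, round(chord_loglik(peaks, f0s), 1))
```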
Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data
ERIC Educational Resources Information Center
Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.
2003-01-01
The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…
The Elaboration Likelihood Model: Implications for the Practice of School Psychology.
ERIC Educational Resources Information Center
Petty, Richard E.; Heesacker, Martin; Hughes, Jan N.
1997-01-01
Reviews a contemporary theory of attitude change, the Elaboration Likelihood Model (ELM) of persuasion, and addresses its relevance to school psychology. Claims that a key postulate of ELM is that attitude change results from thoughtful (central route) or nonthoughtful (peripheral route) processes. Illustrations of ELM's utility for school…
Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.
ERIC Educational Resources Information Center
Heesacker, Martin
1986-01-01
Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…
Application of the Elaboration Likelihood Model of Attitude Change to Assertion Training.
ERIC Educational Resources Information Center
Ernst, John M.; Heesacker, Martin
1993-01-01
College students (n=113) participated in study comparing effects of elaboration likelihood model (ELM) based assertion workshop with those of typical assertion workshop. ELM-based workshop was significantly better at producing favorable attitude change, greater intention to act assertively, and more favorable evaluations of workshop content.…
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Modeling abundance effects in distance sampling
Royle, J. Andrew; Dawson, D.K.; Bates, S.
2004-01-01
Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
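The sketch below illustrates the integrated-likelihood idea for a line-transect-style toy with half-normal detection and a Poisson abundance regression; it is not the Ovenbird point-transect analysis, and all simulation settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf, gammaln

rng = np.random.default_rng(5)
W, sigma_true, b_true = 100.0, 40.0, (1.0, 0.5)     # truncation distance, detection scale, abundance coefs

# Simulate sites: local abundance is Poisson with a covariate effect; detection is half-normal in distance
n_sites = 200
z = rng.standard_normal(n_sites)                     # habitat covariate (e.g. understory cover)
N = rng.poisson(np.exp(b_true[0] + b_true[1] * z))
counts, distances = np.zeros(n_sites, int), []
for j in range(n_sites):
    d = rng.uniform(0, W, N[j])
    detected = rng.random(N[j]) < np.exp(-d**2 / (2 * sigma_true**2))
    counts[j] = detected.sum()
    distances.append(d[detected])
all_d = np.concatenate(distances)

def negloglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    lam = np.exp(b0 + b1 * z)
    pbar = np.sqrt(np.pi / 2) * sigma * erf(W / (sigma * np.sqrt(2))) / W   # mean detection probability
    # local abundance integrated out: observed counts are Poisson(lam * pbar)
    ll_counts = np.sum(counts * np.log(lam * pbar) - lam * pbar - gammaln(counts + 1))
    ll_dist = np.sum(-all_d**2 / (2 * sigma**2) - np.log(W * pbar))          # density of observed distances
    return -(ll_counts + ll_dist)

fit = minimize(negloglik, x0=[0.0, 0.0, np.log(30.0)], method="Nelder-Mead")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], float(np.exp(fit.x[2]))
print(f"b0 {b0_hat:.2f} (true {b_true[0]}), b1 {b1_hat:.2f} (true {b_true[1]}), sigma {sigma_hat:.1f} (true {sigma_true})")
```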
Modeling regional variation in riverine fish biodiversity in the Arkansas-White-Red River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schweizer, Peter E; Jager, Yetta
The patterns of biodiversity in freshwater systems are shaped by biogeography, environmental gradients, and human-induced factors. In this study, we developed empirical models to explain fish species richness in subbasins of the Arkansas-White-Red River basin as a function of discharge, elevation, climate, land cover, water quality, dams, and longitudinal position. We used information-theoretic criteria to compare generalized linear mixed models and identified well-supported models. Subbasin attributes that were retained as predictors included discharge, elevation, number of downstream dams, percent forest, percent shrubland, nitrate, total phosphorus, and sediment. The random component of our models, which assumed a negative binomial distribution, included spatial correlation within larger river basins and overdispersed residual variance. This study differs from previous biodiversity modeling efforts in several ways. First, obtaining likelihoods for negative binomial mixed models, and thereby avoiding reliance on quasi-likelihoods, has only recently become practical. We found the ranking of models based on these likelihood estimates to be more believable than that produced using quasi-likelihoods. Second, because we had access to a regional-scale watershed model for this river basin, we were able to include model-estimated water quality attributes as predictors. Thus, the resulting models have potential value as tools with which to evaluate the benefits of water quality improvements to fish.
Regression estimators for generic health-related quality of life and quality-adjusted life years.
Basu, Anirban; Manca, Andrea
2012-01-01
To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
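A minimal single-equation Beta regression fitted by maximum likelihood (ignoring spikes at 0 or 1 and the two-part structure discussed above) might look like the following; the data and link choices are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(6)
n = 500
x = rng.standard_normal(n)
mu_true = expit(0.3 + 0.8 * x)                        # logit link for the mean
phi_true = 20.0                                       # precision parameter
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

def negloglik(params):
    b0, b1, log_phi = params
    mu, phi = expit(b0 + b1 * x), np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi                   # Beta(a, b) with mean mu and precision phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))

fit = minimize(negloglik, x0=[0.0, 0.0, np.log(5.0)], method="BFGS")
b0_hat, b1_hat, phi_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"b0 {b0_hat:.2f}, b1 {b1_hat:.2f}, phi {phi_hat:.1f}")
```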
Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps
NASA Astrophysics Data System (ADS)
Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine
2015-08-01
We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in previous Planck CIB analyses. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first is based on a simple parametric model which describes the cross-frequency power using amplitudes, correlation coefficients, and a known multipole dependence. The second is based on physical models for galaxy clustering and the evolution of infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best fit of χ²ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.
ERIC Educational Resources Information Center
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike
2011-01-01
It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Kikkert, Lisette H. J.; de Groot, Maartje H.; van Campen, Jos P.; Beijnen, Jos H.; Hortobágyi, Tibor; Vuillerme, Nicolas; Lamoth, Claudine C. J.
2017-01-01
Fall prediction in geriatric patients remains challenging because the increased fall risk involves multiple, interrelated factors caused by natural aging and/or pathology. Therefore, we used a multi-factorial statistical approach to model categories of modifiable fall risk factors among geriatric patients to identify fallers with highest sensitivity and specificity with a focus on gait performance. Patients (n = 61, age = 79; 41% fallers) underwent extensive screening in three categories: (1) patient characteristics (e.g., handgrip strength, medication use, osteoporosis-related factors) (2) cognitive function (global cognition, memory, executive function), and (3) gait performance (speed-related and dynamic outcomes assessed by tri-axial trunk accelerometry). Falls were registered prospectively (mean follow-up 8.6 months) and one year retrospectively. Principal Component Analysis (PCA) on 11 gait variables was performed to determine underlying gait properties. Three fall-classification models were then built using Partial Least Squares–Discriminant Analysis (PLS-DA), with separate and combined analyses of the fall risk factors. PCA identified ‘pace’, ‘variability’, and ‘coordination’ as key properties of gait. The best PLS-DA model produced a fall classification accuracy of AUC = 0.93. The specificity of the model using patient characteristics was 60% but reached 80% when cognitive and gait outcomes were added. The inclusion of cognition and gait dynamics in fall classification models reduced misclassification. We therefore recommend assessing geriatric patients’ fall risk using a multi-factorial approach that incorporates patient characteristics, cognition, and gait dynamics. PMID:28575126
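A schematic version of the two-stage analysis described above (PCA on the gait variables, then PLS-DA on the combined predictors) could be set up as below, with entirely synthetic data standing in for the screening measures; it reproduces the workflow, not the study's data or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Synthetic stand-in: 61 patients, 11 gait variables plus a few patient-characteristic
# and cognitive measures, and a binary faller label.
n = 61
gait = rng.standard_normal((n, 11))
other = rng.standard_normal((n, 6))                    # characteristics + cognition
faller = (gait[:, 0] - 0.8 * gait[:, 3] + other[:, 0] + rng.normal(0, 1.5, n) > 0).astype(int)

# Step 1: PCA summarises the 11 gait variables into a few gait "properties"
gait_scores = PCA(n_components=3).fit_transform(gait)

# Step 2: PLS-DA = PLS regression on the binary label; cross-validated scores give the AUC
X = np.hstack([other, gait_scores])
plsda = PLSRegression(n_components=2)
scores = cross_val_predict(plsda, X, faller, cv=5).ravel()
print(f"cross-validated AUC: {roc_auc_score(faller, scores):.2f}")
```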
Sex similarities and differences in risk factors for recurrence of major depression.
van Loo, Hanna M; Aggen, Steven H; Gardner, Charles O; Kendler, Kenneth S
2017-11-27
Major depression (MD) occurs about twice as often in women as in men, but it is unclear whether sex differences subsist after disease onset. This study aims to elucidate potential sex differences in rates and risk factors for MD recurrence, in order to improve prediction of course of illness and understanding of its underlying mechanisms. We used prospective data from a general population sample (n = 653) that experienced a recent episode of MD. A diverse set of potential risk factors for recurrence of MD was analyzed using Cox models subject to elastic net regularization for males and females separately. Accuracy of the prediction models was tested in same-sex and opposite-sex test data. Additionally, interactions between sex and each of the risk factors were investigated to identify potential sex differences. Recurrence rates and the impact of most risk factors were similar for men and women. For both sexes, prediction models were highly multifactorial including risk factors such as comorbid anxiety, early traumas, and family history. Some subtle sex differences were detected: for men, prediction models included more risk factors concerning characteristics of the depressive episode and family history of MD and generalized anxiety, whereas for women, models included more risk factors concerning early and recent adverse life events and socioeconomic problems. No prominent sex differences in risk factors for recurrence of MD were found, potentially indicating similar disease maintaining mechanisms for both sexes. Course of MD is a multifactorial phenomenon for both males and females.
Liu, Fang; Eugenio, Evercita C
2018-04-01
Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
Bansal, Yogita; Silakari, Om
2014-11-01
Polyfunctional compounds comprise a novel class of therapeutic agents for treatment of multifactorial diseases. The present study reports a series of benzimidazole-non-steroidal anti-inflammatory drug (NSAID) conjugates (1-10) as novel polyfunctional compounds synthesized in the presence of orthophosphoric acid. The compounds were evaluated for anti-inflammatory (carrageenan-induced paw edema model), immunomodulatory (direct haemagglutination test and carbon clearance index models), antioxidant (in vitro and in vivo) and ulcerogenic effects. Each of the compounds retained the anti-inflammatory activity of the corresponding parent NSAID while exhibiting significantly reduced gastric ulcers. Additionally, the compounds were found to possess potent immunostimulatory and antioxidant activities. Compound 8 was the most potent (antibody titre value 358.4 ± 140.21, carbon clearance index 0.053 ± 0.002 and antioxidant EC50 value 0.03 ± 0.006). These compounds, exhibiting such multiple pharmacological activities, can be taken as leads for the development of potent drugs for the treatment of chronic multifactorial diseases involving inflammation, immune system modulation and oxidative stress, such as cancers. Lipinski's parameters suggested that the compounds bear drug-like properties.
Toward a multifactorial model of expertise: beyond born versus made.
Hambrick, David Z; Burgoyne, Alexander P; Macnamara, Brooke N; Ullén, Fredrik
2018-02-15
The debate over the origins of individual differences in expertise has raged for over a century in psychology. The "nature" view holds that expertise reflects "innate talent", that is, genetically determined abilities. The "nurture" view counters that, if talent even exists, its effects on ultimate performance are negligible. While no scientist takes seriously a strict nature-only view of expertise, the nurture view has gained tremendous popularity over the past several decades. This environmentalist view holds that individual differences in expertise reflect training history, with no important contribution to ultimate performance by innate ability ("talent"). Here, we argue that, despite its popularity, this view is inadequate to account for the evidence concerning the origins of expertise that has accumulated since the view was first proposed. More generally, we argue that the nature versus nurture debate in research on expertise is over, or certainly should be, as it has been in other areas of psychological research for decades. We describe a multifactorial model for research on the nature and nurture of expertise, which we believe will provide a progressive direction for future research on expertise. © 2018 New York Academy of Sciences.
Evangelista, Laura; Panunzio, Annalori; Polverosi, Roberta; Pomerri, Fabio; Rubello, Domenico
2014-03-01
The purpose of this study was to determine likelihood of malignancy for indeterminate lung nodules identified on CT comparing two standardized models with (18)F-FDG PET/CT. Fifty-nine cancer patients with indeterminate lung nodules (solid tumors; diameter, ≥5 mm) on CT had FDG PET/CT for lesion characterization. Mayo Clinic and Veterans Affairs Cooperative Study models of likelihood of malignancy were applied to solitary pulmonary nodules. High probability of malignancy was assigned a priori for multiple nodules. Low (<5%), intermediate (5-60%), and high (>60%) pretest malignancy probabilities were analyzed separately. Patients were reclassified with PET/CT. Histopathology or 2-year imaging follow-up established diagnosis. Outcome-based reclassification differences were defined as net reclassification improvement. A null hypothesis of asymptotic test was applied. Thirty-one patients had histology-proven malignancy. PET/CT was true-positive in 24 and true-negative in 25 cases. Negative predictive value was 78% and positive predictive value was 89%. On the basis of the Mayo Clinic model (n=31), 18 patients had low, 12 had intermediate, and one had high pretest likelihood; on the basis of the Veterans Affairs model (n=26), 5 patients had low, 20 had intermediate, and one had high pretest likelihood. Because of multiple lung nodules, 28 patients were classified as having high malignancy risk. PET/CT showed 32 negative and 27 positive scans. Net reclassification improvements respectively were 0.95 and 1.6 for Mayo Clinic and Veterans Affairs models (both p<0.0001). Fourteen of 31 (45.2%) and 12 of 26 (46.2%) patients with low and intermediate pretest likelihood, respectively, had positive findings on PET/CT for the Mayo Clinic and Veterans Affairs models, respectively. Of 15 patients with high pretest likelihood and negative findings on PET/CT, 13 (86.7%) did not have lung malignancy. PET/CT improves stratification of cancer patients with indeterminate pulmonary nodules. A substantial number of patients considered at low and intermediate pretest likelihood of malignancy with histology-proven lung malignancy showed abnormal PET/CT findings.
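Pretest-probability models of this kind are logistic functions of clinical and imaging predictors; the sketch below shows the general form with placeholder coefficients that are NOT the published Mayo Clinic or Veterans Affairs values and would need to be replaced with the published coefficients before any real use.

```python
import math

def pretest_probability(age, diameter_mm, smoker, prior_cancer, spiculation, upper_lobe,
                        coefs=None):
    """Pretest probability of malignancy for a solitary pulmonary nodule from a logistic model.
    The default coefficients are PLACEHOLDERS for illustration only; substitute the published
    Mayo Clinic (or Veterans Affairs) model coefficients for any real application."""
    if coefs is None:
        coefs = {"intercept": -6.5, "age": 0.04, "diameter": 0.13,
                 "smoker": 0.8, "prior_cancer": 1.3, "spiculation": 1.0, "upper_lobe": 0.8}
    x = (coefs["intercept"] + coefs["age"] * age + coefs["diameter"] * diameter_mm
         + coefs["smoker"] * smoker + coefs["prior_cancer"] * prior_cancer
         + coefs["spiculation"] * spiculation + coefs["upper_lobe"] * upper_lobe)
    return 1.0 / (1.0 + math.exp(-x))

# Example: 68-year-old smoker, 14 mm spiculated upper-lobe nodule, no prior cancer
p = pretest_probability(age=68, diameter_mm=14, smoker=1, prior_cancer=0, spiculation=1, upper_lobe=1)
print(f"pretest probability of malignancy: {p:.0%}")   # then binned into low/intermediate/high by cut-offs
```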
Selection of higher order regression models in the analysis of multi-factorial transcription data.
Prazeres da Costa, Olivia; Hoffman, Arthur; Rey, Johannes W; Mansmann, Ulrich; Buch, Thorsten; Tresch, Achim
2014-01-01
Many studies examine gene expression data that has been obtained under the influence of multiple factors, such as genetic background, environmental conditions, or exposure to diseases. The interplay of multiple factors may lead to effect modification and confounding. Higher order linear regression models can account for these effects. We present a new methodology for linear model selection and apply it to microarray data of bone marrow-derived macrophages. This experiment investigates the influence of three variable factors: the genetic background of the mice from which the macrophages were obtained, Yersinia enterocolitica infection (two strains, and a mock control), and treatment/non-treatment with interferon-γ. We set up four different linear regression models in a hierarchical order. We introduce the eruption plot as a new practical tool for model selection complementary to global testing. It visually compares the size and significance of effect estimates between two nested models. Using this methodology we were able to select the most appropriate model by keeping only relevant factors showing additional explanatory power. Application to experimental data allowed us to qualify the interaction of factors as either neutral (no interaction), alleviating (co-occurring effects are weaker than expected from the single effects), or aggravating (stronger than expected). We find a biologically meaningful gene cluster of putative C2TA target genes that appear to be co-regulated with MHC class II genes. We introduced the eruption plot as a tool for visual model comparison to identify relevant higher order interactions in the analysis of expression data obtained under the influence of multiple factors. We conclude that model selection in higher order linear regression models should generally be performed for the analysis of multi-factorial microarray data.
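The hierarchical model comparison described here can be mimicked with nested linear models and a likelihood-ratio test, as in the sketch below on synthetic expression data for a single gene; factor names and effect sizes are invented, and the eruption plot itself is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(8)

# Synthetic expression values for one gene under three crossed factors
# (genetic background, infection status, interferon-gamma treatment).
df = pd.DataFrame({
    "background": rng.choice(["B6", "BALB"], 120),
    "infection": rng.choice(["mock", "Ye_strain1", "Ye_strain2"], 120),
    "ifng": rng.choice(["untreated", "IFNg"], 120),
})
effect = (0.5 * (df.infection != "mock") + 0.8 * (df.ifng == "IFNg")
          + 0.6 * ((df.infection != "mock") & (df.ifng == "IFNg")))   # an "aggravating" interaction
df["expr"] = effect + rng.normal(0, 0.5, 120)

# Hierarchy of nested linear models: main effects only vs. all second-order interactions
m1 = smf.ols("expr ~ background + infection + ifng", df).fit()
m2 = smf.ols("expr ~ (background + infection + ifng) ** 2", df).fit()

lr = 2 * (m2.llf - m1.llf)
p = chi2.sf(lr, df=m2.df_model - m1.df_model)
msg = (f"LR = {lr:.1f}, p = {p:.3g}: interactions add explanatory power" if p < 0.05
       else f"LR = {lr:.1f}, p = {p:.3g}: the main-effects model suffices")
print(msg)
```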
Reducing chemical exposures at home: opportunities for action
Zota, Ami R; Singla, Veena; Adamkiewicz, Gary; Mitro, Susanna D; Dodson, Robin E
2017-01-01
Indoor environments can influence human environmental chemical exposures and, ultimately, public health. Furniture, electronics, personal care and cleaning products, floor coverings and other consumer products contain chemicals that can end up in the indoor air and settled dust. Consumer product chemicals such as phthalates, phenols, flame retardants and per- and polyfluorinated alkyl substances are widely detected in the US general population, including vulnerable populations, and are associated with adverse health effects such as reproductive and endocrine toxicity. We discuss the implications of our recent meta-analysis describing the patterns of chemical exposures and the ubiquity of multiple chemicals in indoor environments. To reduce the likelihood of exposures to these toxic chemicals, we then discuss approaches for exposure mitigation: targeting individual behaviour change, household maintenance and purchasing decisions, consumer advocacy and corporate responsibility in consumer markets, and regulatory action via state/federal policies. There is a need to further develop evidence-based strategies for chemical exposure reduction in each of these areas, given the multi-factorial nature of the problem. Further identifying those at greatest risk; understanding the individual, household and community factors that influence indoor chemical exposures; and developing options for mitigation may substantially improve individuals’ exposures and health. PMID:28756396
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither solves the problems for each other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using a current screening data from a community-based dementia study. PMID:20376286
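A direct way to sketch the proposed double penalty is to add Firth's Jeffreys-prior term and a ridge term to the Bernoulli log-likelihood and optimize numerically, as below on synthetic correlated items; this is an illustrative implementation, not the authors' estimator or code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(9)

# Small sample with highly correlated items, the setting that is prone to separation
n, p = 60, 6
base = rng.standard_normal(n)
X = np.column_stack([np.ones(n)] + [base + 0.3 * rng.standard_normal(n) for _ in range(p)])
y = (rng.random(n) < expit(1.5 * base)).astype(float)

def neg_penalized_loglik(beta, ridge=1.0):
    eta = X @ beta
    pr = expit(eta)
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))          # Bernoulli log-likelihood
    W = pr * (1.0 - pr)
    info = X.T @ (X * W[:, None])                              # Fisher information X'WX
    firth = 0.5 * np.linalg.slogdet(info)[1]                   # Firth / Jeffreys-prior penalty
    ridge_pen = 0.5 * ridge * np.sum(beta[1:] ** 2)            # ridge penalty on non-intercept terms
    return -(loglik + firth) + ridge_pen

fit = minimize(neg_penalized_loglik, x0=np.zeros(p + 1), method="BFGS")
print("double-penalized estimates:", np.round(fit.x, 2))
```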
Quasar microlensing models with constraints on the Quasar light curves
NASA Astrophysics Data System (ADS)
Tie, S. S.; Kochanek, C. S.
2018-01-01
Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
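The DRW is the Ornstein-Uhlenbeck Gaussian process, so the likelihood of a model light curve reduces to a multivariate normal density with exponential covariance; the sketch below evaluates that likelihood for a simulated, irregularly sampled light curve under a few candidate (sigma, tau) values, all of them illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(10)

def drw_loglik(t, mag, sigma, tau, mean, obs_err=0.02):
    """Log-likelihood of a light curve under a damped random walk (OU process):
    cov(t_i, t_j) = sigma^2 * exp(-|t_i - t_j| / tau), plus measurement noise."""
    cov = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau) + obs_err**2 * np.eye(t.size)
    return multivariate_normal(np.full(t.size, mean), cov).logpdf(mag)

# Simulate an irregularly sampled DRW light curve and score candidate source models
t = np.sort(rng.uniform(0, 1000, 120))                # observation epochs (days)
true_cov = 0.2**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / 150.0) + 1e-10 * np.eye(t.size)
mag = rng.multivariate_normal(np.full(t.size, 18.0), true_cov) + rng.normal(0, 0.02, t.size)

for sigma, tau in [(0.2, 150.0), (0.2, 10.0), (0.05, 150.0)]:
    print(f"sigma={sigma}, tau={tau}: logL = {drw_loglik(t, mag, sigma, tau, mean=18.0):.1f}")
```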
Chen, Feng; Chen, Suren; Ma, Xiaoxiang
2018-06-01
Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models developed at large temporal scales struggle to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving environmental information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic conditions, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature and the chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that there are a number of factors related to crash likelihood on I-25. Specifically, the weekend indicator, November indicator, low speed limit and long remaining service life of rutting indicator are found to increase crash likelihood, while the 5-am indicator and the number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
NASA Astrophysics Data System (ADS)
Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith
2018-01-01
This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be approached in a closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
New prior sampling methods for nested sampling - Development and testing
NASA Astrophysics Data System (ADS)
Stokes, Barrie; Tuyl, Frank; Hudson, Irene
2017-06-01
Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
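The "central problem" can be seen in a minimal nested-sampling loop that draws from the full prior until the likelihood constraint is met, as sketched below for a toy two-dimensional Gaussian likelihood with a uniform prior; the restricted-sampling methods proposed here replace exactly this brute-force step, and the final live-point correction is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(11)

def loglike(theta):                                   # toy 2-D Gaussian likelihood, sigma = 0.5
    return -0.5 * np.sum(theta**2, axis=-1) / 0.5**2 - np.log(2 * np.pi * 0.5**2)

def nested_sampling(n_live=300, n_iter=2500, lo=-3.0, hi=3.0):
    """Minimal nested sampling with the naive constrained-prior step: keep drawing from
    the full Uniform(lo, hi)^2 prior until the draw beats the current likelihood threshold."""
    live = rng.uniform(lo, hi, (n_live, 2))
    live_ll = loglike(live)
    log_z, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        ll_star = live_ll[worst]
        log_x = -i / n_live                           # expected log prior volume remaining
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        log_z = np.logaddexp(log_z, ll_star + log_w)  # accumulate the evidence
        log_x_prev = log_x
        while True:                                   # the "central problem": sample inside the contour
            cand = rng.uniform(lo, hi, 2)
            if loglike(cand) > ll_star:
                break
        live[worst], live_ll[worst] = cand, loglike(cand)
    return log_z

# The likelihood integrates to ~1 over the box, so the evidence is ~1/36 for this prior
print(f"nested-sampling log-evidence: {nested_sampling():.2f}  (analytic ≈ {np.log(1 / 36.0):.2f})")
```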
ERIC Educational Resources Information Center
Paek, Insu; Wilson, Mark
2011-01-01
This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
Development and Validation of a Multifactorial Treatment Outcome Measure for Eating Disorders.
ERIC Educational Resources Information Center
Anderson, Drew A.; Williamson, Donald A.; Duchmann, Erich G.; Gleaves, David H.; Barbin, Jane M.
1999-01-01
Developed a brief self-report inventory to evaluate treatment outcome for anorexia and bulimia nervosa, the Multifactorial Assessment of Eating Disorders, and evaluated the instrument in a series of studies involving 1,054 women. Results support a stable factor structure and satisfactory reliability and validity, and establish normative data. (SLD)
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.
ERIC Educational Resources Information Center
Heesacker, Martin
The importance of high levels of involvement in counseling has been related to theories of interpersonal influence. To examine differing effects of counselor credibility as a function of how personally involved counselors are, the Elaboration Likelihood Model (ELM) of attitude change was applied to counseling pretreatment. Students (N=256) were…
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model
ERIC Educational Resources Information Center
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.
2011-01-01
Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
Russell, James C; Proctor, Spencer D
2006-01-01
Cardiovascular disease, the leading cause of death in much of the modern world, is the common symptomatic end stage of a number of distinct diseases and, therefore, is multifactorial and polygenetic in character. The two major underlying causes are disorders of lipid metabolism and metabolic syndrome. The ability to develop preventative and ameliorative treatments will depend on animal models that mimic human disease processes. The focus of this review is to identify suitable animal models and insights into cardiovascular disease achieved to date using such models. The ideal animal model of cardiovascular disease will mimic the human subject metabolically and pathophysiologically, will be large enough to permit physiological and metabolic studies, and will develop end-stage disease comparable to those in humans. Given the complex multifactorial nature of cardiovascular disease, no one species will be suitable for all studies. Potential larger animal models are problematic due to cost, ethical considerations, or poor pathophysiological comparability to humans. Rabbits require high-cholesterol diets to develop cardiovascular disease, and there are no rabbit models of metabolic syndrome. Spontaneous mutations in rats provide several complementary models of obesity, hyperlipidemia, insulin resistance, and type 2 diabetes, one of which spontaneously develops cardiovascular disease and ischemic lesions. The mouse, like normal rats, is characteristically resistant to cardiovascular disease, although genetically altered strains respond to cholesterol feeding with atherosclerosis, but not with end-stage ischemic lesions. The most useful and valid species/strains for the study of cardiovascular disease appear to be small rodents, rats, and mice. This fragmented field would benefit from a consensus on well-characterized appropriate models for the study of different aspects of cardiovascular disease and a renewed emphasis on the biology of underlying diseases.
Sjösten, Noora M; Vahlberg, Tero J; Kivelä, Sirkka-Liisa
2008-05-01
The aim was to determine the effects of multifactorial fall prevention on depressive symptoms among aged Finns at increased risk of falling. This study is part of a multifactorial fall prevention trial with a randomised controlled design implemented in the town of Pori, western Finland. The study population consisted of ambulatory, 65-year-old or older Finns, with moderate or high cognitive and physical abilities who had fallen at least once during the previous 12 months. The participants (n=591) were randomised into a risk-based multifactorial fall prevention programme (intervention group, IG) or into a one-time counselling group (control group, CG). The 1-year intervention included individual geriatric assessment followed by treatment recommendations, individual guidance regarding fall prevention, physical exercise in small groups twice a month, psychosocial group activities and lectures once a month, home-exercises and home hazard assessment. The outcome, depressive symptoms, was measured by the 30-item Geriatric Depression Scale (GDS). The full GDS data with no missing items were available for 464 persons. A significant decrease in depressive symptoms during the 12-month intervention was found both in IG and in CG, but the difference in change was not significant (p=0.110). However, a significant difference in change between the groups was found among men and older subjects (>or=75) in favour of the IG. Multifactorial fall prevention had no effects on depressive symptoms among the community-dwelling aged. However, men and older participants benefited from the intervention.
An, P; Rice, T; Gagnon, J; Borecki, I B; Bergeron, J; Després, J P; Leon, A S; Skinner, J S; Wilmore, J H; Bouchard, C; Rao, D C
2000-03-01
Complex segregation analyses of apolipoproteins (apo) A-1 and B-100 were performed in a sample of 520 individuals from 99 white families who participated in the HERITAGE Family Study. In these sedentary families, plasma apo A-1 and B-100 concentrations were measured before and after a 20-week endurance exercise training program. Baseline apo A-1 and B-100 were adjusted for the effects of age (age-adjusted baseline apo A-1 and B-100) and for the effects of age and BMI (age-BMI-adjusted baseline apo A-1 and B-100). The change in response to training was computed as a simple Delta (posttraining minus baseline) and was adjusted for age and the baseline (age-baseline-adjusted apo A-1 and B-100 responses to training). In the present study, a major gene could not be inferred for baseline apo A-1. Rather, we found a major effect along with a multifactorial effect accounting for 8% to 9% and 51% to 56% of the variance, respectively. In addition, no clear evidence supported a major-gene effect for its response to training, whereas the transmission of a major effect from parents to offspring was ambiguous, ie, genetic in nature or familial environmental in origin. The major effect accounted for 15% of the variance, with an additional 21% and 58% of the variance being accounted for by a multifactorial effect in parents and offspring, respectively. It is interesting to have obtained evidence of a putative recessive major locus for baseline apo B-100, which accounted for 50% to 56% of the variance, with an additional 25% to 29% of the variance due to a multifactorial effect. In contrast, no major effect for its response to training was identified, although a multifactorial effect was found that accounted for 27% of the variance. The novel findings arising from the present study are summarized as follows. Baseline apo A-1 and its response to training were influenced by a major effect and a multifactorial effect. Baseline apo B-100 was influenced by a putative major recessive gene with a multifactorial component, but its response to training was influenced solely by a multifactorial component in these sedentary families.
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
Segregation analysis reveals evidence of a major gene for Alzheimer disease.
Farrer, L A; Myers, R H; Connor, L; Cupples, L A; Growdon, J H
1991-01-01
In an attempt to resolve the relative influences of major genes, multifactorial heritability, and cohort effects on the susceptibility to Alzheimer disease (AD), complex segregation analysis was performed on 232 nuclear families. All families were consecutively ascertained through a single proband who was referred for diagnostic evaluation of a memory disorder. The results suggest that susceptibility to AD is determined, in part, by a major autosomal dominant allele with an additional multifactorial component. Single-locus, polygenic, sporadic, and no-transmission models, as well as recessive inheritance of the major effect, were significantly rejected. Excess transmission from the heterozygote was marginally significant and probably reflects the presence of phenocopies or perhaps the existence of two or more major loci for AD. The frequency of the AD susceptibility allele was estimated to be .038, but the major locus accounts for only 24% of the transmission variance, indicating a substantial role for other genetic and nongenetic mechanisms in the causation of AD. PMID:2035523
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
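The information-criterion side of such a model-selection step amounts to computing AIC and BIC from the maximized log-likelihoods and parameter counts and, if desired, Akaike weights for assessing selection uncertainty and model averaging; the sketch below uses invented log-likelihood values purely to show the arithmetic.

```python
import math

# Candidate substitution models with their maximised log-likelihoods and extra
# free-parameter counts (the lnL values are made up for illustration).
n_sites = 1200                                        # alignment length used for BIC
candidates = {
    "JC69":  {"lnL": -5230.4, "k": 0},
    "HKY85": {"lnL": -5105.8, "k": 4},
    "GTR":   {"lnL": -5098.2, "k": 8},
    "GTR+G": {"lnL": -5061.7, "k": 9},
}

for name, m in candidates.items():
    m["AIC"] = -2 * m["lnL"] + 2 * m["k"]
    m["BIC"] = -2 * m["lnL"] + m["k"] * math.log(n_sites)

# Akaike weights quantify model-selection uncertainty and support model averaging
min_aic = min(m["AIC"] for m in candidates.values())
rel = {s: math.exp(-0.5 * (m["AIC"] - min_aic)) for s, m in candidates.items()}
total = sum(rel.values())
for s, m in candidates.items():
    print(f"{s:7s} AIC={m['AIC']:9.1f}  BIC={m['BIC']:9.1f}  weight={rel[s] / total:.3f}")
```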
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
ERIC Educational Resources Information Center
Campagna, Grace
2012-01-01
The purpose of the study was to develop a multifactorial model tracing paths from housing affordances to academic outcomes in higher education. The study sought to connect two areas of psychological research: on one side, the adverse effects of environmental stressors and inadequate self-regulation upon life course prospects and, on the other, the…
ERIC Educational Resources Information Center
Hill, Benjamin D.; Musso, Mandi; Jones, Glenn N.; Pella, Russell D.; Gouvier, Wm. Drew
2013-01-01
A psychometric evaluation on the measurement of self-report anxiety and depression using the Beck Depression Inventory (BDI-II), State Trait Anxiety Inventory, Form-Y (STAI-Y), and the Personality Assessment Inventory (PAI) was performed using a sample of 534 generally young adults seeking psychoeducational evaluation at a university-based clinic.…
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
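As a rough sketch of the kind of logistic-regression likelihood model described above (not the published USGS equations), the following fits a logistic regression to synthetic basin/storm records and returns a debris-flow probability for a new storm; the predictor names, coefficients, and values are invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic basin/storm records: burned-area fraction, mean basin gradient (deg),
    # and peak 15-min rainfall intensity (mm/h); y = 1 if a debris flow occurred.
    rng = np.random.default_rng(0)
    X = rng.uniform([0.0, 5.0, 5.0], [1.0, 40.0, 60.0], size=(500, 3))
    logit = -6.0 + 3.0 * X[:, 0] + 0.05 * X[:, 1] + 0.08 * X[:, 2]  # assumed signal
    y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)

    # Estimated debris-flow likelihood for a severely burned, steep basin
    # in a 40 mm/h storm (illustrative values only).
    print(model.predict_proba([[0.8, 30.0, 40.0]])[0, 1])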
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Explaining the effect of event valence on unrealistic optimism.
Gold, Ron S; Brown, Mark G
2009-05-01
People typically exhibit 'unrealistic optimism' (UO): they believe they have a lower chance of experiencing negative events and a higher chance of experiencing positive events than does the average person. UO has been found to be greater for negative than positive events. This 'valence effect' has been explained in terms of motivational processes. An alternative explanation is provided by the 'numerosity model', which views the valence effect simply as a by-product of a tendency for likelihood estimates pertaining to the average member of a group to increase with the size of the group. Predictions made by the numerosity model were tested in two studies. In each, UO for a single event was assessed. In Study 1 (n = 115 students), valence was manipulated by framing the event either negatively or positively, and participants estimated their own likelihood and that of the average student at their university. In Study 2 (n = 139 students), valence was again manipulated and participants again estimated their own likelihood; additionally, group size was manipulated by having participants estimate the likelihood of the average student in a small, medium-sized, or large group. In each study, the valence effect was found, but was due to an effect on estimates of own likelihood, not the average person's likelihood. In Study 2, valence did not interact with group size. The findings contradict the numerosity model, but are in accord with the motivational explanation. Implications for health education are discussed.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable for evaluating dimensionality in applications of the MRCMLM under any of the simulated conditions.
ERIC Educational Resources Information Center
Eaves, Michael
This paper provides a literature review of the elaboration likelihood model (ELM) as applied in persuasion. Specifically, the paper addresses distraction with regard to effects on persuasion. In addition, the application of proxemic violations as peripheral cues in message processing is discussed. Finally, the paper proposes to shed new light on…
ERIC Educational Resources Information Center
Andrews, Lester W.; Gutkin, Terry B.
1994-01-01
Investigates variables drawn from the Elaboration Likelihood Model (ELM) that might be manipulated to enhance the persuasiveness of a psychoeducational report. Results showed teachers in training were more persuaded by reports with high message quality. Findings are discussed in terms of the ELM and professional school psychology practice. (RJM)
ERIC Educational Resources Information Center
Heppner, Mary J.; And Others
1995-01-01
Intervention sought to improve first-year college students' attitudes about rape. Used the Elaboration Likelihood Model to examine men's and women's attitude change process. Found numerous sex differences in ways men and women experienced and changed during and after intervention. Women's attitude showed more lasting change while men's was more…
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
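To make the dynamic linear model concrete, here is a minimal Kalman filter for a scalar local-level model that accumulates the prediction-error log-likelihood used by recursive likelihood methods; the noise variances and data are placeholder assumptions, and the adaptive tuning developed in the report is not shown.

    import numpy as np

    def kalman_filter(y, q=0.1, r=1.0, m0=0.0, p0=10.0):
        """Scalar local-level model: x_t = x_{t-1} + w_t, y_t = x_t + v_t.
        q and r are the (assumed) process and observation noise variances."""
        m, p = m0, p0
        filtered, loglik = [], 0.0
        for obs in y:
            # Predict step
            m_pred, p_pred = m, p + q
            # Update step
            s = p_pred + r                      # innovation variance
            k = p_pred / s                      # Kalman gain
            innov = obs - m_pred
            m = m_pred + k * innov
            p = (1.0 - k) * p_pred
            # Accumulate the Gaussian prediction-error log-likelihood
            loglik += -0.5 * (np.log(2 * np.pi * s) + innov**2 / s)
            filtered.append(m)
        return np.array(filtered), loglik

    rng = np.random.default_rng(8)
    y = np.cumsum(rng.normal(0, 0.3, 100)) + rng.normal(0, 1.0, 100)
    states, ll = kalman_filter(y)
    print(ll)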
ERIC Educational Resources Information Center
Han, Kyung T.; Guo, Fanmin
2014-01-01
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
ERIC Educational Resources Information Center
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the geometric mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
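The heating-coefficient idea can be illustrated with a conjugate toy model in which the power posterior is available in closed form, so the thermodynamic (path-sampling) integral over the heating coefficient can be approximated by plain Monte Carlo and checked against the exact marginal likelihood. The model, prior, and temperature ladder below are illustrative assumptions, not the groundwater application.

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    rng = np.random.default_rng(1)
    n, tau2 = 20, 1.0                      # sample size and prior variance (assumed)
    y = rng.normal(0.5, 1.0, n)            # data from a y_i ~ N(theta, 1) model

    def loglik(theta):
        return norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

    # The power posterior p_b(theta) ~ N(theta; 0, tau2) * L(theta)^b is again normal,
    # so it can be sampled directly at each heating coefficient b.
    betas = np.linspace(0.0, 1.0, 21)      # temperature ladder (assumed)
    expected_ll = []
    for b in betas:
        prec = 1.0 / tau2 + b * n
        mean = b * y.sum() / prec
        draws = rng.normal(mean, np.sqrt(1.0 / prec), 5000)
        expected_ll.append(loglik(draws).mean())

    # Thermodynamic integration: log m(y) = integral_0^1 E_b[log L(theta)] db
    e = np.array(expected_ll)
    log_ml_ti = np.sum(np.diff(betas) * (e[:-1] + e[1:]) / 2.0)   # trapezoid rule

    # Exact marginal likelihood for this conjugate model, for comparison.
    log_ml_exact = multivariate_normal.logpdf(
        y, mean=np.zeros(n), cov=np.eye(n) + tau2 * np.ones((n, n)))
    print(log_ml_ti, log_ml_exact)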
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
2015-08-01
Park, Kyong H.; Lagan, Steven J. Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model; ECBC-TN-068; Research and Technology Directorate, August 2015; approved for public release. References cited include McCullagh, P.; Nelder, J.A. Generalized Linear Models, 2nd ed.; Chapman and Hall: London, 1989; and Johnston, J. Econometric Methods, 3rd ed.; McGraw-Hill.
Modeling Goal-Directed User Exploration in Human-Computer Interaction
2011-02-01
In addition to information scent, other factors including the layout position and grouping of options in the user interface also affect user exploration and the likelihood of success. This dissertation contributes a new model of goal-directed user exploration intended to better inform UI design.
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, in which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, which is confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
A unifying framework for marginalized random intercept models of correlated binary outcomes
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.
2013-01-01
We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
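A small numerical illustration of the likelihood-ratio decision axis, assuming an unequal-variance Gaussian signal detection model with arbitrary parameter values (not fitted to any of the cited data sets):

    import numpy as np
    from scipy.stats import norm

    # Unequal-variance signal detection model (illustrative parameters):
    # familiarity ~ N(0, 1) for new items and N(d, sigma^2) for old items.
    d, sigma = 1.0, 1.25
    rng = np.random.default_rng(2)
    new_items = rng.normal(0.0, 1.0, 10000)
    old_items = rng.normal(d, sigma, 10000)

    def log_lr(x):
        """Log likelihood ratio of 'old' versus 'new' at familiarity value x."""
        return norm.logpdf(x, d, sigma) - norm.logpdf(x, 0.0, 1.0)

    # Decide 'old' whenever the likelihood ratio exceeds 1 (log LR > 0).
    hits = np.mean(log_lr(old_items) > 0)
    false_alarms = np.mean(log_lr(new_items) > 0)
    print(hits, false_alarms)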
Ditmyer, Marcia M; Dounis, Georgia; Howard, Katherine M; Mobley, Connie; Cappelli, David
2011-05-20
The objective of this study was to measure the validity and reliability of a multifactorial Risk Factor Model developed for use in predicting future caries risk in Nevada adolescents in a public health setting. This study examined retrospective data from an oral health surveillance initiative that screened over 51,000 students 13-18 years of age, attending public/private schools in Nevada across six academic years (2002/2003-2007/2008). The Risk Factor Model included ten demographic variables: exposure to fluoridation in the municipal water supply, environmental smoke exposure, race, age, locale (metropolitan vs. rural), tobacco use, Body Mass Index, insurance status, sex, and sealant application. Multiple regression was used in a previous study to establish which significantly contributed to caries risk. Follow-up logistic regression ascertained the weight of contribution and odds ratios of the ten variables. Researchers in this study computed sensitivity, specificity, positive predictive value (PVP), negative predictive value (PVN), and prevalence across all six years of screening to assess the validity of the Risk Factor Model. Subjects' overall mean caries prevalence across all six years was 66%. Average sensitivity across all six years was 79%; average specificity was 81%; average PVP was 89% and average PVN was 67%. Overall, the Risk Factor Model provided a relatively constant, valid measure of caries that could be used in conjunction with a comprehensive risk assessment in population-based screenings by school nurses/nurse practitioners, health educators, and physicians to guide them in assessing potential future caries risk for use in prevention and referral practices.
ERIC Educational Resources Information Center
Laforest, Sophie; Lorthios-Guilledroit, Agathe; Nour, Kareen; Parisien, Manon; Fournier, Michel; Ellemberg, Dave; Guay, Danielle; Desgagnés-Cyr, Charles-Émile; Bier, Nathalie
2017-01-01
This study examined the effects on attitudes and lifestyle behavior of "Jog your Mind," a multi-factorial community-based program promoting cognitive vitality among seniors with no known cognitive impairment. A quasi-experimental study was conducted. Twenty-three community organizations were assigned either to the experimental group…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
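For concreteness, here is a minimal sketch of the LNP log-likelihood referred to above, with a single linear filter, an exponential nonlinearity, and Poisson spike counts per time bin; the stimulus, filter, and bias are synthetic stand-ins rather than data from the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    T, D = 5000, 10                        # time bins and stimulus dimensions (assumed)
    stim = rng.normal(size=(T, D))
    true_filter = rng.normal(size=D)
    rate = np.exp(stim @ true_filter - 2.0)   # linear-nonlinear stage
    spikes = rng.poisson(rate)                # Poisson spiking

    def lnp_loglik(w, bias=-2.0):
        """Poisson log-likelihood of the spike train under filter w."""
        lam = np.exp(stim @ w + bias)
        return np.sum(spikes * np.log(lam) - lam)   # constant log(k!) term dropped

    print(lnp_loglik(true_filter), lnp_loglik(np.zeros(D)))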
Ritchie, Marylyn D.; Hahn, Lance W.; Roodi, Nady; Bailey, L. Renee; Dupont, William D.; Parl, Fritz F.; Moore, Jason H.
2001-01-01
One of the greatest challenges facing human geneticists is the identification and characterization of susceptibility genes for common complex multifactorial human diseases. This challenge is partly due to the limitations of parametric-statistical methods for detection of gene effects that are dependent solely or partially on interactions with other genes and with environmental exposures. We introduce multifactor-dimensionality reduction (MDR) as a method for reducing the dimensionality of multilocus information, to improve the identification of polymorphism combinations associated with disease risk. The MDR method is nonparametric (i.e., no hypothesis about the value of a statistical parameter is made), is model-free (i.e., it assumes no particular inheritance model), and is directly applicable to case-control and discordant-sib-pair studies. Using simulated case-control data, we demonstrate that MDR has reasonable power to identify interactions among two or more loci in relatively small samples. When it was applied to a sporadic breast cancer case-control data set, in the absence of any statistically significant independent main effects, MDR identified a statistically significant high-order interaction among four polymorphisms from three different estrogen-metabolism genes. To our knowledge, this is the first report of a four-locus interaction associated with a common complex multifactorial disease. PMID:11404819
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
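A toy illustration of the Taylor-expansion idea (not the authors' implementation, and without the parameter transforms they study): approximate a Poisson log-likelihood by its second-order expansion around the maximum likelihood estimate and compare exact and approximate values at proposed parameter values.

    import numpy as np

    counts = np.array([3, 5, 4, 6, 2, 7, 5, 4])       # toy count data

    def loglik(lam):
        return np.sum(counts * np.log(lam) - lam)     # constants dropped

    lam_hat = counts.mean()                           # MLE for a Poisson mean
    # Second derivative of the log-likelihood at the MLE (negative observed information).
    hess = -np.sum(counts) / lam_hat**2

    def loglik_taylor(lam):
        """Quadratic (second-order Taylor) approximation around the MLE."""
        return loglik(lam_hat) + 0.5 * hess * (lam - lam_hat) ** 2

    for lam in (lam_hat, 3.5, 6.0):
        print(lam, loglik(lam), loglik_taylor(lam))

The approximation is accurate near the maximum and degrades for proposals far from it, which is the behaviour the abstract describes.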
Robaey, P
1987-09-01
A review of studies concerning age-related changes in cognitive event-related potentials is presented. Graded changes (with little or no difference in waveform morphology but shifts in component latency or amplitude) point to continuous developmental models, whereas morphological waveform differences are assumed to reflect fundamental differences in modes of cognitive processing. The authors also present an experimental paradigm indicating that a multifactorial model of amplitude variations can reflect the transition from one cognitive stage to the next, in accordance with Piaget's theory.
ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.
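As a sketch of the objective whose solution path is being traced (data and penalty settings are invented, and the path algorithm itself is not implemented here), the following evaluates Cox's negative log partial likelihood plus an elastic net penalty.

    import numpy as np

    def cox_neg_log_partial_lik(beta, X, time, event):
        """Negative log partial likelihood (no ties, Breslow-style risk sets)."""
        order = np.argsort(time)                   # process subjects by event time
        X, event = X[order], event[order]
        eta = X @ beta
        # For each subject, the risk set is everyone with an equal or later time.
        log_risk = np.logaddexp.accumulate(eta[::-1])[::-1]
        return -np.sum(event * (eta - log_risk))

    def elastic_net_objective(beta, X, time, event, lam=0.1, alpha=0.5):
        """Penalized objective: partial likelihood plus elastic net penalty."""
        penalty = lam * (alpha * np.abs(beta).sum() + 0.5 * (1 - alpha) * (beta**2).sum())
        return cox_neg_log_partial_lik(beta, X, time, event) + penalty

    # Toy data (entirely synthetic).
    rng = np.random.default_rng(4)
    n, p = 100, 5
    X = rng.normal(size=(n, p))
    true_beta = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
    time = rng.exponential(np.exp(-X @ true_beta))
    event = rng.random(n) < 0.8                    # roughly 20% censoring
    print(elastic_net_objective(true_beta, X, time, event))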
Evaluation of risk from acts of terrorism :the adversary/defender model using belief and fuzzy sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.
Pseudomonas aeruginosa dose response and bathing water infection.
Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A
2014-03-01
Pseudomonas aeruginosa is the opportunistic pathogen mostly implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
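For reference, the exponential single-hit form mentioned above has a one-line implementation; the parameter value below is purely illustrative and not taken from the review.

    import numpy as np

    def single_hit_infection_prob(dose, r=1e-4):
        """Exponential ('single-hit') dose-response: P(infection) = 1 - exp(-r * dose).
        r is an illustrative per-organism parameter, not a fitted value."""
        return 1.0 - np.exp(-r * np.asarray(dose, dtype=float))

    doses = np.array([1e2, 1e3, 1e4, 1e5])   # organisms per exposure (toy values)
    print(single_hit_infection_prob(doses))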
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and they make the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, in which the likelihood surface is, in general, probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method inspired by artificial intelligence, called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
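A bare-bones PSO sketch in the spirit described above: particles move under inertia plus attraction toward their personal best and the swarm's best likelihood value. The objective here is a toy Gaussian negative log-likelihood rather than a CMB likelihood, and the swarm constants are conventional placeholder choices.

    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.normal(3.0, 2.0, 500)                 # toy data set

    def neg_log_like(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

    def pso(obj, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
        dim = len(bounds)
        lo, hi = np.array(bounds, dtype=float).T
        pos = rng.uniform(lo, hi, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([obj(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([obj(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest

    # Expect roughly (3.0, log 2.0) for the toy likelihood above.
    print(pso(neg_log_like, bounds=[(-10, 10), (-3, 3)]))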
Negotiating Multicollinearity with Spike-and-Slab Priors.
Ročková, Veronika; George, Edward I
2014-08-01
In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.
Dixon, Helen G; Warne, Charles D; Scully, Maree L; Wakefield, Melanie A; Dobbinson, Suzanne J
2011-04-01
Content analysis data on the tans of 4,422 female Caucasian models sampled from spring and summer magazine issues were combined with readership data to generate indices of potential exposure to social modeling of tanning via popular women's magazines over a 15-year period (1987 to 2002). Associations between these indices and cross-sectional telephone survey data from the same period on 5,675 female teenagers' and adults' tanning attitudes, beliefs, and behavior were examined using logistic regression models. Among young women, greater exposure to tanning in young women's magazines was associated with increased likelihood of endorsing pro-tan attitudes and beliefs. Among women of all ages, greater exposure to tanned models via the most popular women's magazines was associated with increased likelihood of attempting to get a tan but lower likelihood of endorsing pro-tan attitudes. Popular women's magazines may promote and reflect real women's tanning beliefs and behavior.
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
1990-11-01
(Q + aa')^(-1) = Q^(-1) - Q^(-1)aa'Q^(-1) / (1 + a'Q^(-1)a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... The report covers the first-order moving average model and some approaches to the iterative evaluation of the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and...
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
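A sketch of a likelihood ratio test for Poisson (photon-counting) data, comparing a constant-background model against background plus a decaying source; the data, source shape, and the use of a chi-squared reference distribution for the ratio are illustrative assumptions, not the procedures derived in the paper.

    import numpy as np
    from scipy.optimize import minimize, minimize_scalar
    from scipy.stats import chi2

    rng = np.random.default_rng(6)
    bins = 50
    decay = np.exp(-np.arange(bins) / 10.0)              # assumed source shape
    counts = rng.poisson(5.0 + 3.0 * decay)              # synthetic counts per bin

    def poisson_nll(rate):
        # Poisson negative log-likelihood; the log(k!) term cancels in ratios.
        return np.sum(rate - counts * np.log(rate))

    # Null model: constant rate only.
    nll0 = minimize_scalar(lambda b: poisson_nll(np.full(bins, b)),
                           bounds=(1e-3, 50.0), method="bounded").fun

    # Alternative model: constant rate plus a source of unknown amplitude.
    fit1 = minimize(lambda p: poisson_nll(p[0] + p[1] * decay),
                    x0=[5.0, 1.0], bounds=[(1e-3, 50.0), (0.0, 50.0)])

    lr_stat = 2.0 * (nll0 - fit1.fun)          # likelihood ratio statistic
    print(lr_stat, chi2.sf(lr_stat, df=1))     # approximate tail probability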
Navas, Francisco Javier; Jordana, Jordi; León, José Manuel; Arando, Ander; Pizarro, Gabriela; McLean, Amy Katherine; Delgado, Juan Vicente
2017-08-01
New productive niches can offer new commercial perspectives linked to donkeys' products and human therapeutic or leisure applications. However, no assessment of selection criteria has been carried out yet. First, we assessed the animal-inherent features and environmental factors that may potentially influence several cognitive processes in donkeys. Then, we aimed to describe a practical methodology to quantify such cognitive processes, seeking their inclusion in breeding and conservation programmes, through a multifactorial linear model. Sixteen cognitive process-related traits were scored on a problem-solving test in a sample of 300 Andalusian donkeys for three consecutive years from 2013 to 2015. The linear model assessed the influence and interactions of four environmental factors, sex as an animal-inherent factor, age as a covariate, and the interactions between these factors. Analyses of variance were performed with the GLM procedure of SPSS Statistics for Windows, Version 24.0, to assess the relative importance of each factor. All traits were significantly (P<0.05) affected by all factors in the model except for sex, which was not significant for some of the cognitive processes, and stimulus, which was significant only for the coping-style-related ones. The interaction between all factors within the model was not significant at the 0.05 level for almost all cognitive processes. The development of complex multifactorial models to study cognitive processes may counteract the inherent variability in behavior genetics and support the estimation and prediction of related breeding parameters, which are key for the implementation of successful conservation programmes in apparently functionally misplaced endangered breeds. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the Existence and Uniqueness of JML Estimates for the Partial Credit Model
ERIC Educational Resources Information Center
Bertoli-Barsotti, Lucio
2005-01-01
A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…
ERIC Educational Resources Information Center
Lee, Woong-Kyu
2012-01-01
The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…
Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen
2007-01-01
Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, available occurrence data for landscape-scale modeling is often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...
ERIC Educational Resources Information Center
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results are given in two sections of this paper. A working example is also…
Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.
Kavussanu, Maria; Ring, Christopher
2017-08-01
In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes, who feel that being a moral person is central to their self-concept, are less likely to use banned substances due to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. The model is a stochastic state-space dynamical model, and the unknowns are the state variables and the system parameters. The data consist of commodity spot prices; time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and its marginal likelihood function (with the state variables eliminated) are addressed.
A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments
Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J
2014-01-01
Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, recently, Huggins et al. (2010) present a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, a modeling technique that could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community-dwelling elderly. Methods Using a retrospective dataset, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients' prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous population at risk of falling. This study demonstrated that LCA offers researchers a valuable tool for modeling medical data. PMID:23705639
On the occurrence of false positives in tests of migration under an isolation with migration model
Hey, Jody; Chung, Yujin; Sethuraman, Arun
2015-01-01
The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found by a model with a low migration rate and recent splitting time and a model with a high migration rate and a deep splitting time. PMID:26456794
2014-01-01
Background In line with a rapidly ageing global population, the rise in the frequency of falls will lead to increased healthcare and social care costs. This study will be one of the few randomized controlled trials evaluating a multifaceted falls intervention in a low-middle income, culturally-diverse older Asian community. The primary objective of our paper is to evaluate whether individually tailored multifactorial interventions will successfully reduce the number of falls among older adults. Methods Three hundred community-dwelling older Malaysian adults with a history of (i) two or more falls, or (ii) one injurious fall in the past 12 months will be recruited. Baseline assessment will include cardiovascular, frailty, fracture risk, psychological factors, gait and balance, activities of daily living and visual assessments. Fallers will be randomized into 2 groups: to receive tailored multifactorial interventions (intervention group); or given lifestyle advice with continued conventional care (control group). Multifactorial interventions will target 6 specific risk factors. All participants will be re-assessed after 12 months. The primary outcome measure will be fall recurrence, measured with monthly falls diaries. Secondary outcomes include falls risk factors; and psychological measures including fear of falling, and quality of life. Discussion Previous studies evaluating multifactorial interventions in falls have reported variable outcomes. Given likely cultural, personal, lifestyle and health service differences in Asian countries, it is vital that individually-tailored multifaceted interventions are evaluated in an Asian population to determine applicability of these interventions in our setting. If successful, these approaches have the potential for widespread application in geriatric healthcare services, will reduce the projected escalation of falls and fall-related injuries, and improve the quality of life of our older community. Trial registration ISRCTN11674947 PMID:24951180
Withers, Giselle F; Wertheim, Eleanor H
2004-01-01
This study applied principles from the Elaboration Likelihood Model of Persuasion to the prevention of disordered eating. Early adolescent girls watched either a preventive videotape only (n=114) or the video plus a post-video activity (verbal discussion, written exercises, or control discussion) (n=187), or had no intervention (n=104). Significantly more body image and knowledge improvements occurred at post-video and follow-up in the intervention groups compared to no intervention. There were no outcome differences among intervention groups, or between girls with high or low elaboration likelihood. Further research is needed on integrating the videotape into a broader prevention package.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
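The core computational step in empirical likelihood is profiling out the weights subject to the estimating-equation constraints, which reduces to a low-dimensional dual optimization over a Lagrange multiplier. The sketch below shows that step in its simplest form (a mean constraint with invented data); it is a generic illustration, not the authors' procedure for combining the two working models.

```python
import numpy as np
from scipy.optimize import minimize

def el_logratio(g):
    """Empirical log-likelihood ratio for the constraint E[g] = 0.
    g: (n, q) array of estimating-function values at a candidate parameter.
    Weights have the dual form w_i = 1 / (n * (1 + lam @ g_i)), with lam
    chosen to maximize sum(log(1 + lam @ g_i))."""
    n, q = g.shape

    def neg_dual(lam):
        arg = 1.0 + g @ lam
        if np.any(arg <= 1e-10):            # keep implied weights positive
            return np.inf
        return -np.sum(np.log(arg))

    lam = minimize(neg_dual, np.zeros(q), method="Nelder-Mead").x
    return -np.sum(np.log(1.0 + g @ lam))   # log EL ratio (<= 0)

# Example: empirical likelihood for a univariate mean mu on invented data.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)
for mu in (1.8, 2.0, 2.2):
    print(mu, el_logratio((x - mu)[:, None]))
```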
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
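ABC as used above replaces likelihood evaluation with forward simulation. The toy sketch below shows the accept-reject flavor of the algorithm for a single parameter, with a stand-in simulator, summary statistics, and tolerance; it is not the peak-count model or the sampler of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulator(theta, n=200):
    """Stand-in forward model: draws pseudo-observables given a parameter."""
    return rng.normal(loc=theta, scale=1.0, size=n)

observed = simulator(0.8)                     # pretend these are the data
summary = lambda x: np.array([x.mean(), x.std()])
distance = lambda a, b: np.linalg.norm(summary(a) - summary(b))

# Rejection ABC: draw from the prior, keep draws whose simulated summaries
# land within a tolerance of the observed summaries.
tolerance, accepted = 0.1, []
while len(accepted) < 500:
    theta = rng.uniform(-2.0, 2.0)            # prior draw
    if distance(simulator(theta), observed) < tolerance:
        accepted.append(theta)

posterior = np.array(accepted)
print(posterior.mean(), posterior.std())      # approximate posterior summaries
```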
NASA Astrophysics Data System (ADS)
Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò
2018-05-01
Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.
Statistical methods for the beta-binomial model in teratology.
Yamamoto, E; Yanagimoto, T
1994-01-01
The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood introduces a likelihood-based inference. This reduces the biases of the estimators and also improves the accuracy of the empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716
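For reference, SciPy's betabinom distribution makes a straightforward (joint, not separated) maximum-likelihood fit of the beta-binomial parameters easy to sketch; the litter sizes and affected counts below are invented for illustration.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

# Invented teratology-style data: litter sizes n_i and affected counts y_i.
n = np.array([10, 12, 9, 11, 8, 10, 13, 9, 10, 12])
y = np.array([ 2,  5, 1,  4, 0,  3,  6, 1,  2,  5])

def neg_loglik(params):
    log_a, log_b = params                    # optimize on the log scale
    a, b = np.exp(log_a), np.exp(log_b)
    return -np.sum(betabinom.logpmf(y, n, a, b))

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print("alpha =", a_hat, "beta =", b_hat)
print("mean response probability =", a_hat / (a_hat + b_hat))
```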
Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.
McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J
2018-04-01
Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
Donato, David I.
2012-01-01
This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
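The report's Newton-Raphson scheme is specific to the NDMMF; the generic pattern it follows, repeatedly solving the score equations with the information matrix, is sketched below for a simple Poisson-regression log-likelihood with made-up data rather than the fish-tissue model itself.

```python
import numpy as np

# Generic Newton-Raphson maximum-likelihood iteration, illustrated with a
# Poisson regression: log L(beta) = sum(y * (X @ beta) - exp(X @ beta)) + const.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
beta_true = np.array([0.5, -0.3])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)                  # gradient of the log-likelihood
    info = X.T @ (mu[:, None] * X)          # information matrix
    step = np.linalg.solve(info, score)     # Newton-Raphson update
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:        # converge on step size
        break

print("MLE:", beta)
```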
Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference
1990-11-01
Technical Report 1373 (November 1990), by C. V. Tran. The report compares the Cramer-Rao bound with MUSIC and maximum likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation, with numerical examples for two equipowered signals impinging on a 5-element uniform linear array (ULA) at various source correlations (e.g., |p| = 0.50 and |p| = 1.00) and SNR = 20 dB.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
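The comparison of small numbers of detected pulsars with simulated numbers in bins relies on Poisson statistics; a minimal version of that binned Poisson log-likelihood, with invented counts and model predictions, is sketched below.

```python
import numpy as np
from scipy.stats import poisson

def binned_poisson_loglike(observed_counts, predicted_counts):
    """Log-likelihood of observed bin counts given model-predicted rates,
    appropriate when the counts per bin are small."""
    predicted = np.clip(predicted_counts, 1e-12, None)  # guard against zeros
    return np.sum(poisson.logpmf(observed_counts, predicted))

# Invented example: detected pulsars per period bin vs. two candidate models.
observed = np.array([0, 2, 5, 9, 4, 1])
model_a = np.array([0.5, 2.1, 5.5, 8.0, 4.5, 1.2])
model_b = np.array([1.5, 3.0, 4.0, 6.0, 6.0, 3.0])
print(binned_poisson_loglike(observed, model_a),
      binned_poisson_loglike(observed, model_b))
```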
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate theoretical and practical challenges of multimodel analysis, we use an example about microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon, the soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by the following three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah that is characterized by pulsed precipitation. These five models evolve step-wise such that the first model includes none of the three mechanisms, while the fifth model includes all three. The basic component of Bayesian multimodel analysis is the estimation of the marginal likelihood to rank the candidate models based on their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between these five candidate models. The second part discusses some theoretical and practical challenges, which are mainly the effects of likelihood function selection and the marginal likelihood estimation methods on both model ranking and Bayesian model averaging. The study shows that making valid inferences from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.
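Once each candidate model's marginal likelihood (or an approximation to it) is available, ranking and Bayesian model averaging reduce to simple arithmetic. The sketch below shows that step with invented log marginal likelihoods for five models and equal prior model probabilities; it does not address the harder problem of estimating the marginal likelihoods themselves.

```python
import numpy as np
from scipy.special import logsumexp

# Invented log marginal likelihoods for five candidate soil-respiration models.
log_ml = np.array([-210.4, -205.9, -204.1, -203.8, -207.2])
log_prior = np.log(np.full(5, 1.0 / 5.0))      # equal prior model probabilities

# Posterior model probabilities, i.e. Bayesian model averaging weights.
log_post = log_prior + log_ml
weights = np.exp(log_post - logsumexp(log_post))
for i, w in enumerate(weights, start=1):
    print(f"model {i}: posterior probability {w:.3f}")
```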
Efficient simulation and likelihood methods for non-neutral multi-allele models.
Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge
2012-06-01
Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as an auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
Brown, Joshua W.
2009-01-01
The error likelihood computational model of anterior cingulate cortex (ACC) (Brown & Braver, 2005) has successfully predicted error likelihood effects, risk prediction effects, and how individual differences in conflict and error likelihood effects vary with trait differences in risk aversion. The same computational model now makes a further prediction that apparent conflict effects in ACC may result in part from an increasing number of simultaneously active responses, regardless of whether or not the cued responses are mutually incompatible. In Experiment 1, the model prediction was tested with a modification of the Eriksen flanker task, in which some task conditions require two otherwise mutually incompatible responses to be generated simultaneously. In that case, the two response processes are no longer in conflict with each other. The results showed small but significant medial PFC effects in the incongruent vs. congruent contrast, despite the absence of response conflict, consistent with model predictions. This is the multiple response effect. Nonetheless, actual response conflict led to greater ACC activation, suggesting that conflict effects are specific to particular task contexts. In Experiment 2, results from a change signal task suggested that the context dependence of conflict signals does not depend on error likelihood effects. Instead, inputs to ACC may reflect complex and task specific representations of motor acts, such as bimanual responses. Overall, the results suggest the existence of a richer set of motor signals monitored by medial PFC and are consistent with distinct effects of multiple responses, conflict, and error likelihood in medial PFC. PMID:19375509
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters, a primary dataset was generated, serving as the reference for this study and, by construction, describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spread in the data was obtained and compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three methods: the covariance matrix, the jackknife method, and direct evaluation of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
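One of the approximate approaches examined here, propagation of errors with the covariance matrix (the delta method), is easy to state in code. The sketch below propagates a parameter covariance through a generic sigmoid NTCP-like curve by finite-difference gradients; the curve, parameter values, and covariance matrix are invented, not those of the study.

```python
import numpy as np

def delta_method_ci(predict, params, cov, z=1.96, eps=1e-6):
    """Approximate confidence interval for a model prediction obtained by
    propagating the parameter covariance matrix through the prediction."""
    p = np.asarray(params, dtype=float)
    f0 = predict(p)
    grad = np.array([(predict(p + eps * np.eye(len(p))[i]) - f0) / eps
                     for i in range(len(p))])      # finite-difference gradient
    half = z * np.sqrt(grad @ cov @ grad)
    return f0 - half, f0 + half

# Invented example: a logistic NTCP curve in dose D with parameters D50 and
# slope k, evaluated at D = 60 Gy.
ntcp = lambda theta, D=60.0: 1.0 / (1.0 + np.exp(-theta[1] * (D - theta[0])))
params = np.array([55.0, 0.15])                    # D50 [Gy], k [1/Gy]
cov = np.array([[4.0, 0.01], [0.01, 0.0009]])      # hypothetical fit covariance
print(delta_method_ci(lambda th: ntcp(th), params, cov))
```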
How Stuttering Develops: The Multifactorial Dynamic Pathways Theory
Weber, Christine
2017-01-01
Purpose We advanced a multifactorial, dynamic account of the complex, nonlinear interactions of motor, linguistic, and emotional factors contributing to the development of stuttering. Our purpose here is to update our account as the multifactorial dynamic pathways theory. Method We review evidence related to how stuttering develops, including genetic/epigenetic factors; motor, linguistic, and emotional features; and advances in neuroimaging studies. We update evidence for our earlier claim: Although stuttering ultimately reflects impairment in speech sensorimotor processes, its course over the life span is strongly conditioned by linguistic and emotional factors. Results Our current account places primary emphasis on the dynamic developmental context in which stuttering emerges and follows its course during the preschool years. Rapid changes in many neurobehavioral systems are ongoing, and critical interactions among these systems likely play a major role in determining persistence of or recovery from stuttering. Conclusion Stuttering, or childhood onset fluency disorder (Diagnostic and Statistical Manual of Mental Disorders, 5th edition; American Psychiatric Association [APA], 2013), is a neurodevelopmental disorder that begins when neural networks supporting speech, language, and emotional functions are rapidly developing. The multifactorial dynamic pathways theory motivates experimental and clinical work to determine the specific factors that contribute to each child's pathway to the diagnosis of stuttering and those most likely to promote recovery. PMID:28837728
How Stuttering Develops: The Multifactorial Dynamic Pathways Theory.
Smith, Anne; Weber, Christine
2017-09-18
We advanced a multifactorial, dynamic account of the complex, nonlinear interactions of motor, linguistic, and emotional factors contributing to the development of stuttering. Our purpose here is to update our account as the multifactorial dynamic pathways theory. We review evidence related to how stuttering develops, including genetic/epigenetic factors; motor, linguistic, and emotional features; and advances in neuroimaging studies. We update evidence for our earlier claim: Although stuttering ultimately reflects impairment in speech sensorimotor processes, its course over the life span is strongly conditioned by linguistic and emotional factors. Our current account places primary emphasis on the dynamic developmental context in which stuttering emerges and follows its course during the preschool years. Rapid changes in many neurobehavioral systems are ongoing, and critical interactions among these systems likely play a major role in determining persistence of or recovery from stuttering. Stuttering, or childhood onset fluency disorder (Diagnostic and Statistical Manual of Mental Disorders, 5th edition; American Psychiatric Association [APA], 2013), is a neurodevelopmental disorder that begins when neural networks supporting speech, language, and emotional functions are rapidly developing. The multifactorial dynamic pathways theory motivates experimental and clinical work to determine the specific factors that contribute to each child's pathway to the diagnosis of stuttering and those most likely to promote recovery.
Lihavainen, Katri; Sipilä, Sarianna; Rantanen, Taina; Seppänen, Jarmo; Lavikainen, Piia; Sulkava, Raimo; Hartikainen, Sirpa
2012-08-01
We studied the effects of comprehensive geriatric assessment and multifactorial intervention on physical performance among older people. In a 3-year geriatric development project with an experimental design, 668 participants aged 75-98 were assigned to intervention (n=348) or control (n=320) groups. The intervention group received comprehensive geriatric assessment with an individually targeted intervention for 2 years. The outcome measures - performance in the Timed Up-and-Go (TUG), 10-meter walking and Berg Balance Scale tests - were gathered annually during the intervention and the 1-year follow-up after it. With linear mixed models, over the 2-year intervention period, the intervention group was found to be improved in the balance (p<0.001) and walking speed (p<0.001) tests, and maintained performance in the TUG test (p<0.001), compared with the control group. The results remained significant 1 year post-intervention. Comprehensive geriatric assessment and individually targeted multifactorial intervention had positive effects on physical performance, potentially helping to maintain mobility and prevent disability in old age.
A Combination Therapy of JO-1 and Chemotherapy in Ovarian Cancer Models
2014-12-01
Report excerpts note that the observed stress response was mild and resolved quickly, that other minor histologic changes were unlikely to have been clinically significant, and that hematology data showed leukocytosis (WBC count of 25,440 on July 24th) to which stress may have contributed; the cause is interpreted as multifactorial, with potential factors including nutrition, stress, or possibly food intolerances or allergies.
Boden, Lauren M; Boden, Stephanie A; Premkumar, Ajay; Gottschalk, Michael B; Boden, Scott D
2018-02-09
Retrospective analysis of prospectively collected data. To create a data-driven triage system stratifying patients by likelihood of undergoing spinal surgery within one year of presentation. Low back pain (LBP) and radicular lower extremity (LE) symptoms are common musculoskeletal problems. There is currently no standard data-derived triage process based on information that can be obtained prior to the initial physician-patient encounter to direct patients to the optimal physician type. We analyzed patient-reported data from 8006 patients with a chief complaint of LBP and/or LE radicular symptoms who presented to surgeons at a large multidisciplinary spine center between September 1, 2005 and June 30, 2016. Univariate and multivariate analysis identified independent risk factors for undergoing spinal surgery within one year of initial visit. A model incorporating these risk factors was created using a random sample of 80% of the total patients in our cohort, and validated on the remaining 20%. The baseline one-year surgery rate within our cohort was 39% for all patients and 42% for patients with LE symptoms. Those identified as high likelihood by the center's existing triage process had a surgery rate of 45%. The new triage scoring system proposed in this study was able to identify a high likelihood group in which 58% underwent surgery, which is a 46% higher surgery rate than in non-triaged patients and a 29% improvement from our institution's existing triage system. The data-driven triage model and scoring system derived and validated in this study (Spine Surgery Likelihood model [SSL-11]) significantly improved existing processes in predicting the likelihood of undergoing spinal surgery within one year of initial presentation. This triage system will allow centers to more selectively screen for surgical candidates and more effectively direct patients to surgeons or non-operative spine specialists. Level of evidence: 4.
Pathophysiology and classification of primary graft dysfunction after lung transplantation
Morrison, Morvern Isabel; Pither, Thomas Leonard
2017-01-01
The term primary graft dysfunction (PGD) incorporates a continuum of disease severity from moderate to severe acute lung injury (ALI) within 72 h of lung transplantation. It represents the most significant obstacle to achieving good early post-transplant outcomes, but is also associated with increased incidence of bronchiolitis obliterans syndrome (BOS) subsequently. PGD is characterised histologically by diffuse alveolar damage, but is graded on clinical grounds with a combination of PaO2/FiO2 (P/F) and the presence of radiographic infiltrates, with 0 being absence of disease and 3 being severe PGD. The aetiology is multifactorial but commonly results from severe ischaemia-reperfusion injury (IRI), with tissue-resident macrophages largely responsible for stimulating a secondary ‘wave’ of neutrophils and lymphocytes that produce severe and widespread tissue damage. Donor history, recipient health and operative factors may all potentially contribute to the likelihood of PGD development. Work that aims to minimise the incidence of PGD is ongoing, with techniques such as ex vivo perfusion of donor lungs showing promise both in research and in clinical studies. This review will summarise the current clinical status of PGD before going on to discuss its pathophysiology, current therapies available and future directions for clinical management of PGD. PMID:29268419
Risk and protective factors for spasmodic dysphonia: a case-control investigation.
Tanner, Kristine; Roy, Nelson; Merrill, Ray M; Kimber, Kamille; Sauder, Cara; Houtz, Daniel R; Doman, Darrin; Smith, Marshall E
2011-01-01
Spasmodic dysphonia (SD) is a chronic, incurable, and often disabling voice disorder of unknown pathogenesis. The purpose of this study was to identify possible endogenous and exogenous risk and protective factors uniquely associated with SD. Prospective, exploratory, case-control investigation. One hundred fifty patients with SD and 150 medical controls (MCs) were interviewed regarding their personal and family histories, environmental exposures, illnesses, injuries, voice use patterns, and general health using a previously vetted and validated epidemiologic questionnaire. Odds ratios and multiple logistic regression analyses (α<0.15) identified several factors that significantly increased the likelihood of having SD. These factors included (1) a personal history of mumps, blepharospasm, tremor, intense occupational and avocational voice use, and a family history of voice disorders; (2) an immediate family history of meningitis, tremor, tics, cancer, and compulsive behaviors; and (3) an extended family history of tremor and cancer. SD is likely multifactorial in etiology, involving both genetic and environmental factors. Viral infections/exposures, along with intense voice use, may trigger the onset of SD in genetically predisposed individuals. Future studies should examine the interaction among genetic and environmental factors to determine the pathogenesis of SD. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook is re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.
Testing students’ e-learning via Facebook through Bayesian structural equation modeling
Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook is re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019
Ring, Christopher; Kavussanu, Maria
2018-03-01
Given the concern over doping in sport, researchers have begun to explore the role played by self-regulatory processes in the decision whether to use banned performance-enhancing substances. Grounded in Bandura's (1991) theory of moral thought and action, this study examined the role of self-regulatory efficacy, moral disengagement and anticipated guilt on the likelihood of using a banned substance among college athletes. Doping self-regulatory efficacy was associated with doping likelihood both directly (b = -.16, P < .001) and indirectly (b = -.29, P < .001) through doping moral disengagement. Moral disengagement also contributed directly to higher doping likelihood and lower anticipated guilt about doping, which was associated with higher doping likelihood. Overall, the present findings provide evidence to support a model of doping based on Bandura's social cognitive theory of moral thought and action, in which self-regulatory efficacy influences the likelihood to use banned performance-enhancing substances both directly and indirectly via moral disengagement.
Stem Cell Technology for (Epi)genetic Brain Disorders.
Riemens, Renzo J M; Soares, Edilene S; Esteller, Manel; Delgado-Morales, Raul
2017-01-01
Despite the enormous efforts of the scientific community over the years, effective therapeutics for many (epi)genetic brain disorders remain unidentified. The common and persistent failures to translate preclinical findings into clinical success are partially attributed to the limited efficiency of current disease models. Although animal and cellular models have substantially improved our knowledge of the pathological processes involved in these disorders, human brain research has generally been hampered by a lack of satisfactory humanized model systems. This, together with our incomplete knowledge of the multifactorial causes in the majority of these disorders, as well as the lack of a thorough understanding of associated (epi)genetic alterations, has been impeding progress in gaining more mechanistic insights from translational studies. Over the last years, however, stem cell technology has been offering an alternative approach to study and treat human brain disorders. Owing to this technology, we are now able to obtain a theoretically inexhaustible source of human neural cells and precursors in vitro that offer a platform for disease modeling and the establishment of therapeutic interventions. In addition to the potential to increase our general understanding of how (epi)genetic alterations contribute to the pathology of brain disorders, stem cells and derivatives allow for high-throughput drug and toxicity testing, and provide a cell source for transplant therapies in regenerative medicine. In the current chapter, we will demonstrate the validity of human stem cell-based models and address the utility of other stem cell-based applications for several human brain disorders with multifactorial and (epi)genetic bases, including Parkinson's disease (PD), Alzheimer's disease (AD), fragile X syndrome (FXS), Angelman syndrome (AS), Prader-Willi syndrome (PWS), and Rett syndrome (RTT).
Synthesizing Regression Results: A Factored Likelihood Method
ERIC Educational Resources Information Center
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
NASA Technical Reports Server (NTRS)
Switzer, Eric Ryan; Watts, Duncan J.
2016-01-01
The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances where seismicity rates are high, causing locally increased seismicity rates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances where seismicity rates are high, causing locally increased seismicity rates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
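A bare-bones version of the adaptive kernel idea described above, in which each epicenter gets a Gaussian smoothing distance equal to its distance to the nth nearest neighboring event, can be sketched as follows. It uses planar coordinates and invented epicenters; the actual ASHM implementation differs in many details.

```python
import numpy as np

def adaptive_smoothed_rate(epicenters, grid_points, n_neighbor=5, d_min=5.0):
    """Smoothed seismicity rate at grid_points from epicenters (both in km,
    planar approximation). Each event's Gaussian bandwidth is its distance
    to the n-th nearest neighboring event, floored at d_min."""
    eq = np.asarray(epicenters, dtype=float)
    gp = np.asarray(grid_points, dtype=float)
    # Pairwise distances between events set the per-event bandwidths.
    d_events = np.linalg.norm(eq[:, None, :] - eq[None, :, :], axis=-1)
    d_events.sort(axis=1)                               # column 0 is self (0 km)
    bandwidth = np.maximum(d_events[:, n_neighbor], d_min)
    # Sum the per-event 2-D Gaussian kernels at each grid point.
    d_grid = np.linalg.norm(gp[:, None, :] - eq[None, :, :], axis=-1)
    kernels = np.exp(-0.5 * (d_grid / bandwidth) ** 2) / (2 * np.pi * bandwidth ** 2)
    return kernels.sum(axis=1)

rng = np.random.default_rng(0)
events = rng.uniform(0, 500, size=(200, 2))             # invented epicenters (km)
grid = np.array([[100.0, 100.0], [250.0, 250.0], [400.0, 100.0]])
print(adaptive_smoothed_rate(events, grid))
```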
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
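The basic profile-likelihood computation underlying this kind of analysis, scanning one parameter over a grid while re-optimizing the remaining parameters at each grid value, can be sketched generically as below. The exponential-decay model and data are invented, not the D. salina fluorescence model, and the PLS index itself is not computed here.

```python
import numpy as np
from scipy.optimize import minimize

# Invented data from y = a * exp(-k * t) + noise; we profile the rate k.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 40)
y = 2.0 * np.exp(-0.4 * t) + rng.normal(scale=0.1, size=t.size)

def nll(params):
    a, k, log_sigma = params
    resid = y - a * np.exp(-k * t)
    return 0.5 * np.sum((resid / np.exp(log_sigma)) ** 2) + t.size * log_sigma

def profile_nll(k_fixed):
    """Minimize the negative log-likelihood over the nuisance parameters
    (a, log_sigma) with the parameter of interest k held fixed."""
    obj = lambda q: nll([q[0], k_fixed, q[1]])
    return minimize(obj, x0=[1.0, -2.0], method="Nelder-Mead").fun

k_grid = np.linspace(0.3, 0.5, 21)
profile = np.array([profile_nll(k) for k in k_grid])
# Grid points within 1.92 of the minimum lie inside the ~95% profile interval.
inside = k_grid[profile - profile.min() <= 1.92]
print("95% profile interval for k: [%.3f, %.3f]" % (inside.min(), inside.max()))
```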
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
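In the simplest one-dimensional setting, the idea can be sketched by expanding the likelihood in Legendre polynomials over the prior's support; with a uniform reference density the evidence and posterior moments then follow from the expansion coefficients. The example below is a toy illustration of this, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy likelihood for a single parameter x with a uniform prior on [-1, 1].
loglike = lambda x: -0.5 * ((x - 0.3) / 0.2) ** 2
x_nodes = np.linspace(-1, 1, 201)
L = np.exp(loglike(x_nodes))

# Spectral expansion: least-squares fit of the likelihood in Legendre polynomials.
coef = legendre.legfit(x_nodes, L, deg=20)

# With a uniform reference density of 1/2 on [-1, 1]:
#   evidence Z = (1/2) * integral(L dx) = c_0, since integral(P_0) = 2
#   posterior mean = (1/2) * integral(x * L dx) / Z = c_1 / (3 * c_0)
evidence = coef[0]
post_mean = coef[1] / (3.0 * coef[0])
print("evidence ~", evidence, " posterior mean ~", post_mean)

# Sanity check with a simple quadrature on the same grid.
print("evidence (quadrature) ~", L.mean(),
      " mean (quadrature) ~", np.sum(x_nodes * L) / np.sum(L))
```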
NASA Astrophysics Data System (ADS)
Harris, Adam
2014-05-01
The Intergovernmental Panel on Climate Change (IPCC) prescribes that risk and uncertainty information pertaining to scientific reports, model predictions, etc. be communicated with a set of 7 likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
ERIC Educational Resources Information Center
Kelderman, Henk
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
Hybrid pairwise likelihood analysis of animal behavior experiments.
Cattelan, Manuela; Varin, Cristiano
2013-12-01
The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.
ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM
Wu, Yichao
2012-01-01
For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
Negotiating Multicollinearity with Spike-and-Slab Priors
Ročková, Veronika
2014-01-01
In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout. PMID:25419004
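A minimal sketch of the E-step shared by EMVS-type algorithms is given below: given current coefficient estimates, it computes the posterior probability that each coefficient was generated by the slab rather than the spike. The variances and prior inclusion probability are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def slab_inclusion_prob(beta, v_spike=0.01, v_slab=10.0, theta=0.5):
    """E-step of a spike-and-slab EM: probability that each coefficient was
    drawn from the slab (variance v_slab) rather than the spike (variance
    v_spike), given the current coefficient estimates and prior weight theta."""
    slab = theta * norm.pdf(beta, scale=np.sqrt(v_slab))
    spike = (1 - theta) * norm.pdf(beta, scale=np.sqrt(v_spike))
    return slab / (slab + spike)

print(slab_inclusion_prob(np.array([0.02, 0.5, 2.0])))
```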
The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data
NASA Technical Reports Server (NTRS)
Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.;
2013-01-01
The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
Two stochastic models useful in petroleum exploration
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1972-01-01
A model of the petroleum exploration process is proposed that empirically tests the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of areal extent, geographic location, and shape of oil deposits.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
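To make the idea of periodic parameters concrete, the sketch below maximizes a conditional Gaussian likelihood for a periodic AR(1) model, a simple special case of the PARMA class. The data are synthetic, and the likelihood approximation and maximization algorithm of the paper itself are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
period = 12
true_phi = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(period) / period)  # seasonal AR coefficients
x = np.zeros(240)
for t in range(1, len(x)):
    x[t] = true_phi[t % period] * x[t - 1] + rng.normal()

def neg_log_lik(params, x, period):
    """Conditional Gaussian likelihood of a periodic AR(1):
    x_t = phi[season(t)] * x_{t-1} + sigma[season(t)] * eps_t."""
    phi, log_sigma = params[:period], params[period:]
    seasons = np.arange(1, len(x)) % period
    resid = x[1:] - phi[seasons] * x[:-1]
    sigma = np.exp(log_sigma[seasons])
    return 0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (resid / sigma) ** 2)

fit = minimize(neg_log_lik, np.zeros(2 * period), args=(x, period), method="L-BFGS-B")
print(np.round(fit.x[:period], 2))   # estimated seasonal AR coefficients
```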
A maximum likelihood convolutional decoder model vs experimental data comparison
NASA Technical Reports Server (NTRS)
Chen, R. Y.
1979-01-01
This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree well with the experimental measurements. An optimal modulation index can also be found through the TAP.
Individual, team, and coach predictors of players' likelihood to aggress in youth soccer.
Chow, Graig M; Murray, Kristen E; Feltz, Deborah L
2009-08-01
The purpose of this study was to examine personal and socioenvironmental factors of players' likelihood to aggress. Participants were youth soccer players (N = 258) and their coaches (N = 23) from high school and club teams. Players completed the Judgments About Moral Behavior in Youth Sports Questionnaire (JAMBYSQ; Stephens, Bredemeier, & Shields, 1997), which assessed athletes' stage of moral development, team norm for aggression, and self-described likelihood to aggress against an opponent. Coaches were administered the Coaching Efficacy Scale (CES; Feltz, Chase, Moritz, & Sullivan, 1999). Using multilevel modeling, results demonstrated that the team norm for aggression at the athlete and team level were significant predictors of athletes' self-described likelihood to aggress scores. Further, coaches' game strategy efficacy emerged as a positive predictor of their players' self-described likelihood to aggress. The findings contribute to previous research examining the socioenvironmental predictors of athletic aggression in youth sport by demonstrating the importance of coaching efficacy beliefs.
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, a property often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.
Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R
2018-05-26
Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
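The weighting idea can be sketched for a constant-parameter site-occupancy model: each site's log-likelihood contribution is multiplied by the inverse of its selection probability before maximization. The detection-history likelihood, weights, and data below are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_weighted_pseudo_loglik(params, detections, n_visits, weights):
    """Each site's occupancy-model log-likelihood contribution is multiplied
    by a design weight (inverse selection probability)."""
    psi, p = expit(params)                # occupancy and detection probabilities
    d = detections
    lik_occupied = psi * p ** d * (1 - p) ** (n_visits - d)
    lik_never_detected = (1 - psi) * (d == 0)
    return -np.sum(weights * np.log(lik_occupied + lik_never_detected))

rng = np.random.default_rng(2)
n_sites, n_visits = 200, 4
occupied = rng.random(n_sites) < 0.4                                # true occupancy states
detections = rng.binomial(n_visits, 0.3, size=n_sites) * occupied   # detection counts per site
weights = 1.0 / rng.uniform(0.2, 1.0, size=n_sites)                 # illustrative inverse selection probabilities

fit = minimize(neg_weighted_pseudo_loglik, [0.0, 0.0],
               args=(detections, n_visits, weights), method="Nelder-Mead")
print("psi, p estimates:", np.round(expit(fit.x), 2))
```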
Anstey, Kaarin J; Horswill, Mark S; Wood, Joanne M; Hatherly, Christopher
2012-03-01
The current study evaluated part of the Multifactorial Model of Driving Safety to elucidate the relative importance of cognitive function and a limited range of standard measures of visual function in relation to the Capacity to Drive Safely. Capacity to Drive Safely was operationalized using three validated screening measures for older drivers. These included an adaptation of the well validated Useful Field of View (UFOV) and two newer measures, namely a Hazard Perception Test (HPT), and a Hazard Change Detection Task (HCDT). Community dwelling drivers (n=297) aged 65-96 were assessed using a battery of measures of cognitive and visual function. Factor analysis of these predictor variables yielded factors including Executive/Speed, Vision (measured by visual acuity and contrast sensitivity), Spatial, Visual Closure, and Working Memory. Cognitive and Vision factors explained 83-95% of age-related variance in the Capacity to Drive Safely. Spatial and Working Memory were associated with UFOV, HPT and HCDT, Executive/Speed was associated with UFOV and HCDT and Vision was associated with HPT. The Capacity to Drive Safely declines with chronological age, and this decline is associated with age-related declines in several higher order cognitive abilities involving manipulation and storage of visuospatial information under speeded conditions. There are also age-independent effects of cognitive function and vision that determine driving safety. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sex ratio selection and multi-factorial sex determination in the housefly: a dynamic model.
Kozielska, M; Pen, I; Beukeboom, L W; Weissing, F J
2006-05-01
Sex determining (SD) mechanisms are highly variable between different taxonomic groups and appear to change relatively quickly during evolution. Sex ratio selection could be a dominant force causing such changes. We investigate theoretically the effect of sex ratio selection on the dynamics of a multi-factorial SD system. The system considered resembles the naturally occurring three-locus system of the housefly, which allows for male heterogamety, female heterogamety and a variety of other mechanisms. Sex ratio selection is modelled by assuming cost differences in the production of sons and daughters, a scenario leading to a strong sex ratio bias in the absence of constraints imposed by the mechanism of sex determination. We show that, despite the presumed flexibility of the SD system considered, equilibrium sex ratios never deviate strongly from 1 : 1. Even if daughters are very costly, a male-biased sex ratio can never evolve. If sons are more costly, the sex ratio can be slightly female biased, but even in the case of large cost differences the bias is very small (<10% from 1 : 1). Sex ratio selection can lead to a shift in the SD mechanism, but cannot be the sole cause of complete switches from one SD system to another. In fact, more than one locus remains polymorphic at equilibrium. We discuss our results in the context of evolution of the variable SD mechanism found in natural housefly populations.
On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro
2005-01-01
Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Likelihood-Ratio DIF Testing: Effects of Nonnormality
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…
A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.
ERIC Educational Resources Information Center
Mayberry, Paul W.
A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Chandran, Avinash; Barron, Mary J; Westerman, Beverly J; DiPietro, Loretta
2017-10-25
While head injuries and concussions are major concerns among soccer players, the multifactorial nature of head injury observations in this group remains relatively undefined. We aim to extend previous analyses and examine sex-differences in the incidence of head injuries, odds of head injuries within an injured sample, and severity of head injuries, among collegiate soccer players between 2004 and 2009. Data collected within the National Collegiate Athletic Association (NCAA) Injury Surveillance System (ISS) between the years of 2004 and 2009 were analyzed in this study. Unadjusted rate ratios (RR) compared incidence rates between categories of sex, injury mechanism, setting and competition level. We also examined sex-differences in head injury incidence rates across categories of the other covariates. Multivariable logistic regression and negative binomial regression modeling tested the relation between sex and head injury corollaries, while controlling for contact, setting, and competition level. Between 2004 and 2009, head injuries accounted for approximately 11% of all soccer-related injuries reported within the NCAA-ISS. The rate of head injuries among women was higher than among men (RR = 1.23, 95% CI = [1.08, 1.41]). The rate of head injuries due to player-to-player contact was comparable between women and men (RR = 0.95, 95% CI = [0.81, 1.11]). In contrast, the rate of injury due to contact with apparatus (ball/goal) was nearly 2.5 times higher (RR = 2.46, 95% CI = [1.76, 3.44]) and the rate due to contact with a playing surface was over two times higher (RR = 2.29, 95% CI = [1.34, 3.91]) in women than in men. In our multifactorial models, we also observed that the association between sex and head injury corollaries varied by injury mechanism. Sex-differences in the incidence, odds (given an injury), and severity (concussion diagnosis, time-loss) of head injuries varied by injury mechanism (player-to-player contact vs. all other mechanisms) in this sample.
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
Validation of DNA-based identification software by computation of pedigree likelihood ratios.
Slooten, K
2011-08-01
Disaster victim identification (DVI) can be aided by DNA-evidence, by comparing the DNA-profiles of unidentified individuals with those of surviving relatives. The DNA-evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check if software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Effects of a two-school-year multifactorial back education program in elementary schoolchildren.
Geldhof, Elisabeth; Cardon, Greet; De Bourdeaudhuij, Ilse; De Clercq, Dirk
2006-08-01
A quasi-experimental pre/post design. To investigate effects of a 2-school-year multifactorial back education program on back posture knowledge and postural behavior in elementary schoolchildren. Additionally, self-reported back or neck pain and fear-avoidance beliefs were evaluated. Epidemiologic studies report mounting nonspecific back pain prevalence among youngsters, characterized by multifactorial risk factors. Study findings of school-based interventions are promising. Furthermore, biomechanical discomfort is found in the school environment. The study sample included 193 intervention children and 172 controls (baseline, 9-to-11-year-olds). The multifactorial intervention consisted of a back education program and the stimulation of postural dynamism in the class through support and environmental changes. Evaluation consisted of a questionnaire, an observation of postural behavior in the classroom, and an observation of material handling during a movement session. The intervention resulted in increased back posture knowledge (P < 0.001), improved postural behavior during material handling (P < 0.001), and decreased duration of trunk flexion (P < 0.05) and neck torsion (P < 0.05) during lesson time. The intervention did not change fear-avoidance beliefs. There was a trend for decreased pain reports in boys of the intervention group (P < 0.09). The intervention resulted in improved postural aspects related to spinal loading. The long-term effect of improved postural behavior at young age on back pain prevalence later in life is of interest for future research.
Castillo-Cadena, Julieta; Mejia-Sanchez, Fernando; López-Arriaga, Jerónimo Amado
2017-03-01
Birth defects are the number one cause of child mortality worldwide, and in 2010 they were the second cause in Mexico. Congenital malformations are a public health issue, because they cause infant mortality, chronic disease and disability. The origin can be genetic, environmental or unknown. Among environmental contaminants, pesticides stand out. In this study, we determine the frequency and etiology of congenital malformations in newborns (NBs) of a floricultural community and we compare it with that of an urban community. For 18 months, the NBs were monitored at the Tenancingo General Hospital and the Mother and Child Gynecology and Obstetrics Hospital (IMIEM) in Toluca. The identification of these malformations was carried out in accordance with the WHO. In Tenancingo, 1149 NBs were examined, of which 20% had some kind of congenital malformation, while in the IMIEM, 5069 were reviewed and 6% had some malformation. According to the etiology, in Tenancingo, 69% were multifactorial, 28% were monogenetic and 2% were chromosomal. In the IMIEM, 47% were multifactorial, 18.3% were monogenetic and 2.8% were chromosomal. There was a significant difference between the two institutions in both the global frequency of malformations and the proportion with multifactorial etiology. Our results show that congenital malformations in the NBs occurred more frequently in the floricultural zone and that, because the percentage of multifactorial etiology is higher, it is likely there is an association with exposure to pesticides.
Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model
NASA Astrophysics Data System (ADS)
Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel
2011-03-01
This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is presented, and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
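A minimal ABC rejection sketch conveys the core idea: draw parameters from the prior, run them through a forward model, and keep draws whose simulated summary statistic lies close to the observed one. The forward model, summary statistic and tolerance below are toy stand-ins; the paper itself uses an HOD forward model, richer summary statistics and population Monte Carlo importance sampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_model(theta, n=500):
    """Toy stand-in for a generative forward model (e.g. an HOD mock)."""
    return rng.poisson(theta, size=n)

def summary(data):
    return data.mean()                        # toy summary statistic

observed = forward_model(4.0)
obs_summary = summary(observed)

# ABC rejection: keep prior draws whose simulated summary is close to the data.
accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)            # draw from the prior
    if abs(summary(forward_model(theta)) - obs_summary) < 0.1:
        accepted.append(theta)

print("approximate posterior mean:", np.mean(accepted))
```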
Bayesian inference for OPC modeling
NASA Astrophysics Data System (ADS)
Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.
2016-03-01
The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI) to reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
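The affine invariant ensemble sampler described is the algorithm implemented by, for example, the emcee package; the sketch below runs it on a toy Student's-t likelihood with flat priors. The lithographic model, priors and data of the paper are not reproduced, and emcee is named only as one readily available AIES implementation.

```python
import numpy as np
import emcee
from scipy.stats import t as student_t

rng = np.random.default_rng(4)
data = 2.0 + rng.standard_t(df=4, size=200)        # toy observations

def log_prob(params):
    mu, log_scale = params
    scale = np.exp(log_scale)
    # Flat priors on mu and log(scale); Student's-t likelihood for the data.
    return np.sum(student_t.logpdf(data, df=4, loc=mu, scale=scale))

n_walkers, n_dim = 16, 2
p0 = rng.normal(scale=0.1, size=(n_walkers, n_dim))  # walkers start near the origin
sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior mean of mu:", samples[:, 0].mean())
```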
OʼHara, Susan
2014-01-01
Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.
Murine Models of Systemic Lupus Erythematosus
Perry, Daniel; Sang, Allison; Yin, Yiming; Zheng, Ying-Yi; Morel, Laurence
2011-01-01
Systemic lupus erythematosus (SLE) is a multifactorial autoimmune disorder. The study of diverse mouse models of lupus has provided clues to the etiology of SLE. Spontaneous mouse models of lupus have led to identification of numerous susceptibility loci from which several candidate genes have emerged. Meanwhile, induced models of lupus have provided insight into the role of environmental factors in lupus pathogenesis as well as provided a better understanding of cellular mechanisms involved in the onset and progression of disease. The SLE-like phenotypes present in these models have also served to screen numerous potential SLE therapies. Due to the complex nature of SLE, it is necessary to understand the effect specific targeted therapies have on immune homeostasis. Furthermore, knowledge gained from mouse models will provide novel therapy targets for the treatment of SLE. PMID:21403825
Interpersonal distance modeling during fighting activities.
Dietrich, Gilles; Bredin, Jonathan; Kerlirzin, Yves
2010-10-01
The aim of this article is to elaborate a general framework for modeling dual opposition activities, or more generally, dual interaction. The main hypothesis is that opposition behavior can be measured directly from a global variable and that the relative distance between the two subjects can be this parameter. Moreover, this parameter should be considered as a multidimensional parameter depending not only on the dynamics of the subjects but also on the "internal" parameters of the subjects, such as sociological and/or emotional states. Standard and simple mechanical formalization will be used to model this multifactorial distance. To illustrate such a general modeling methodology, this model was compared with actual data from an opposition activity like Japanese fencing (kendo). This model captures not only coupled coordination, but more generally interaction in two-subject activities.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
What are hierarchical models and how do we analyze them?
Royle, Andy
2016-01-01
In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
Assessing first-stage labor progression and its relationship to complications.
Hamilton, Emily F; Warrick, Philip A; Collins, Kathleen; Smith, Samuel; Garite, Thomas J
2016-03-01
New labor curves have challenged the traditional understanding of the general pattern of dilation and descent in labor. They also revealed wide variation in the time to advance in dilation. An interval of arrest such as 4 hours did not fall beyond normal limits until dilation had reached 6 cm. Thus, the American College of Obstetricians and Gynecologists/Society for Maternal-Fetal Medicine first-stage arrest criteria, based in part on these findings, are applicable only in late labor. The wide range of time to dilate is unavoidable because cervical dilation has neither a precise nor direct relationship to time. Newer statistical techniques (multifactorial models) can improve precision by incorporating several factors that are related directly to labor progress. At each examination, the calculations adapt to the mother's current labor conditions. They produce a quantitative assessment that is expressed in percentiles. Low percentiles indicate potentially problematic labor progression. The purpose of this study was to assess the relationship between first-stage labor progress- and labor-related complications with the use of 2 different assessment methods. The first method was based on arrest of dilation definitions. The other method used percentile rankings of dilation or station based on adaptive multifactorial models. We included all 4703 cephalic-presenting, term, singleton births with electronic fetal monitoring and cord gases at 2 academic community referral hospitals in 2012 and 2013. We assessed electronic data for route of delivery, all dilation and station examinations, newborn infant status, electronic fetal monitoring tracings, and cord blood gases. The labor-related complication groups included 272 women with cesarean delivery for first-stage arrest, 558 with cesarean delivery for fetal heart rate concerns, 178 with obstetric hemorrhage, and 237 with neonatal depression, which left 3004 women in the spontaneous vaginal birth group. Receiver operating characteristic curves were constructed for each assessment method by measurement of the sensitivity for each complication vs the false-positive rate in the normal reference group. The duration of arrest at ≥6 cm dilation showed poor levels of discrimination for the cesarean delivery interventions (area under the curve, 0.55-0.65; P < .01) and no significant relationship to hemorrhage or neonatal depression. The dilation and station percentiles showed high discrimination for the cesarean delivery-related outcomes (area under the curve, 0.78-0.93; P < .01) and low discrimination for the clinical outcomes of hemorrhage and neonatal depression (area under the curve, 0.58-0.61; P < .01). Duration of arrest of dilation at ≥6 cm showed little or no discrimination for any of the complications. In comparison, percentile rankings that were based on the adaptive multifactorial models showed much higher discrimination for cesarean delivery interventions and better, but low discrimination for hemorrhage. Adaptive multifactorial models present a different method to assess labor progress. Rather than "pass/fail" criteria that are applicable only to dilation in late labor, they produce percentile rankings, assess 2 essential processes for vaginal birth (dilation and descent), and can be applied from 3 cm onward. 
Given the limitations of labor-progress assessment based solely on the passage of time and because of the extreme variation in decision-making for cesarean delivery for labor disorders, the types of mathematic analyses that are described in this article are logical and promising steps to help standardize labor assessment. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Renfro, Mindy Oxman; Fehrer, Steven
2011-01-01
Unintentional falls are an increasing public health problem as the incidence of falls rises and the population ages. The Centers for Disease Control and Prevention reports that 1 in 3 adults aged 65 years and older will experience a fall this year; 20% to 30% of those who fall will sustain a moderate to severe injury. Physical therapists caring for older adults are usually engaged with these patients after the first injury fall and may have little opportunity to abate fall risk before the injuries occur. This article describes the content selection and development of a simple-to-administer, multifactorial, Fall Risk Assessment & Screening Tool (FRAST), designed specifically for use in primary care settings to identify those older adults with high fall risk. Fall Risk Assessment & Screening Tool incorporates previously validated measures within a new multifactorial tool and includes targeted recommendations for intervention. Development of the multifactorial FRAST used a 5-part process: identification of significant fall risk factors, review of best evidence, selection of items, creation of the scoring grid, and development of a recommended action plan. Fall Risk Assessment & Screening Tool has been developed to assess fall risk in the target population of older adults (older than 65 years) living and ambulating independently in the community. Many fall risk factors have been considered and 15 items selected for inclusion. Fall Risk Assessment & Screening Tool includes 4 previously validated measures to assess balance, depression, falls efficacy, and home safety. Reliability and validity studies of FRAST are under way. Fall risk for community-dwelling older adults is an urgent, multifactorial, public health problem. Providing primary care practitioners (PCPs) with a very simple screening tool is imperative. Fall Risk Assessment & Screening Tool was created to allow for safe, quick, and low-cost administration by minimally trained office staff with interpretation and follow-up provided by the PCP.
NASA Astrophysics Data System (ADS)
Balbi, Stefano; Villa, Ferdinando; Mojtahed, Vahid; Hegetschweiler, Karin Tessa; Giupponi, Carlo
2016-06-01
This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; and produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of (1) likelihood of non-fatal physical injury, (2) likelihood of post-traumatic stress disorder and (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the effect of improving an existing early warning system, taking into account the reliability, lead time and scope (i.e., coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, predictor variables that exhibit collinearity are usually omitted from the model. Sometimes, however, for medical or economic reasons, all of the predictors are important and should be retained. Ridge regression is commonly used to cope with collinearity: weights for the predictor variables are introduced when estimating the parameters, and estimation can then proceed by likelihood methods. A Bayesian version of ridge regression offers an alternative. It has historically been less popular than likelihood-based estimation because of computational difficulties, but with recent improvements in computational methodology this is no longer a serious obstacle. This paper discusses a simulation study evaluating the characteristics of Bayesian Ridge regression parameter estimates, under several settings defined by different collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, and for the other settings it performs similarly to the likelihood method.
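For readers who want to try this comparison, scikit-learn's BayesianRidge is one readily available implementation of Bayesian ridge regression; the sketch below contrasts it with ordinary least squares on a small, deliberately collinear synthetic data set. The simulation settings are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(5)
n = 30                                           # relatively small sample size
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)         # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

bayes = BayesianRidge().fit(X, y)
ols = LinearRegression().fit(X, y)
print("Bayesian ridge coefficients:", np.round(bayes.coef_, 2))
print("OLS coefficients:           ", np.round(ols.coef_, 2))
```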
How users adopt healthcare information: An empirical study of an online Q&A community.
Jin, Jiahua; Yan, Xiangbin; Li, Yijun; Li, Yumei
2016-02-01
The emergence of social media technology has led to the creation of many online healthcare communities, where patients can easily share and look for healthcare-related information from peers who have experienced a similar problem. However, with increased user-generated content, there is a need to constantly analyse which content should be trusted as one sifts through enormous amounts of healthcare information. This study aims to explore patients' healthcare information seeking behavior in online communities. Based on dual-process theory and the knowledge adoption model, we proposed a healthcare information adoption model for online communities. This model highlights that information quality, emotional support, and source credibility are antecedent variables of adoption likelihood of healthcare information, and competition among repliers and involvement of recipients moderate the relationship between the antecedent variables and adoption likelihood. Empirical data were collected from the healthcare module of China's biggest Q&A community, Baidu Knows. Text mining techniques were adopted to calculate the information quality and emotional support contained in each reply text. A binary logistic regression model and hierarchical regression approach were employed to test the proposed conceptual model. Information quality, emotional support, and source credibility have significant and positive impact on healthcare information adoption likelihood, and among these factors, information quality has the biggest impact on a patient's adoption decision. In addition, competition among repliers and involvement of recipients were tested as moderating effects between these antecedent factors and the adoption likelihood. Results indicate competition among repliers positively moderates the relationship between source credibility and adoption likelihood, and recipients' involvement positively moderates the relationship between information quality, source credibility, and adoption decision. In addition to information quality and source credibility, emotional support has significant positive impact on individuals' healthcare information adoption decisions. Moreover, the relationships between information quality, source credibility, emotional support, and adoption decision are moderated by competition among repliers and involvement of recipients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
[The nature, diagnosis and treatment of post-concussion syndrome].
Muñoz-Céspedes, J M; Pelegrín-Valero, C; Tirapu-Ustarroz, J; Fernández-Guinea, S
1998-11-01
The relationship between brief loss of consciousness, subsequent cognitive and emotional complaints, and impact on daily functioning continues to be hotly debated. In this paper we highlight the strong variability in the reported prevalence of postconcussional syndrome across studies and suggest the main sources of this disagreement. Recent neuroimaging techniques are discussed and some neuropsychological measures are suggested. Current models (organic/psychogenic) of postconcussional symptoms are reviewed, and a multifactorial model is proposed that integrates biological factors, the relevant neuropsychological deficits (attention, memory, speed of information processing), and coping processes. Finally, in accordance with this model, we conclude with some suggestions to improve the neuropsychological intervention and medical treatment of these patients.
Church, Jody L; Haas, Marion R; Goodall, Stephen
2015-12-01
To evaluate the cost effectiveness of interventions designed to prevent falls and fall-related injuries among older people living in residential aged care facilities (RACFs) from an Australian health care perspective. A decision analytic Markov model was developed that stratified individuals according to their risk of falling and accounted for the risk of injury following a fall. The effectiveness of the interventions was derived from two Cochrane reviews of randomized controlled trials for falls/fall-related injury prevention in RACFs. Interventions were considered effective if they reduced the risk of falling or reduced the risk of injury following a fall. The interventions that were modelled included vitamin D supplementation, annual medication review, multifactorial intervention (a combination of risk assessment, medication review, vision assessment and exercise) and hip protectors. The cost effectiveness was calculated as the incremental cost relative to the incremental benefit, in which the benefit was estimated using quality-adjusted life-years (QALYs). Uncertainty was explored using univariate and probabilistic sensitivity analysis. Vitamin D supplementation and medication review both dominated 'no intervention', as these interventions were both more effective and cost saving (because of healthcare costs avoided). Hip protectors are dominated (less effective and more costly) by vitamin D and medication review. The incremental cost-effectiveness ratio (ICER) for medication review relative to vitamin D supplementation is AU$2442 per QALY gained, and the ICER for multifactorial intervention relative to medication review is AU$1,112,500 per QALY gained. The model is most sensitive to the fear of falling and the cost of the interventions. The model suggests that vitamin D supplementation and medication review are cost-effective interventions that reduce falls, provide health benefits and reduce health care costs in older adults living in RACFs.
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
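A minimal sketch of maximum likelihood estimation for the β-binomial is given below using SciPy; the counts are synthetic stand-ins for, say, the number of infected trees per fixed-size plot, and the log-parameterization simply keeps the shape parameters positive.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

n_trees = 20                                            # trees examined per plot (illustrative)
counts = betabinom.rvs(n_trees, 2.0, 5.0, size=100, random_state=7)  # synthetic infection counts

def neg_log_lik(log_params):
    a, b = np.exp(log_params)                           # keep shape parameters positive
    return -betabinom.logpmf(counts, n_trees, a, b).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
```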
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
Astrocytes Pathology in ALS: A Potential Therapeutic Target?
Johann, Sonja
2017-01-01
The mechanisms underlying neurodegeneration in amyotrophic lateral sclerosis (ALS) are multifactorial and include genetic and environmental factors. Nowadays, it is well accepted that neuronal loss is driven by non-cell autonomous toxicity. Non-neuronal cells, such as astrocytes, have been described to significantly contribute to motoneuron cell death and disease progression in cell culture experiments and animal models of ALS. Astrocytes are essential for neuronal survival and function by regulating neurotransmitter and ion homeostasis, immune response, blood flow and glucose uptake, antioxidant defence and growth factor release. Based on their significant functions in "housekeeping" the central nervous system (CNS), they are no longer thought to be passive bystanders but rather contributors to ALS pathogenesis. Findings from animal models have broadened our knowledge about different pathomechanisms in ALS, but therapeutic approaches to impede disease progression failed. So far, there is no cure for ALS and effective medication to slow down disease progression is limited. Targeting only a single aspect of this multifactorial disease may exhibit therapeutic limitations. Hence, novel cellular targets must be defined and new pharmaceutical strategies, such as combinatorial drug therapies are urgently needed. The present review discusses the physiological role of astrocytes and current hypotheses of astrocyte pathology in ALS. Furthermore, recent investigation of potential drug candidates in astrocyte cell culture systems and animal models, as well as data obtained from clinical trials, will be addressed. The central role of astrocytes in ALS pathogenesis makes them a promising target for pharmaceutical interventions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
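A sketch of the resulting bin-number selection rule, following the commonly cited form of Knuth's relative log-posterior for a piecewise-constant density with a multinomial likelihood and Jeffreys prior, is given below; treat the exact expression as an assumption to be checked against the optBINS source rather than a definitive restatement.

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_m(data, m):
    """Relative log-posterior for m equal-width bins under a piecewise-constant
    density with a multinomial likelihood and Jeffreys prior on the bin
    probabilities (up to an additive constant)."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m)
            + gammaln(m / 2.0)
            - m * gammaln(0.5)
            - gammaln(n + m / 2.0)
            + np.sum(gammaln(counts + 0.5)))

rng = np.random.default_rng(8)
data = rng.normal(size=1000)
m_grid = np.arange(2, 101)
scores = [log_posterior_m(data, m) for m in m_grid]
print("optimal number of bins:", m_grid[int(np.argmax(scores))])
```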
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
Bayesian hierarchical modeling for detecting safety signals in clinical trials.
Xia, H Amy; Ma, Haijun; Carlin, Bradley P
2011-09-01
Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.
Periodic autoregressive-moving average (PARMA) modeling with applications to water resources.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
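To make the "periodic parameters" idea concrete, here is a minimal simulation of a periodic AR(1) process whose coefficient and innovation scale cycle with the month. It illustrates the model class only; the estimation machinery described in the abstract (likelihood approximation, Fourier-expanded parameters, harmonic selection) is not reproduced here.

```python
import numpy as np

def simulate_periodic_ar1(phi_by_month, sigma_by_month, n_years, seed=0):
    """Simulate a periodic AR(1): x_t = phi_{m(t)} x_{t-1} + sigma_{m(t)} e_t,
    where the AR coefficient and innovation scale depend on the month of t."""
    rng = np.random.default_rng(seed)
    n = 12 * n_years
    x = np.zeros(n)
    for t in range(1, n):
        m = t % 12
        x[t] = phi_by_month[m] * x[t - 1] + sigma_by_month[m] * rng.standard_normal()
    return x

phi = 0.5 + 0.3 * np.cos(2 * np.pi * np.arange(12) / 12)  # periodic AR coefficients
series = simulate_periodic_ar1(phi, np.ones(12), n_years=50)
```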
Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C
2016-05-01
The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports. © 2015 Society for Risk Analysis.
Competition between learned reward and error outcome predictions in anterior cingulate cortex.
Alexander, William H; Brown, Joshua W
2010-02-15
The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.
Animal models of female pelvic organ prolapse: lessons learned
Couri, Bruna M; Lenis, Andrew T; Borazjani, Ali; Paraiso, Marie Fidela R; Damaser, Margot S
2012-01-01
Pelvic organ prolapse is a vaginal protrusion of female pelvic organs. It has a high prevalence worldwide and represents a substantial economic burden. The pathophysiology of pelvic organ prolapse is multifactorial and includes genetic predisposition, aberrant connective tissue, obesity, advancing age, vaginal delivery and other risk factors. Owing to the long course prior to patients becoming symptomatic and ethical questions surrounding human studies, animal models are necessary and useful. These models can mimic different human characteristics (histological, anatomical or hormonal), but none presents all of these characteristics at the same time. Major animal models include knockout mice, rats, sheep, rabbits and nonhuman primates. In this article we discuss different animal models and their utility for investigating the natural progression of pelvic organ prolapse pathophysiology and novel treatment approaches. PMID:22707980
Learning Petri net models of non-linear gene interactions.
Mayo, Michael
2005-10-01
Understanding how an individual's genetic make-up influences their risk of disease is a problem of paramount importance. Although machine-learning techniques are able to uncover the relationships between genotype and disease, the problem of automatically building the best biochemical model or "explanation" of the relationship has received less attention. In this paper, I describe a method based on random hill climbing that automatically builds Petri net models of non-linear (or multi-factorial) disease-causing gene-gene interactions. Petri nets are a suitable formalism for this problem, because they are used to model concurrent, dynamic processes analogous to biochemical reaction networks. I show that this method is routinely able to identify perfect Petri net models for three disease-causing gene-gene interactions recently reported in the literature.
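The search strategy in this abstract is a generic random hill climb over candidate models. The skeleton below shows that loop; `mutate` and `score` are hypothetical callables which, in the paper's setting, would respectively perturb a candidate Petri net (adding or removing places, transitions, arcs or tokens) and measure how well its simulated behaviour explains the genotype-disease data.

```python
import random

def random_hill_climb(initial_model, mutate, score, n_steps=10_000, seed=0):
    """Generic random hill climbing: accept a random mutation of the current
    model only if it does not lower the score."""
    rng = random.Random(seed)
    current, current_score = initial_model, score(initial_model)
    for _ in range(n_steps):
        candidate = mutate(current, rng)
        candidate_score = score(candidate)
        if candidate_score >= current_score:
            current, current_score = candidate, candidate_score
    return current, current_score
```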
Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series
NASA Astrophysics Data System (ADS)
Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.
2018-03-01
Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
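As a toy illustration of how an error model can be made zero-inflated, one can mix a point mass at zero with a continuous error density for the non-zero observations. This is only a schematic of the idea, not the ZI-GL formulation of the technical note, and the Gaussian residual model and parameter names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def zero_inflated_loglik(obs, sim, p_zero, sigma):
    """Toy zero-inflated log-likelihood: observed zeros are attributed to a
    point mass with probability p_zero; non-zero observations contribute a
    Gaussian error term on the residuals, weighted by (1 - p_zero)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    zero = obs == 0.0
    ll_zeros = np.sum(zero) * np.log(p_zero)
    ll_nonzero = np.sum(np.log(1.0 - p_zero)
                        + norm.logpdf(obs[~zero] - sim[~zero], scale=sigma))
    return ll_zeros + ll_nonzero
```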
Linking Illness in Parents to Health Anxiety in Offspring: Do Beliefs about Health Play a Role?
Alberts, Nicole M; Hadjistavropoulos, Heather D; Sherry, Simon B; Stewart, Sherry H
2016-01-01
The cognitive behavioural (CB) model of health anxiety proposes that parental illness leads to elevated health anxiety in offspring by promoting the acquisition of specific health beliefs (e.g. overestimation of the likelihood of illness). Our study tested this central tenet of the CB model. Participants were 444 emerging adults (18-25 years old) who completed online measures and were categorized into those with healthy parents (n = 328) or seriously ill parents (n = 116). Small (d = .21), but significant, elevations in health anxiety, and small to medium (d = .40) elevations in beliefs about the likelihood of illness were found among those with ill vs. healthy parents. Mediation analyses indicated that the relationship between parental illness and health anxiety was mediated by beliefs regarding the likelihood of future illness. Our study incrementally advances knowledge by testing and supporting a central proposition of the CB model. The findings add further specificity to the CB model by highlighting the importance of a specific health belief as a central contributor to health anxiety among offspring with a history of serious parental illness.
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling with historical height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test to assess significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation with independent benchmark data indicates that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
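A minimal sketch of the idea follows, assuming pre-enumerated candidate windows (e.g. circular windows of region indices) and taking the smallest log-likelihood under the hypergeometric null as the "extreme value"; the window construction and that convention are assumptions here, and significance would then be assessed with a Monte Carlo test as in the abstract.

```python
import numpy as np
from scipy.stats import hypergeom

def hypergeometric_scan(cases, population, windows):
    """For each candidate window (an array of region indices), evaluate the
    hypergeometric log-likelihood of its case count under the null of no
    clustering; the scan statistic is the most extreme (smallest) value."""
    total_cases, total_pop = cases.sum(), population.sum()
    scores = []
    for w in windows:
        c = cases[w].sum()          # cases inside the window
        n = population[w].sum()     # persons inside the window
        scores.append(hypergeom.logpmf(c, total_pop, total_cases, n))
    best = int(np.argmin(scores))
    return scores[best], windows[best]
```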
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate for long-term forecasting on timescales of years to decades for the European region.
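For orientation, the classical maximum-likelihood estimator of the Gutenberg-Richter a- and b-values (the Aki/Utsu form) is easy to sketch. This is only the textbook estimator for a single completeness threshold and bin width; it ignores the spatial and temporal completeness history that the paper's fit accounts for.

```python
import numpy as np

def gutenberg_richter_ml(mags, m_c, bin_width=0.1, duration_years=1.0):
    """Maximum-likelihood a- and b-values of the Gutenberg-Richter law
    (Aki/Utsu estimator) for magnitudes at or above completeness m_c."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - bin_width / 2.0))
    a = np.log10(len(m) / duration_years) + b * m_c   # log10 annual rate extrapolated to M=0
    return a, b
```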
The pathobiological impact of cigarette smoke on pancreatic cancer development (review).
Wittel, Uwe A; Momi, Navneet; Seifert, Gabriel; Wiech, Thorsten; Hopt, Ulrich T; Batra, Surinder K
2012-07-01
Despite extensive efforts, pancreatic cancer remains incurable. Most risk factors, such as genetic disposition, metabolic diseases or chronic pancreatitis, cannot be influenced. By contrast, cigarette smoking, an important risk factor for pancreatic cancer, can be controlled. Despite the epidemiological evidence of the detrimental effects of cigarette smoking on pancreatic cancer development, and the fact that smoking is a modifiable risk factor, our understanding of cigarette smoke-induced pancreatic carcinogenesis is limited. Current data on cigarette smoke-induced pancreatic carcinogenesis indicate multifactorial events that are triggered by nicotine, which is the major pharmacologically active constituent of tobacco smoke. In addition to nicotine, a vast number of carcinogens have the potential to reach the pancreatic gland, where they are metabolized, in some instances to even more toxic compounds. These metabolic events are not restricted to pancreatic ductal cells. Several studies show that acinar cells are also greatly affected. Furthermore, pancreatic cancer progenitor cells derive not only from the ductal epithelial lineage but also from acinar cells. This sheds new light on cigarette smoke-induced acinar cell damage. Against this background, our objective is to outline a multifactorial model of tobacco smoke-induced pancreatic carcinogenesis.
Lyapunov exponents and phase diagrams reveal multi-factorial control over TRAIL-induced apoptosis
Aldridge, Bree B; Gaudet, Suzanne; Lauffenburger, Douglas A; Sorger, Peter K
2011-01-01
Receptor-mediated apoptosis proceeds via two pathways: one requiring only a cascade of initiator and effector caspases (type I behavior) and the second requiring an initiator–effector caspase cascade and mitochondrial outer membrane permeabilization (type II behavior). Here, we investigate factors controlling type I versus II phenotypes by performing Lyapunov exponent analysis of an ODE-based model of cell death. The resulting phase diagrams predict that the ratio of XIAP to pro-caspase-3 concentrations plays a key regulatory role: type I behavior predominates when the ratio is low and type II behavior when the ratio is high. Cell-to-cell variability in phenotype is observed when the ratio is close to the type I versus II boundary. By positioning multiple tumor cell lines on the phase diagram we confirm these predictions. We also extend phase space analysis to mutations affecting the rate of caspase-3 ubiquitylation by XIAP, predicting and showing that such mutations abolish all-or-none control over activation of effector caspases. Thus, phase diagrams derived from Lyapunov exponent analysis represent a means to study multi-factorial control over a complex biochemical pathway. PMID:22108795
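The phase diagrams above come from Lyapunov exponent analysis of an ODE model. A generic, simplified way to estimate the largest finite-time Lyapunov exponent of any ODE right-hand side `f(t, x)` is the direct two-trajectory method below; it is only a sketch of the general idea (no renormalization, so it is meaningful only before the separation saturates) and does not reproduce the authors' cell-death model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def finite_time_lyapunov(f, x0, t_final, eps=1e-8, seed=0):
    """Largest finite-time Lyapunov exponent from the divergence of two
    trajectories started a distance ~eps apart (direct method)."""
    x0 = np.asarray(x0, dtype=float)
    x0b = x0 + eps * np.random.default_rng(seed).standard_normal(x0.shape)
    sol_a = solve_ivp(f, (0.0, t_final), x0, rtol=1e-9, atol=1e-12)
    sol_b = solve_ivp(f, (0.0, t_final), x0b, rtol=1e-9, atol=1e-12)
    d0 = np.linalg.norm(x0b - x0)
    d1 = np.linalg.norm(sol_b.y[:, -1] - sol_a.y[:, -1])
    return np.log(d1 / d0) / t_final
```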
Methods for estimating drought streamflow probabilities for Virginia streams
Austin, Samuel H.
2014-01-01
Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
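A minimal sketch of the modelling idea, with hypothetical toy numbers rather than data from the report: a maximum-likelihood logistic regression of a summer drought-flow indicator on (log) winter streamflow, fit here with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical toy data: mean winter (Nov-Feb) flow and an indicator of whether
# streamflow fell below a drought threshold the following summer.
winter_flow = np.array([120., 95., 210., 60., 180., 70., 140., 130., 160., 100.])
drought     = np.array([0,    1,   0,    1,   0,    0,   0,    1,    0,    1])

X = sm.add_constant(np.log(winter_flow))
fit = sm.Logit(drought, X).fit(disp=0)                  # plain maximum likelihood
x_new = np.column_stack([np.ones(1), np.log([85.0])])   # a winter flow of 85
print(f"P(summer drought | winter flow = 85) ~ {fit.predict(x_new)[0]:.2f}")
```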
Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy
2011-07-15
The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
Lee, E Henry; Wickham, Charlotte; Beedlow, Peter A; Waschmann, Ronald S; Tingey, David T
2017-10-01
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for climate and forest disturbances (i.e., pests, diseases, fire). The statistical method is illustrated with a tree-ring width time series for a mature closed-canopy Douglas-fir stand on the west slopes of the Cascade Mountains of Oregon, USA that is impacted by Swiss needle cast disease caused by the foliar fungus, Phaecryptopus gaeumannii (Rhode) Petrak. The likelihood-based TSIA method is proposed for the field of dendrochronology to understand the interaction of temperature, water, and forest disturbances that are important in forest ecology and climate change studies.
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, in conditions with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulations studies are conducted to evaluate performance in finite samples. PMID:21552356
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
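The mechanism is easy to reproduce numerically: a band power estimated from a finite number of Gaussian modes follows a scaled chi-squared distribution, whose mean is unbiased but whose median (a "typical" realization) sits below the truth. The demonstration below is purely illustrative and not tied to any survey or filter function.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes = 10            # few independent modes per band power (small survey / low multipole)
true_power = 1.0
# Each band-power estimate averages the squares of n_modes Gaussian mode amplitudes,
# so it follows a scaled chi-squared distribution.
modes = rng.normal(scale=np.sqrt(true_power), size=(100_000, n_modes))
band_power = (modes**2).mean(axis=1)

print(band_power.mean())      # ~1.00: unbiased on average
print(np.median(band_power))  # ~0.93: a typical realization sits below the truth
```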
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
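To illustrate why the harmonic-mean estimator behaves poorly, here is its standard form computed from the log-likelihoods of posterior samples; this is a generic sketch, not code from any of the cited packages. The estimator is dominated by the lowest-likelihood samples and typically overestimates the marginal likelihood, consistent with the inflated HME/sHME values the abstract reports; PS and SS estimators instead integrate over a path of power posteriors and are more stable.

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_harmonic_mean(log_likelihoods):
    """Harmonic-mean estimate of the log marginal likelihood from the
    log-likelihoods of posterior samples: log( N / sum_i 1/L_i )."""
    ll = np.asarray(log_likelihoods)
    return np.log(len(ll)) - logsumexp(-ll)

def log_bayes_factor(ll_samples_model_a, ll_samples_model_b):
    """Log Bayes factor comparing two delimitation models (HME version)."""
    return (log_marginal_harmonic_mean(ll_samples_model_a)
            - log_marginal_harmonic_mean(ll_samples_model_b))
```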
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which would act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found to be responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
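The alternative likelihood formulations compared in the study are, in essence, different decreasing functions of the uncertainty-weighted RMSE. The forms below are illustrative placeholders showing the idea; the exact expressions and normalizations used for PaSim and BBGC MuSo may differ.

```python
import numpy as np

def weighted_rmse(sim, obs, obs_sigma):
    """RMSE of residuals weighted by measurement uncertainty."""
    r = (np.asarray(sim) - np.asarray(obs)) / np.asarray(obs_sigma)
    return float(np.sqrt(np.mean(r**2)))

# Alternative likelihood formulations, all decreasing functions of the weighted RMSE.
def likelihood_exponential(rmse):
    return np.exp(-rmse)

def likelihood_linear(rmse, rmse_max):
    return max(0.0, 1.0 - rmse / rmse_max)

def likelihood_quadratic(rmse, rmse_max):
    return max(0.0, 1.0 - (rmse / rmse_max) ** 2)
```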
DarkBit: a GAMBIT module for computing dark matter observables and likelihoods
NASA Astrophysics Data System (ADS)
Bringmann, Torsten; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Kahlhoefer, Felix; Kvellestad, Anders; Putze, Antje; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-12-01
We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments ( gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments ( DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool ( GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes ( DarkSUSY and micrOMEGAs), and application of DarkBit 's advanced direct and indirect detection routines to a simple effective dark matter model.
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, and is at least as efficient as those estimators for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods, along with some results on the analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
For optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), the sensor's radiometric response in the Reflective Solar Bands (RSB) is assumed to be described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
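For orientation, a bare-bones weighted least-squares fit of the quadratic response is sketched below. It weights only the radiance uncertainties, which is precisely where it falls short of the maximum-likelihood weighting discussed above, which additionally folds in DN noise, digitization error, and model error.

```python
import numpy as np

def fit_quadratic_wls(dn, radiance, radiance_sigma):
    """Weighted least-squares fit of radiance = c0 + c1*DN + c2*DN^2 using
    weights 1/sigma^2 on the radiance only."""
    dn, y, sigma = map(np.asarray, (dn, radiance, radiance_sigma))
    A = np.column_stack([np.ones_like(dn), dn, dn**2])
    W = np.diag(1.0 / sigma**2)
    cov = np.linalg.inv(A.T @ W @ A)          # parameter covariance
    coeffs = cov @ (A.T @ W @ y)
    return coeffs, np.sqrt(np.diag(cov))      # estimates and 1-sigma uncertainties
```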
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
Nowakowska, Marzena
2017-04-01
The development of a Bayesian logistic regression model for classifying road accident severity is discussed. Previously exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with an original Boot prior proposal, are investigated for the case where no expert opinion is available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained Bayesian logistic models are assessed on the basis of the deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. Model accuracy is verified using sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters, since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cox Regression Models with Functional Covariates for Survival Data.
Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M
2015-06-01
We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a wide range of possible testing procedures exist. Jolliffe and Stephenson (2003) present different forecast verifications from atmospheric science, among them likelihood testing of probability forecasts and testing the occurrence of binary events. Testing binary events requires that for each forecasted event, the spatial, temporal and magnitude limits be given. Although major earthquakes can be considered binary events, the models within the RELM project express their forecasts on a spatial grid and in 0.1 magnitude units; thus the results are a distribution of rates over space and magnitude. These forecasts can be tested with likelihood tests. In general, likelihood tests assume a valid null hypothesis against which a given hypothesis is tested. The outcome is either a rejection of the null hypothesis in favor of the test hypothesis or a nonrejection, meaning the test hypothesis cannot outperform the null hypothesis at a given significance level. Within RELM, there is no accepted null hypothesis and thus the likelihood test needs to be expanded to allow comparable testing of equipollent hypotheses. To test models against one another, we require that forecasts are expressed in a standard format: the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, depth, magnitude, time period, and focal mechanisms. Focal mechanisms should either be described as the inclination of P-axis, declination of P-axis, and inclination of the T-axis, or as strike, dip, and rake angles.
Schorlemmer and Gerstenberger (2007, this issue) designed classes of these parameters such that similar models will be tested against each other. These classes make the forecasts comparable between models. Additionally, we are limited to testing only what is precisely defined and consistently reported in earthquake catalogs. Therefore it is currently not possible to test such information as fault rupture length or area, asperity location, etc. Also, to account for data quality issues, we allow for location and magnitude uncertainties as well as the probability that an event is dependent on another event. As we mentioned above, only models with comparable forecasts can be tested against each other. Our current tests are designed to examine grid-based models. This requires that any fault-based model be adapted to a grid before testing is possible. While this is a limitation of the testing, it is an inherent difficulty in any such comparative testing. Please refer to appendix B for a statistical evaluation of the application of the Poisson hypothesis to fault-based models. The testing suite we present consists of three different tests: L-Test, N-Test, and R-Test. These tests are defined similarly to Kagan and Jackson (1995). The first two tests examine the consistency of the hypotheses with the observations, while the last test compares the spatial performances of the models.
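The ingredient shared by the L-, N-, and R-tests is the joint likelihood of the observed counts in each space-magnitude bin given a forecast's expected rates, under the Poisson assumption. A minimal sketch of that quantity follows; forecast rates must be strictly positive wherever events are observed, and the test-specific machinery (simulated catalogs, quantile scores) is not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def joint_log_likelihood(forecast_rates, observed_counts):
    """Poisson joint log-likelihood of observed earthquake counts per
    space-magnitude bin given a forecast's expected rates."""
    lam = np.asarray(forecast_rates, dtype=float)
    n = np.asarray(observed_counts, dtype=float)
    term = -lam - gammaln(n + 1.0)            # contribution of bins with zero events
    nonzero = n > 0
    term[nonzero] += n[nonzero] * np.log(lam[nonzero])
    return float(term.sum())
```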
Silveira, Maria J; Copeland, Laurel A; Feudtner, Chris
2006-07-01
We tested whether local cultural and social values regarding the use of health care are associated with the likelihood of home death, using variation in local rates of home births as a proxy for geographic variation in these values. For each of 351,110 adult decedents in Washington state who died from 1989 through 1998, we calculated the home birth rate in each zip code during the year of death and then used multivariate regression modeling to estimate the relation between the likelihood of home death and the local rate of home births. Individuals residing in local areas with higher home birth rates had a greater adjusted likelihood of dying at home (odds ratio [OR]=1.04 for each percentage point increase in home birth rate; 95% confidence interval [CI] = 1.03, 1.05). Moreover, the likelihood of dying at home increased with local wealth (OR=1.04 per $10,000; 95% CI=1.02, 1.06) but decreased with local hospital bed availability (OR=0.96 per 1,000 beds; 95% CI=0.95, 0.97). The likelihood of home death is associated with local rates of home births, suggesting the influence of health care use preferences.
Wang, Jiun-Hao; Chang, Hung-Hao
2010-10-26
In contrast to the considerable body of literature concerning the disabilities of the general population, little information exists pertaining to the disabilities of the farm population. Focusing on disability among insurants in the Farmers' Health Insurance (FHI) program in Taiwan, this paper examines the associations among socio-demographic characteristics, insured factors, and the introduction of the national health insurance program, as well as the types and payments of disabilities among the insurants. A unique dataset containing 1,594,439 insurants in 2008 was used in this research. A logistic regression model was estimated for the likelihood of receiving disability payments. Focusing on the recipients, a disability payment equation and a disability type equation were estimated using the ordinary least squares method and a multinomial logistic model, respectively, to investigate the effects of the exogenous factors on the payments received and the likelihood of having different types of disabilities. Age and different job categories are significantly associated with the likelihood of receiving disability payments. Compared to those under age 45, the likelihood is higher among recipients aged 85 and above (the odds ratio is 8.04). Compared to hired workers, the odds ratios for self-employed insurants and spouses of farm operators who were not members of farmers' associations are 0.97 and 0.85, respectively. In addition, older insurants are more likely to have eye problems; few differences in disability types are related to insured job categories. Results indicate that older farmers are more likely to receive disability payments, but the likelihood does not differ much among insurants of various job categories. Among all of the selected types of disability, the highest likelihood is found for eye disability. In addition, the introduction of the national health insurance program decreases the likelihood of receiving disability payments. The experience in Taiwan can be valuable for other countries that are at an initial stage of implementing a universal health insurance program.
NASA Astrophysics Data System (ADS)
Balbi, S.; Villa, F.; Mojtahed, V.; Hegetschweiler, K. T.; Giupponi, C.
2015-10-01
This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of: (1) likelihood of non-fatal physical injury; (2) likelihood of post-traumatic stress disorder; (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the benefits of improving an existing Early Warning System, taking into account the reliability, lead-time and scope (i.e. coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event: about 75 % of fatalities, 25 % of injuries and 18 % of post-traumatic stress disorders could be avoided.
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
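In outline, the method amounts to propagating sampled input uncertainties through a reduced-form pollutant-response model and reporting the fraction of Monte Carlo realizations that attain the standard. The sketch below is schematic; `response_fn` is a hypothetical stand-in for the reduced-form model, and the default 75 ppb threshold is just an example value, not taken from the paper.

```python
import numpy as np

def attainment_likelihood(baseline_ozone, response_fn, parameter_samples, standard=75.0):
    """Monte Carlo estimate of the probability that a control strategy attains
    an air quality standard: propagate sampled uncertain inputs through a
    reduced-form response model and count the attaining fraction."""
    reductions = np.array([response_fn(p) for p in parameter_samples])
    future_design_values = baseline_ozone - reductions
    return float(np.mean(future_design_values <= standard))
```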
Parry, Steve W; Hill, Harry; Lawson, Joanna; Lawson, Nick; Green, David; Trundle, Heidi; McNaught, Judith; Strassheim, Victoria; Caldwell, Alma; Mayland, Richard; Earley, Phillip; McMeekin, Peter
2016-11-01
National and international evidence and guidelines on falls prevention and management in community-dwelling elderly adults recommend that falls services should be multifactorial and their interventions multicomponent. The way that individuals are identified as having had, or being at risk of, falls in order to take advantage of such services is far less clear. A novel multidisciplinary, multifactorial falls, syncope, and dizziness service model was designed with enhanced case ascertainment through proactive, primary care-based screening (of the case notes of individuals aged ≥60) for individual fall risk factors. The service model identified 4,039 individuals, of whom 2,232 had significant gait and balance abnormalities according to senior physiotherapist assessment. Significant numbers of individuals with new diagnoses ranging from cognitive impairment to Parkinson's disease to urgent indications for a pacemaker were discovered. More than 600 individuals were found who were at high risk of osteoporosis according to the World Health Organization Fracture Risk Assessment Tool score, 179 with benign paroxysmal positional vertigo and 50 with atrial fibrillation. Through such screening and this approach, Comprehensive Geriatric Assessment Plus (Plus falls, syncope and dizziness expertise), unmet need was targeted on a scale far beyond the numbers seen in clinical trials. Further work is needed to determine whether this approach translates into fewer falls and decreases in syncope and dizziness. © 2016 the Authors. Journal compilation © 2016, The American Geriatrics Society.
Efficient Exploration of the Space of Reconciled Gene Trees
Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-01-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions, respectively, in the mean numbers of duplications, transfers, and losses per gene family. The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510
Reuter, Tabea; Renner, Britta
2011-01-01
Background In order to fight the spread of the novel H1N1 influenza, health authorities worldwide called for a change in hygiene behavior. Within a longitudinal study, we examined who collected a free bottle of hand sanitizer towards the end of the first swine flu pandemic wave in December 2009. Methods 629 participants took part in a longitudinal study assessing perceived likelihood and severity of an H1N1 infection, and H1N1 influenza related negative affect (i.e., feelings of threat, concern, and worry) at T1 (October 2009, week 43–44) and T2 (December 2009, week 51–52). Importantly, all participants received a voucher for a bottle of hand sanitizer at T2 which could be redeemed in a university office newly established for this occasion at T3 (ranging between 1–4 days after T2). Results Both a sequential longitudinal model (M2) as well as a change score model (M3) showed that greater perceived likelihood and severity at T1 (M2) or changes in perceived likelihood and severity between T1 and T2 (M3) did not directly drive protective behavior (T3), but showed a significant indirect impact on behavior through H1N1 influenza related negative affect. Specifically, increases in perceived likelihood (β = .12), severity (β = .24) and their interaction (β = .13) were associated with a more pronounced change in negative affect (M3). The more threatened, concerned and worried people felt (T2), the more likely they were to redeem the voucher at T3 (OR = 1.20). Conclusions Affective components need to be considered in health behavior models. Perceived likelihood and severity of an influenza infection represent necessary but not sufficient self-referential knowledge for paving the way for preventive behaviors. PMID:21789224
Challenges in Species Tree Estimation Under the Multispecies Coalescent Model
Xu, Bo; Yang, Ziheng
2016-01-01
The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902
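As a concrete illustration of the gene tree-species tree conflict the review analyses, the rooted three-species case has a closed form under the MSC: with an internal branch of t coalescent units in species tree ((A,B),C), a gene tree matches the species tree with probability 1 - (2/3)exp(-t). The short Python sketch below simply evaluates this textbook formula; it is not code from the paper.

import numpy as np

def rooted_triplet_probs(t):
    # Gene-tree topology probabilities for species tree ((A,B),C) under the
    # multispecies coalescent, with internal branch length t in coalescent units.
    p_discord = np.exp(-t) / 3.0        # each of the two discordant topologies
    p_concord = 1.0 - 2.0 * p_discord   # the matching topology ((A,B),C)
    return p_concord, p_discord

for t in (0.1, 0.5, 2.0):
    match, mismatch = rooted_triplet_probs(t)
    print(f"t = {t}: P(match) = {match:.3f}, P(each mismatch) = {mismatch:.3f}")
# With a short internal branch (t = 0.1) roughly 60% of gene trees disagree with the
# species tree, which is why methods that ignore gene-tree uncertainty can struggle.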
Incorrect likelihood methods were used to infer scaling laws of marine predator search behaviour.
Edwards, Andrew M; Freeman, Mervyn P; Breed, Greg A; Jonsen, Ian D
2012-01-01
Ecologists are collecting extensive data concerning movements of animals in marine ecosystems. Such data need to be analysed with valid statistical methods to yield meaningful conclusions. We demonstrate methodological issues in two recent studies that reached similar conclusions concerning movements of marine animals (Nature 451:1098; Science 332:1551). The first study analysed vertical movement data to conclude that diverse marine predators (Atlantic cod, basking sharks, bigeye tuna, leatherback turtles and Magellanic penguins) exhibited "Lévy-walk-like behaviour", close to a hypothesised optimal foraging strategy. By reproducing the original results for the bigeye tuna data, we show that the likelihood of tested models was calculated from residuals of regression fits (an incorrect method), rather than from the likelihood equations of the actual probability distributions being tested. This resulted in erroneous Akaike Information Criteria, and the testing of models that do not correspond to valid probability distributions. We demonstrate how this led to overwhelming support for a model that has no biological justification and that is statistically spurious because its probability density function goes negative. Re-analysis of the bigeye tuna data, using standard likelihood methods, overturns the original result and conclusion for that data set. The second study observed Lévy walk movement patterns by mussels. We demonstrate several issues concerning the likelihood calculations (including the aforementioned residuals issue). Re-analysis of the data rejects the original Lévy walk conclusion. We consequently question the claimed existence of scaling laws of the search behaviour of marine predators and mussels, since such conclusions were reached using incorrect methods. We discourage the suggested potential use of "Lévy-like walks" when modelling consequences of fishing and climate change, and caution that any resulting advice to managers of marine ecosystems would be problematic. For reproducibility and future work we provide R source code for all calculations.
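To make "standard likelihood methods" concrete, the sketch below fits a continuous power law and a shifted exponential to the same data by exact maximum likelihood and compares them with AIC. The analytic MLE formulas for the exponent and rate are standard; the simulated data, the value of x_min, and all variable names are invented for illustration, and tail truncation is ignored.

import numpy as np

def loglik_powerlaw(x, xmin):
    # Exact log-likelihood of a continuous power law f(x) ~ x**(-mu) on [xmin, inf).
    mu = 1.0 + len(x) / np.sum(np.log(x / xmin))          # analytic MLE of the exponent
    ll = (len(x) * np.log(mu - 1.0)
          + len(x) * (mu - 1.0) * np.log(xmin)
          - mu * np.sum(np.log(x)))
    return mu, ll

def loglik_exponential(x, xmin):
    # Exact log-likelihood of a shifted exponential f(x) = lam * exp(-lam * (x - xmin)).
    lam = 1.0 / np.mean(x - xmin)                          # analytic MLE of the rate
    ll = len(x) * np.log(lam) - lam * np.sum(x - xmin)
    return lam, ll

rng = np.random.default_rng(0)
xmin = 1.0
x = xmin + rng.exponential(scale=2.0, size=500)            # data that are truly exponential

_, ll_pl = loglik_powerlaw(x, xmin)
_, ll_exp = loglik_exponential(x, xmin)
for name, ll in [("power law", ll_pl), ("exponential", ll_exp)]:
    print(f"{name:>12}: logL = {ll:8.1f}, AIC = {2 * 1 - 2 * ll:8.1f}")
# The exponential should win by a wide AIC margin; likelihoods built from regression
# residuals, as criticized above, can reverse this verdict.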
Factors associated with persons with disability employment in India: a cross-sectional study.
Naraharisetti, Ramya; Castro, Marcia C
2016-10-07
Over twenty million persons with disability in India are increasingly being offered poverty alleviation strategies, including employment programs. This study employs a spatial analytic approach to identify correlates of employment among persons with disability in India, considering sight, speech, hearing, movement, and mental disabilities. Based on 2001 Census data, this study utilizes linear regression and spatial autoregressive models to identify factors associated with the proportion employed among persons with disability at the district level. Models stratified by rural and urban areas were also considered. Spatial autoregressive models revealed that different factors contribute to employment of persons with disability in rural and urban areas. In rural areas, having mental disability decreased the likelihood of employment, while being female and having movement, or sight impairment (compared to other disabilities) increased the likelihood of employment. In urban areas, being female and illiterate decreased the likelihood of employment but having sight, mental and movement impairment (compared to other disabilities) increased the likelihood of employment. Poverty alleviation programs designed for persons with disability in India should account for differences in employment by disability types and should be spatially targeted. Since persons with disability in rural and urban areas have different factors contributing to their employment, it is vital that government and service-planning organizations account for these differences when creating programs aimed at livelihood development.
A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grana, Justin; Wolpert, David; Neil, Joshua
2016-03-11
The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
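A toy version of such a likelihood ratio detector is easy to write down. The Python sketch below assumes Poisson event counts on a single host, an unknown compromise time with a uniform prior, and Monte Carlo integration over that time; the multi-host traversal model of the paper is not reproduced, and the rates and names are illustrative assumptions.

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

def log_lik(x, lam0, lam1, tau):
    # Log-likelihood of counts x if the host is compromised from step tau onward.
    return poisson.logpmf(x[:tau], lam0).sum() + poisson.logpmf(x[tau:], lam1).sum()

def log_likelihood_ratio(x, lam0, lam1, n_mc=2000):
    # Monte Carlo estimate of log [ P(x | attack) / P(x | normal) ], integrating a
    # uniform prior over the unknown compromise time tau.
    taus = rng.integers(0, len(x), size=n_mc)               # sampled compromise times
    lls = np.array([log_lik(x, lam0, lam1, t) for t in taus])
    log_p_attack = np.logaddexp.reduce(lls) - np.log(n_mc)  # log-mean-exp over samples
    log_p_normal = poisson.logpmf(x, lam0).sum()
    return log_p_attack - log_p_normal

lam0, lam1 = 2.0, 5.0                                       # baseline vs. attacker event rate
benign = rng.poisson(lam0, size=50)
attacked = np.concatenate([rng.poisson(lam0, 30), rng.poisson(lam1, 20)])
print("benign trace  :", round(log_likelihood_ratio(benign, lam0, lam1), 2))
print("attacked trace:", round(log_likelihood_ratio(attacked, lam0, lam1), 2))
# Sweeping a threshold on this ratio traces out the ROC curve; a pure deviation-from-
# baseline detector has no attacker model at all.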
NASA Astrophysics Data System (ADS)
Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro
2003-06-01
In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which is in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best fit models when hydraulic conductivity is approximately 10^-6.5 m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
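The GLUE workflow is simple to sketch: sample parameters from a prior, score each run against observed heads with a likelihood measure, keep the "behavioral" runs, and weight predictions by their likelihood. The Python below does this with a deliberately fake one-parameter head model and an inverse-error likelihood measure, purely to illustrate the procedure; it is not the groundwater model, likelihood measures, or thresholds used in the study.

import numpy as np

rng = np.random.default_rng(42)

def toy_head_model(log10_K, obs_points):
    # Stand-in for the flow model: predicted heads as a function of log10(K).
    return 100.0 - 3.0 * (log10_K + 6.5) * obs_points

obs_points = np.linspace(0.1, 1.0, 8)
observed = toy_head_model(-6.5, obs_points) + rng.normal(0, 0.3, size=8)   # synthetic "data"

n = 5000
log10_K = rng.uniform(-8.0, -5.0, size=n)                 # prior sample of the parameter
sse = np.array([np.sum((toy_head_model(k, obs_points) - observed) ** 2) for k in log10_K])
likelihood = 1.0 / sse                                    # an inverse-error likelihood measure
behavioral = likelihood > np.quantile(likelihood, 0.90)   # keep the best-fitting 10%

weights = likelihood[behavioral] / likelihood[behavioral].sum()
best = log10_K[behavioral][np.argmax(likelihood[behavioral])]
lo, hi = np.percentile(log10_K[behavioral], [5, 95])
print(f"best log10(K) = {best:.2f}, 5-95% behavioral range = [{lo:.2f}, {hi:.2f}]")
# Predictions (here, capture-zone extents) would be combined using `weights`, so the
# behavioral ensemble yields tighter probabilistic isochrones than unweighted Monte Carlo.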
Gene × Environment Interactions in Schizophrenia: Evidence from Genetic Mouse Models
Marr, Julia; Bock, Gavin; Desbonnet, Lieve; Waddington, John
2016-01-01
The study of gene × environment, as well as epistatic interactions in schizophrenia, has provided important insight into the complex etiopathologic basis of schizophrenia. It has also increased our understanding of the role of susceptibility genes in the disorder and is an important consideration as we seek to translate genetic advances into novel antipsychotic treatment targets. This review summarises data arising from research involving the modelling of gene × environment interactions in schizophrenia using preclinical genetic models. Evidence for synergistic effects on the expression of schizophrenia-relevant endophenotypes will be discussed. It is proposed that valid and multifactorial preclinical models are important tools for identifying critical areas, as well as underlying mechanisms, of convergence of genetic and environmental risk factors, and their interaction in schizophrenia. PMID:27725886
Water Buffalo (Bubalus bubalis) as a spontaneous animal model of Vitiligo.
Singh, Vijay Pal; Motiani, Rajender K; Singh, Archana; Malik, Garima; Aggarwal, Rangoli; Pratap, Kunal; Wani, Mohan R; Gokhale, Suresh B; Natarajan, Vivek T; Gokhale, Rajesh S
2016-07-01
Vitiligo is a multifactorial acquired depigmenting disorder. Recent insights into the molecular mechanisms driving the gradual destruction of melanocytes in vitiligo will likely lead to the discovery of novel therapies, which need to be evaluated in animal models that closely recapitulate the pathogenesis of human vitiligo. In humans, vitiligo is characterized by a spontaneous loss of functional melanocytes from the epidermis, but most animal models of vitiligo are either inducible or genetically programmed. Here, we report that acquired depigmentation in water buffalo recapitulates molecular, histological, immunohistochemical, and ultrastructural changes observed in human vitiligo and hence could be used as a model to study vitiligo pathogenesis and facilitate the discovery and evaluation of therapeutic interventions for vitiligo. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified, physical laws into the model, at the cost of a substantial increment of usable uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
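The core of the proposed framework, multiplying a prior by likelihoods contributed by different modelling steps, can be illustrated with a one-parameter grid example. In the Python sketch below the parameter, the Gaussian data likelihood, and the soft topological constraint are all invented stand-ins; in the actual workflow each likelihood would be evaluated by running the implicit and kinematic (e.g. Noddy) models inside the Bayesian network.

import numpy as np

# Grid over one uncertain structural parameter, e.g. fault displacement in metres.
displacement = np.linspace(0.0, 500.0, 1001)
prior = np.exp(-0.5 * ((displacement - 200.0) / 100.0) ** 2)          # geologist's prior

# Likelihood 1: fit of the implicit model to interface/orientation data (assumed Gaussian).
lik_implicit = np.exp(-0.5 * ((displacement - 260.0) / 40.0) ** 2)

# Likelihood 2: kinematic/topological plausibility, e.g. "layer A must stay connected
# across the unconformity", encoded as a soft penalty on large displacements.
lik_topology = 1.0 / (1.0 + np.exp((displacement - 320.0) / 15.0))

posterior = prior * lik_implicit * lik_topology
posterior /= posterior.sum()                                          # normalise on the grid

mean = np.sum(displacement * posterior)
print(f"posterior mean displacement = {mean:.0f} m")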
The STR/ort mouse model of spontaneous osteoarthritis - an update.
Staines, K A; Poulet, B; Wentworth, D N; Pitsillides, A A
2017-06-01
Osteoarthritis is a degenerative joint disease and a world-wide healthcare burden. Characterized by cartilage degradation, subchondral bone thickening and osteophyte formation, osteoarthritis inflicts much pain and suffering, for which there are currently no disease-modifying treatments available. Mouse models of osteoarthritis are proving critical in advancing our understanding of the underpinning molecular mechanisms. The STR/ort mouse is a well-recognized model which develops a natural form of osteoarthritis very similar to the human disease. In this Review we discuss the use of the STR/ort mouse in understanding this multifactorial disease with an emphasis on recent advances in its genetics and its bone, endochondral and immune phenotypes. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
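The pairwise composite likelihood described above can be sketched with the FGM copula density c(u,v) = 1 + theta*(1-2u)*(1-2v). The Python example below estimates a single dependence parameter from uncensored data already transformed to uniform margins; the paper instead lets theta depend on geographic and demographic distances and handles censoring, so treat this only as a minimal illustration with invented names.

import numpy as np
from scipy.optimize import minimize_scalar

def fgm_pair_loglik(u, v, theta):
    # Log pairwise density of the FGM copula: c(u, v) = 1 + theta*(1-2u)*(1-2v).
    return np.log(1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))

def composite_loglik(theta, u, pairs):
    # Pairwise composite log-likelihood over the listed index pairs.
    if not -1.0 <= theta <= 1.0:                   # FGM dependence is bounded
        return -np.inf
    return sum(fgm_pair_loglik(u[i], u[j], theta) for i, j in pairs)

rng = np.random.default_rng(7)
n = 300
# Toy event-time data already transformed to uniform margins (e.g. via the fitted
# marginal survival function); censoring is ignored in this sketch.
u = rng.uniform(size=n)
pairs = [(i, j) for i in range(n) for j in range(i + 1, i + 6) if j < n]   # near neighbours

res = minimize_scalar(lambda th: -composite_loglik(th, u, pairs),
                      bounds=(-0.99, 0.99), method="bounded")
print(f"composite-likelihood estimate of theta: {res.x:.3f}")
# In the paper theta is itself a function of pairwise geographic/demographic distance,
# so one would maximise over the coefficients of that function instead of a constant.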
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
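A simple way to see the issue is to bootstrap the calibrated item parameters and re-estimate ability each time. The Python sketch below does this for a 2PL model with simulated responses and assumed calibration standard errors; it illustrates the extra variability carried over from item calibration rather than the corrected asymptotic SE discussed in the article.

import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik_theta(theta, responses, a, b):
    # Negative 2PL log-likelihood in the ability parameter theta, items held fixed.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def ml_ability(responses, a, b):
    return minimize_scalar(neg_loglik_theta, bounds=(-4, 4),
                           args=(responses, a, b), method="bounded").x

rng = np.random.default_rng(3)
a_hat = rng.uniform(0.8, 2.0, size=30)        # calibrated discriminations
b_hat = rng.normal(0.0, 1.0, size=30)         # calibrated difficulties
se_a, se_b = 0.15, 0.20                        # assumed calibration standard errors
responses = rng.integers(0, 2, size=30)        # one examinee's 0/1 responses

theta_hat = ml_ability(responses, a_hat, b_hat)

# Resample the item parameters from their (assumed normal) calibration distribution
# and re-estimate theta; the spread reflects error carried over from calibration.
boot = [ml_ability(responses,
                   rng.normal(a_hat, se_a).clip(0.2, None),
                   rng.normal(b_hat, se_b))
        for _ in range(500)]
print(f"theta_hat = {theta_hat:.2f}, bootstrap SE (item error only) = {np.std(boot):.3f}")
# A complete SE would also add the usual sampling error of theta given fixed items.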
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1982-01-01
A causal modeling system, using confirmatory maximum likelihood factor analysis with the LISREL IV computer program, evaluated the construct validity underlying the higher order factor structure of a given correlation matrix of 46 structure-of-intellect tests emphasizing the product of transformations. (Author/PN)
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which whether a response is missing depends on its own value. In the statistical literature, unlike the ignorable missing data problem, not many papers on non-ignorable missing data are available except for the fully parametric model based approach. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we can obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of data from a real AIDS trial shows that the missingness of CD4 counts at around two years is non-ignorable and the sample mean based on observed data only is biased.
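The building block being extended here is Owen's empirical likelihood for a mean: maximize the product of probability weights subject to a moment constraint, which reduces to solving a one-dimensional equation for a Lagrange multiplier. The Python sketch below implements only that basic version; the paper's estimator additionally models the non-ignorable missingness probability, which is not shown.

import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_logratio(x, mu):
    # Owen's empirical log-likelihood ratio for the hypothesis E[X] = mu.
    g = x - mu
    if g.min() >= 0 or g.max() <= 0:          # mu outside the convex hull of the data
        return -np.inf
    lo = (1.0 / len(g) - 1.0) / g.max()       # bounds keeping all weights in (0, 1)
    hi = (1.0 / len(g) - 1.0) / g.min()
    lam = brentq(lambda l: np.sum(g / (1.0 + l * g)), lo + 1e-10, hi - 1e-10)
    return -np.sum(np.log(1.0 + lam * g))

rng = np.random.default_rng(5)
x = rng.exponential(2.0, size=200)
stat = -2.0 * el_logratio(x, mu=2.0)          # -2 log R is asymptotically chi-square(1)
print(f"EL test of mean = 2: statistic = {stat:.2f}, p = {chi2.sf(stat, df=1):.3f}")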
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
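In the simplest uncensored case, the nonparametric maximum likelihood estimator under length-biased sampling weights each observation in proportion to 1/x, which already removes most of the bias. The Python sketch below demonstrates that weighting on simulated data; the EM algorithms of the paper are needed for the harder right-censored and Cox-regression settings and are not reproduced here.

import numpy as np

rng = np.random.default_rng(11)

# Simulate length-biased sampling from an underlying Exp(mean = 2) duration:
# long durations are over-represented in proportion to their length.
candidate = rng.exponential(2.0, size=200000)
keep = rng.uniform(0, candidate.max(), size=candidate.size) < candidate
x = candidate[keep][:2000]                       # a length-biased sample

naive_mean = x.mean()                            # biased upward (should be near 4, not 2)
w = (1.0 / x) / np.sum(1.0 / x)                  # NPMLE weights proportional to 1/x
corrected_mean = np.sum(w * x)                   # equals n / sum(1/x), the harmonic fix
F_at_2 = np.sum(w * (x <= 2.0))                  # weighted, i.e. debiased, CDF estimate

print(f"naive mean = {naive_mean:.2f}, length-bias-corrected mean = {corrected_mean:.2f}")
print(f"debiased F(2) = {F_at_2:.2f}  (true Exp(mean 2) value: {1 - np.exp(-1):.2f})")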
2010-01-01
Background Earlier diagnosis followed by multi-factorial cardiovascular risk intervention may improve outcomes in Type 2 Diabetes Mellitus (T2DM). Latent phase identification through screening requires structured, appropriately targeted population-based approaches. Providers responsible for implementing screening policy await evidence of clinical and cost effectiveness from randomised intervention trials in screen-detected T2DM cases. UK South Asians are at particularly high risk of abnormal glucose tolerance and T2DM. To be effective, national screening programmes must achieve good coverage across the population by identifying barriers to the detection of disease and adapting to the delivery of earlier care. Here we describe the rationale and methods of a systematic community screening programme and randomised controlled trial of cardiovascular risk management within a UK multiethnic setting (ADDITION-Leicester). Design A single-blind cluster randomised, parallel group trial among people with screen-detected T2DM comparing a protocol-driven intensive multi-factorial treatment with conventional care. Methods ADDITION-Leicester consists of community-based screening and intervention phases within 20 general practices coordinated from a single academic research centre. Screening adopts a universal diagnostic approach via repeated 75g-Oral Glucose Tolerance Tests within an eligible non-diabetic population of 66,320 individuals aged 40-75 years (25-75 years South Asian). Volunteers also provide detailed medical and family histories, complete health questionnaires, and undergo anthropometric measures, lipid profiling and a proteinuria assessment. Primary outcome is reduction in modelled Coronary Heart Disease (UKPDS CHD) risk at five years. Seven thousand (30% of South Asian ethnic origin) volunteers over three years will be recruited to identify a screen-detected T2DM cohort (n = 285) powered to detect a 6% relative difference (80% power, alpha 0.05) between treatment groups at one year. Randomisation will occur at practice-level with newly diagnosed T2DM cases receiving either conventional (according to current national guidelines) or intensive (algorithmic target-driven multi-factorial cardiovascular risk intervention) treatments. Discussion ADDITION-Leicester is the largest multiethnic (targeting >30% South Asian recruitment) community T2DM and vascular risk screening programme in the UK. By assessing feasibility and efficacy of T2DM screening, it will inform national disease prevention policy and contribute significantly to our understanding of the health care needs of UK South Asians. Trial registration ClinicalTrials.gov (NCT00318032). PMID:20170482
Ray, Chad A; Patel, Vimal; Shih, Judy; Macaraeg, Chris; Wu, Yuling; Thway, Theingi; Ma, Mark; Lee, Jean W; Desilva, Binodh
2009-02-20
Developing a process that generates robust immunoassays that can be used to support studies with tight timelines is a common challenge for bioanalytical laboratories. Design of experiments (DOEs) is a tool that has been used by many industries for the purpose of optimizing processes. The approach is capable of identifying critical factors and their interactions with a minimal number of experiments. The challenge for implementing this tool in the bioanalytical laboratory is to develop a user-friendly approach that scientists can understand and apply. We have successfully addressed these challenges by eliminating the screening design, introducing automation, and applying a simple mathematical approach for the output parameter. A modified central composite design (CCD) was applied to three ligand binding assays. The intra-plate factors selected were coating, detection antibody concentration, and streptavidin-HRP concentrations. The inter-plate factors included incubation times for each step. The objective was to maximize the logS/B (S/B) of the low standard to the blank. The maximum desirable conditions were determined using JMP 7.0. To verify the validity of the predictions, the logS/B prediction was compared against the observed logS/B during pre-study validation experiments. The three assays were optimized using the multi-factorial DOE. The total error for all three methods was less than 20% which indicated method robustness. DOE identified interactions in one of the methods. The model predictions for logS/B were within 25% of the observed pre-study validation values for all methods tested. The comparison between the CCD and hybrid screening design yielded comparable parameter estimates. The user-friendly design enables effective application of multi-factorial DOE to optimize ligand binding assays for therapeutic proteins. The approach allows for identification of interactions between factors, consistency in optimal parameter determination, and reduced method development time.
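To make the design concrete, the Python sketch below builds a small face-centred central composite design in three coded factors, fits a full quadratic response surface by least squares, and reads off the predicted optimum of logS/B. The response values, factor names, and grid search are invented for illustration; the study itself eliminated the screening design and used JMP for the optimisation.

import numpy as np
from itertools import product

# Face-centred central composite design in three coded factors (-1, 0, +1):
# coating conc., detection antibody conc., and streptavidin-HRP conc.
corners = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axials = np.array([v * np.eye(3)[i] for i in range(3) for v in (-1.0, 1.0)])
center = np.zeros((3, 3))                                   # replicated centre points
X = np.vstack([corners, axials, center])

# Pretend assay response: logS/B with a curved optimum (purely illustrative numbers).
rng = np.random.default_rng(2)
y = (1.2 + 0.30 * X[:, 0] + 0.15 * X[:, 1] - 0.25 * X[:, 0] ** 2
     - 0.10 * X[:, 2] ** 2 + rng.normal(0, 0.02, len(X)))

def design_matrix(X):
    # Intercept, linear, and all quadratic/interaction terms of the response surface.
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Pick the coded setting that maximises the predicted logS/B on a coarse grid.
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
best = grid[np.argmax(design_matrix(grid) @ beta)]
print("predicted optimum (coded units):", np.round(best, 2))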
Bhasin, Shalender; Gill, Thomas M; Reuben, David B; Latham, Nancy K; Gurwitz, Jerry H; Dykes, Patricia; McMahon, Siobhan; Storer, Thomas W; Duncan, Pamela W; Ganz, David A; Basaria, Shehzad; Miller, Michael E; Travison, Thomas G; Greene, Erich J; Dziura, James; Esserman, Denise; Allore, Heather; Carnie, Martha B; Fagan, Maureen; Hanson, Catherine; Baker, Dorothy; Greenspan, Susan L; Alexander, Neil; Ko, Fred; Siu, Albert L; Volpi, Elena; Wu, Albert W; Rich, Jeremy; Waring, Stephen C; Wallace, Robert; Casteel, Carri; Magaziner, Jay; Charpentier, Peter; Lu, Charles; Araujo, Katy; Rajeevan, Haseena; Margolis, Scott; Eder, Richard; McGloin, Joanne M; Skokos, Eleni; Wiggins, Jocelyn; Garber, Lawrence; Clauser, Steven B; Correa-De-Araujo, Rosaly; Peduzzi, Peter
2017-10-14
Fall injuries are a major cause of morbidity and mortality among older adults. We describe the design of a pragmatic trial to compare the effectiveness of an evidence-based, patient-centered multifactorial fall injury prevention strategy to an enhanced usual care. Strategies to Reduce Injuries and Develop Confidence in Elders (STRIDE) is a 40-month cluster-randomized, parallel-group, superiority, pragmatic trial being conducted at 86 primary care practices in 10 healthcare systems across USA. The 86 practices were randomized to intervention or control group using covariate-based constrained randomization, stratified by healthcare system. Participants are community-living persons, ≥70 years, at increased risk for serious fall injuries. The intervention is a co-management model in which a nurse Falls Care Manager performs multifactorial risk assessments, develops individualized care plans, which include surveillance, follow-up evaluation, and intervention strategies. Control group receives enhanced usual care, with clinicians and patients receiving evidence-based information on falls prevention. Primary outcome is serious fall injuries, operationalized as those leading to medical attention (non-vertebral fractures, joint dislocation, head injury, lacerations, and other major sequelae). Secondary outcomes include all fall injuries, all falls, and well-being (concern for falling; anxiety and depressive symptoms; physical function and disability). Target sample size was 5,322 participants to provide 90% power to detect 20% reduction in primary outcome rate relative to control. Trial enrolled 5451 subjects in 20 months. Intervention and follow-up are ongoing. The findings of the STRIDE study will have important clinical and policy implications for the prevention of fall injuries in older adults. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Multifactorial Understanding of Ion Abundance in Tandem Mass Spectrometry Experiments.
Fazal, Zeeshan; Southey, Bruce R; Sweedler, Jonathan V; Rodriguez-Zas, Sandra L
2013-01-29
In a bottom-up shotgun approach, the proteins of a mixture are enzymatically digested, separated, and analyzed via tandem mass spectrometry. The mass spectra relating fragment ion intensities (abundance) to the mass-to-charge are used to deduce the amino acid sequence and identify the peptides and proteins. The variables that influence intensity were characterized using a multi-factorial mixed-effects model, a ten-fold cross-validation, and stepwise feature selection on 6,352,528 fragment ions from 61,543 peptide ions. Intensity was higher in fragment ions that did not have neutral mass loss relative to any mass loss or that had a +1 charge state. Peptide ions classified for proton mobility as non-mobile had lowest intensity of all mobility levels. Higher basic residue (arginine, lysine or histidine) counts in the peptide ion and low counts in the fragment ion were associated with lower fragment ion intensities. Higher counts of proline in peptide and fragment ions were associated with lower intensities. These results are consistent with the mobile proton theory. Opposite trends between peptide and fragment ion counts and intensity may be due to the different impact of factor under consideration at different stages of the MS/MS experiment or to the different distribution of observations across peptide and fragment ion levels. Presence of basic residues at all three positions next to the fragmentation site was associated with lower fragment ion intensity. The presence of proline proximal to the fragmentation site enhanced fragmentation and had the opposite trend when located distant from the site. A positive association between fragment ion intensity and presence of sulfur residues (cysteine and methionine) on the vicinity of the fragmentation site was identified. These results highlight the multi-factorial nature of fragment ion intensity and could improve the algorithms for peptide identification and the simulation in tandem mass spectrometry experiments.
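A scaled-down analogue of the multi-factorial mixed-effects analysis can be run with statsmodels: fragment-level fixed effects plus a random intercept for the parent peptide ion. Everything in the Python sketch below, including the synthetic data, the three predictors, and the effect sizes, is invented to show the model structure only, not to reproduce the study's dataset or full factor list.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_pep, frags_per_pep = 200, 8

# Synthetic stand-in for the fragment-ion table: each peptide ion contributes several
# fragment ions, so the peptide ion is the random grouping factor.
pep = np.repeat(np.arange(n_pep), frags_per_pep)
pep_effect = rng.normal(0, 0.5, n_pep)[pep]
neutral_loss = rng.integers(0, 2, pep.size)          # 1 = fragment with a neutral mass loss
charge2 = rng.integers(0, 2, pep.size)               # 1 = +2 fragment, 0 = +1
proline = rng.integers(0, 2, pep.size)               # proline adjacent to the cleavage site
log_intensity = (5.0 - 0.8 * neutral_loss - 0.4 * charge2 + 0.3 * proline
                 + pep_effect + rng.normal(0, 0.7, pep.size))

df = pd.DataFrame(dict(log_intensity=log_intensity, neutral_loss=neutral_loss,
                       charge2=charge2, proline=proline, peptide=pep))

# Mixed-effects model: fixed fragment-level factors, random intercept per peptide ion.
model = smf.mixedlm("log_intensity ~ neutral_loss + charge2 + proline",
                    data=df, groups=df["peptide"])
print(model.fit().summary())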
Contribution of nonprimate animal models in understanding the etiology of schizophrenia
Lazar, Noah L.; Neufeld, Richard W.J.; Cain, Donald P.
2011-01-01
Schizophrenia is a severe psychiatric disorder that is characterized by positive and negative symptoms and cognitive impairments. The etiology of the disorder is complex, and it is thought to follow a multifactorial threshold model of inheritance with genetic and neurodevelopmental contributions to risk. Human studies are particularly useful in capturing the richness of the phenotype, but they are often limited to the use of correlational approaches. By assessing behavioural abnormalities in both humans and rodents, nonprimate animal models of schizophrenia provide unique insight into the etiology and mechanisms of the disorder. This review discusses the phenomenology and etiology of schizophrenia and the contribution of current nonprimate animal models with an emphasis on how research with models of neurotransmitter dysregulation, environmental risk factors, neurodevelopmental disruption and genetic risk factors can complement the literature on schizophrenia in humans. PMID:21247514
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
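For orientation, the Gaussian baseline that the thesis benchmarks against, the graphical lasso, is available directly in scikit-learn. The Python sketch below recovers the support of a banded precision matrix from simulated data; it is the comparison method, not the nonparametric exponential-series or score-matching estimators developed in the thesis, and the thresholds are arbitrary.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(9)
p = 20

# A sparse, banded precision (inverse covariance) matrix: edges only between neighbours.
precision = np.eye(p) + 0.4 * np.eye(p, k=1) + 0.4 * np.eye(p, k=-1)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(precision), size=500)

model = GraphicalLasso(alpha=0.05).fit(X)            # l1-penalised Gaussian likelihood
est = model.precision_

iu = np.triu_indices(p, k=1)
true_edges = np.abs(precision[iu]) > 1e-8
found_edges = np.abs(est[iu]) > 1e-3
recall = (true_edges & found_edges).sum() / true_edges.sum()
print(f"true edges: {true_edges.sum()}, fraction recovered: {recall:.0%}")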
Heumann, Benjamin W.; Walsh, Stephen J.; Verdery, Ashton M.; McDaniel, Phillip M.; Rindfuss, Ronald R.
2012-01-01
Understanding the pattern-process relations of land use/land cover change is an important area of research that provides key insights into human-environment interactions. The suitability or likelihood of occurrence of land use such as agricultural crop types across a human-managed landscape is a central consideration. Recent advances in niche-based, geographic species distribution modeling (SDM) offer a novel approach to understanding land suitability and land use decisions. SDM links species presence-location data with geospatial information and uses machine learning algorithms to develop non-linear and discontinuous species-environment relationships. Here, we apply the MaxEnt (Maximum Entropy) model for land suitability modeling by adapting niche theory to a human-managed landscape. In this article, we use data from an agricultural district in Northeastern Thailand as a case study for examining the relationships between the natural, built, and social environments and the likelihood of crop choice for the commonly grown crops that occur in the Nang Rong District – cassava, heavy rice, and jasmine rice, as well as an emerging crop, fruit trees. Our results indicate that while the natural environment (e.g., elevation and soils) is often the dominant factor in crop likelihood, the likelihood is also influenced by household characteristics, such as household assets and conditions of the neighborhood or built environment. Furthermore, the shape of the land use-environment curves illustrates the non-continuous and non-linear nature of these relationships. This approach demonstrates a novel method of understanding non-linear relationships between land and people. The article concludes with a proposed method for integrating the niche-based rules of land use allocation into a dynamic land use model that can address both allocation and quantity of agricultural crops. PMID:24187378
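A common lightweight stand-in for MaxEnt in presence/background problems is a penalised logistic regression on expanded environmental features, which the Python sketch below uses to rank cells by crop suitability. The covariates, feature expansion, and landscape are entirely synthetic assumptions for illustration; the study used the MaxEnt software with its own feature classes and background sampling.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Toy landscape: two environmental covariates per cell (elevation, soil suitability)
# plus one household covariate (assets index), all standardised.
n_cells = 5000
env = rng.normal(size=(n_cells, 3))

# Presence of a crop in a cell depends non-linearly on elevation in this toy example.
true_logit = 1.0 - 1.5 * env[:, 0] ** 2 + 0.8 * env[:, 1] + 0.5 * env[:, 2]
presence = rng.uniform(size=n_cells) < 1.0 / (1.0 + np.exp(-true_logit))

# MaxEnt-style feature expansion (linear + quadratic terms), then a penalised
# presence/background classifier as a stand-in for the MaxEnt software itself.
features = np.hstack([env, env ** 2])
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(features, presence)

suitability = clf.predict_proba(features)[:, 1]      # relative likelihood of the crop
print("cells ranked most suitable:", np.argsort(-suitability)[:5])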
Slot, Esther M; van Viersen, Sietske; de Bree, Elise H; Kroesbergen, Evelyn H
2016-01-01
High comorbidity rates have been reported between mathematical learning disabilities (MD) and reading and spelling disabilities (RSD). Research has identified skills related to math, such as number sense (NS) and visuospatial working memory (visuospatial WM), as well as to literacy, such as phonological awareness (PA), rapid automatized naming (RAN) and verbal short-term memory (Verbal STM). In order to explain the high comorbidity rates between MD and RSD, 7-11-year-old children were assessed on a range of cognitive abilities related to literacy (PA, RAN, Verbal STM) and mathematical ability (visuospatial WM, NS). The group of children consisted of typically developing (TD) children (n = 32), children with MD (n = 26), children with RSD (n = 29), and combined MD and RSD (n = 43). It was hypothesized that, in line with the multiple deficit view on learning disorders, at least one unique predictor for both MD and RSD and a possible shared cognitive risk factor would be found to account for the comorbidity between the symptom dimensions literacy and math. Secondly, our hypotheses were that (a) a probabilistic multi-factorial risk factor model would provide a better fit to the data than a deterministic single risk factor model and (b) that a shared risk factor model would provide a better fit than the specific multi-factorial model. All our hypotheses were confirmed. NS and visuospatial WM were identified as unique cognitive predictors for MD, whereas PA and RAN were both associated with RSD. Also, a shared risk factor model with PA as a cognitive predictor for both RSD and MD fitted the data best, indicating that MD and RSD might co-occur due to a shared underlying deficit in phonological processing. Possible explanations are discussed in the context of sample selection and composition. This study shows that different cognitive factors play a role in mathematics and literacy, and that a phonological processing deficit might play a role in the occurrence of MD and RSD.
Duc, Duong Minh; Vui, Le Thi; Son, Hoang Ngoc; Minh, Hoang Van
2016-01-01
The study of smoking initiation and cessation is particularly important in the adolescent population because smoking prevention and cessation at this age may prevent several health consequences later in life. There is very limited knowledge about the determinants of smoking initiation and cessation among youths in Vietnam. This limits the development and implementation of appropriately targeted anti-smoking prevention interventions. This study applied pooled data from 3 rounds of a longitudinal survey in the Chi Linh Demographic-Epidemiological Surveillance System (CHILILAB DESS) in a northern province in Vietnam to analyse the determinants of smoking initiation and cessation among youths. The total number of youths in the first, second, and third rounds was 12,406, 10,211, and 7,654, respectively. A random-effects logit model controlling for both time-variant and time-invariant variables was used to explore the factors associated with becoming a new smoker or a quitter. We found an increasing trend of new smokers (7.0% to 9.6%) and quitters (27.5% to 31.4%) during 2009–2013. Smoking initiation and cessation are the result of multifactorial influences of demographic background and health behaviours and status. Youths with certain demographic characteristics (older age, male sex, being unmarried, and having informal work) and health behaviours and status (having smoking family members and/or smoking close friends, and harmful drinking) were more likely to initiate smoking and found it more difficult to quit. Among these variables, youths who had smoking close friends had the highest likelihood of both initiating smoking and failing to quit. Our results likely reflect similar health problems among youths in other peri-urban areas in Vietnam. Further, our findings suggest that anti-smoking interventions should involve peer intervention, be integrated with the reduction of other unhealthy behaviours such as alcohol consumption, and focus on adolescents at a very early age (10–14 years old). PMID:29546208
[Diabetes and predictive medicine--parallax of the present time].
Rybka, J
2010-04-01
Predictive genetics uses genetic testing to estimate risk in asymptomatic persons. Since in the case of multifactorial diseases predictive genetic analysis deals with findings that allow wider interpretation, it has a higher predictive value in clearly defined monogenic diseases with high penetrance than in multifactorial (polygenic) diseases with a high contribution of environmental factors. In most "civilisation" (multifactorial) diseases, including diabetes, heredity and environmental factors do not play two separate, independent roles. Instead, their interactions play a principal role. The new classification of diabetes is based on the implementation of not only etiopathogenetic but also genetic research. Diabetes mellitus type 1 (DM1T) is a polygenic multifactorial disease with the genetic component carrying about one half of the risk and the non-genetic component the other half. The study of the autoimmune nature of DM1T in connection with genetic analysis is going to bring new insights into DM1T prediction. The author presents new molecular genetic findings concerning certain specific types of diabetes. Issues relating to heredity in diabetes mellitus type 2 (DM2T) are even more complex. The disease has a polygenic nature, and the phenotype of a patient with DM2T, in addition to environmental factors, involves at least three, perhaps even tens of different genetic variants. At present, results at the genome-wide level appear to be most promising. The current concept of prediabetes is a realistic foundation for our prediction and prevention of DM2T. A multifactorial, multimarker approach based on our understanding of new pathophysiological factors of DM2T tries to outline a "map" of prediabetes physiology, and if these tests are combined with sophisticated methods of genetic forecasting of DM2T, this may represent a significant step in our methodology of diabetes prediction. So far, however, predictive genetics is limited by the interpretation of genetic predisposition and the individualisation of the level of risk. There is no doubt that interpretation calls for co-operation with clinicians, and results of genetic analyses should not, at present, be overinterpreted. Predictive medicine, however, unquestionably fulfills the preventive focus of modern medicine, and genetic analysis is a promising diagnostic method.
Trombetti, A; Hars, M; Herrmann, F; Rizzoli, R; Ferrari, S
2013-03-01
This controlled intervention study in hospitalized oldest old adults showed that a multifactorial fall-and-fracture risk assessment and management program, applied in a dedicated geriatric hospital unit, was effective in improving fall-related physical and functional performances and the level of independence in activities of daily living in high-risk patients. Hospitalization affords a major opportunity for interdisciplinary cooperation to manage fall-and-fracture risk factors in older adults. This study aimed at assessing the effects on physical performances and the level of independence in activities of daily living (ADL) of a multifactorial fall-and-fracture risk assessment and management program applied in a geriatric hospital setting. A controlled intervention study was conducted among 122 geriatric inpatients (mean ± SD age, 84 ± 7 years) admitted with a fall-related diagnosis. Among them, 92 were admitted to a dedicated unit and enrolled into a multifactorial intervention program, including intensive targeted exercise. Thirty patients who received standard usual care in a general geriatric unit formed the control group. Primary outcomes included gait and balance performances and the level of independence in ADL measured 12 ± 6 days apart. Secondary outcomes included length of stay, incidence of in-hospital falls, hospital readmission, and mortality rates. Compared to the usual care group, the intervention group had significant improvements in Timed Up and Go (adjusted mean difference [AMD] = -3.7s; 95 % CI = -6.8 to -0.7; P = 0.017), Tinetti (AMD = -1.4; 95 % CI = -2.1 to -0.8; P < 0.001), and Functional Independence Measure (AMD = 6.5; 95 %CI = 0.7-12.3; P = 0.027) test performances, as well as in several gait parameters (P < 0.05). Furthermore, this program favorably impacted adverse outcomes including hospital readmission (hazard ratio = 0.3; 95 % CI = 0.1-0.9; P = 0.02). A multifactorial fall-and-fracture risk-based intervention program, applied in a dedicated geriatric hospital unit, was effective and more beneficial than usual care in improving physical parameters related to the risk of fall and disability among high-risk oldest old patients.
Furlan, L; Contiero, B; Chiarini, F; Colauzzi, M; Sartori, E; Benvegnù, I; Fracasso, F; Giandon, P
2017-01-01
A survey of maize fields was conducted in northeast Italy from 1986 to 2014, resulting in a dataset of 1296 records including information on wireworm damage to maize, plant-attacking species, agronomic characteristics, landscape and climate. Three wireworm species, Agriotes brevis Candeze, A. sordidus Illiger and A. ustulatus Schäller, were identified as the dominant pest species in maize fields. Over the 29-year period surveyed, no yield reduction was observed when wireworm plant damage was below 15 % of the stand. A preliminary univariate analysis of risk assessment was applied to identify the main factors influencing the occurrence of damage. A multifactorial model was then applied by using the significant factors identified. This model allowed the research to highlight the strongest factors and to analyse how the main factors together influenced damage risk. The strongest factors were: A. brevis as prevalent damaging species, soil organic matter content >5 %, rotation including meadows and/or double crops, A. sordidus as prevalent damaging species, and surrounding landscape mainly meadows, uncultivated grass and double crops. The multifactorial model also showed how the simultaneous occurrence of two or more of the aforementioned risk factors can conspicuously increase the risk of wireworm damage to maize crops, while the probability of damage to a field with no-risk factors is always low (<1 %). These results make it possible to draw risk maps to identify low-risk and high-risk areas, a first step in implementing bespoke IPM procedures in an attempt to reduce the impact of soil insecticides significantly.
Noble, Penelope J.; Noble, Denis
2011-01-01
Ca2+-induced delayed afterdepolarizations (DADs) are depolarizations that occur after full repolarization. They have been observed across multiple species and cell types. Experimental results have indicated that the main cause of DADs is Ca2+ overload. The main hypothesis as to their initiation has been Ca2+ overflow from the overloaded sarcoplasmic reticulum (SR). Our results using 37 previously published mathematical models provide evidence that Ca2+-induced DADs are initiated by the same mechanism as Ca2+-induced Ca2+ release, i.e., the modulation of the opening of ryanodine receptors (RyR) by Ca2+ in the dyadic subspace; an SR overflow mechanism was not necessary for the induction of DADs in any of the models. The SR Ca2+ level is better viewed as a modulator of the appearance of DADs and the magnitude of Ca2+ release. The threshold for the total Ca2+ level within the cell (not only the SR) at which Ca2+ oscillations arise in the models is close to their baseline level (∼1- to 3-fold). It is most sensitive to changes in the maximum sarco(endo)plasmic reticulum Ca2+-ATPase (SERCA) pump rate (directly proportional), the opening probability of RyRs, and the Ca2+ diffusion rate from the dyadic subspace into the cytosol (both indirectly proportional), indicating that the appearance of DADs is multifactorial. This shift in emphasis away from SR overload as the trigger for DADs toward a multifactorial analysis could explain why SERCA overexpression has been shown to suppress DADs (while increasing contractility) and why DADs appear during heart failure (at low SR Ca2+ levels). PMID:21666112
Brotnow, Line; Reiss, David; Stover, Carla S.; Ganiban, Jody; Leve, Leslie D.; Neiderhiser, Jenae M.; Shaw, Daniel S.; Stevens, Hanna E.
2015-01-01
Background Mothers’ stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective “stressors” and subjective “distress” and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Method Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Results Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F (2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (R2 Δ = 0.049, F (4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (R2 Δ = 0.031, F (4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. Conclusions The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight. PMID:26544958
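The hierarchical (blockwise) regression logic summarized above, entering one block of predictors, adding a second block, and testing the increment in explained variance, can be sketched briefly. The example below uses simulated stand-in variables rather than the study's data; only the Delta-R-squared F-test mechanics are illustrated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 400

# Simulated stand-ins for the study's measures (hypothetical data).
distress = rng.normal(size=(n, 2))      # block 1: subjective distress
stressors = rng.normal(size=(n, 4))     # block 2: objective stressors
birthweight = (0.2 * distress.sum(axis=1) + 0.3 * stressors.sum(axis=1)
               + rng.normal(scale=2.0, size=n))

def r_squared(y, X):
    """R^2 and number of coefficients from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var(), X1.shape[1]

r2_block1, k1 = r_squared(birthweight, distress)
r2_block2, k2 = r_squared(birthweight, np.column_stack([distress, stressors]))

# F-test for the R^2 increment contributed by the second block.
df1, df2 = k2 - k1, n - k2
F = ((r2_block2 - r2_block1) / df1) / ((1.0 - r2_block2) / df2)
p = stats.f.sf(F, df1, df2)
print(f"Delta R^2 = {r2_block2 - r2_block1:.3f}, F({df1}, {df2}) = {F:.2f}, p = {p:.4g}")
```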
Davison, John; Bond, John; Dawson, Pamela; Steen, I Nicholas; Kenny, Rose Anne
2005-03-01
To determine the effectiveness of multifactorial intervention to prevent falls in cognitively intact older persons with recurrent falls. Randomised controlled trial of multifactorial (medical, physiotherapy and occupational therapy) post-fall assessment and intervention compared with conventional care. Accident & Emergency departments in a university teaching hospital and associated district general hospital. 313 cognitively intact men and women aged over 65 years presenting to Accident & Emergency with a fall or fall-related injury and at least one additional fall in the preceding year; 159 randomised to assessment and intervention and 154 to conventional care. The primary outcome was the number of falls and fallers in 1 year after recruitment. Secondary outcomes included injury rates, fall-related hospital admissions, mortality and fear of falling. There were 36% fewer falls in the intervention group (relative risk 0.64, 95% confidence interval 0.46-0.90). The proportion of subjects continuing to fall (65% (94/144) compared with 68% (102/149); relative risk 0.95, 95% confidence interval 0.81-1.12) and the number of fall-related attendances and hospital admissions did not differ between groups. Duration of hospital admission was reduced (mean difference in admission duration 3.6 days, 95% confidence interval 0.1-7.6) and falls efficacy was better in the intervention group (mean difference in Activities Specific Balance Confidence Score of 7.5, 95% confidence interval 0.72-14.2). Multifactorial intervention is effective at reducing the fall burden in cognitively intact older persons with recurrent falls attending Accident & Emergency, but does not reduce the proportion of subjects still falling.
Multiple-hit parameter estimation in monolithic detectors.
Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S
2013-02-01
We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward modeling and variability detection in heartbeat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
Phylogenetic evidence for cladogenetic polyploidization in land plants.
Zhan, Shing H; Drori, Michal; Goldberg, Emma E; Otto, Sarah P; Mayrose, Itay
2016-07-01
Polyploidization is a common and recurring phenomenon in plants and is often thought to be a mechanism of "instant speciation". Whether polyploidization is associated with the formation of new species (cladogenesis) or simply occurs over time within a lineage (anagenesis), however, has never been assessed systematically. We tested this hypothesis using phylogenetic and karyotypic information from 235 plant genera (mostly angiosperms). We first constructed a large database of combined sequence and chromosome number data sets using an automated procedure. We then applied likelihood models (ClaSSE) that estimate the degree of synchronization between polyploidization and speciation events in maximum likelihood and Bayesian frameworks. Our maximum likelihood analysis indicated that 35 genera supported a model that includes cladogenetic transitions over a model with only anagenetic transitions, whereas three genera supported a model that incorporates anagenetic transitions over one with only cladogenetic transitions. Furthermore, the Bayesian analysis supported a preponderance of cladogenetic change in four genera but did not support a preponderance of anagenetic change in any genus. Overall, these phylogenetic analyses provide the first broad confirmation that polyploidization is temporally associated with speciation events, suggesting that it is indeed a major speciation mechanism in plants, at least in some genera. © 2016 Botanical Society of America.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study. PMID:29276371
Reducing weapon-carrying among urban American Indian young people.
Bearinger, Linda H; Pettingell, Sandra L; Resnick, Michael D; Potthoff, Sandra J
2010-07-01
To examine the likelihood of weapon-carrying among urban American Indian young people, given the presence of salient risk and protective factors. The study used data from a confidential, self-report Urban Indian Youth Health Survey with 200 forced-choice items examining risk and protective factors and social, contextual, and demographic information. Between 1995 and 1998, 569 American Indian youths, aged 9-15 years, completed surveys administered in public schools and an after-school program. Using logistic regression, probability profiles compared the likelihood of weapon-carrying, given the combinations of salient risk and protective factors. In the final models, weapon-carrying was associated significantly with one risk factor (substance use) and two protective factors (school connectedness, perceiving peers as having prosocial behavior attitudes/norms). With one risk factor and two protective factors, in various combinations in the models, the likelihood of weapon carrying ranged from 4% (with two protective factors and no risk factor in the model) to 80% of youth (with the risk factor and no protective factors in the model). Even in the presence of the risk factor, the two protective factors decreased the likelihood of weapon-carrying to 25%. This analysis highlights the importance of protective factors in comprehensive assessments and interventions for vulnerable youth. In that the risk factor and two protective factors significantly related to weapon-carrying are amenable to intervention at both individual and population-focused levels, study findings offer a guide for prioritizing strategies for decreasing weapon-carrying among urban American Indian young people. Copyright (c) 2010 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
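The probability profiles described above amount to pushing chosen combinations of risk and protective factors through the inverse-logit of a fitted logistic model. A minimal sketch follows; the coefficients are hypothetical placeholders, not the published estimates.

```python
import numpy as np
from itertools import product

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logistic-regression coefficients (placeholders, not the published estimates).
intercept = -3.0
b_substance_use = 4.3      # risk factor
b_school_connect = -1.5    # protective factor
b_prosocial_peers = -1.3   # protective factor

print("substance_use school_connect prosocial_peers  P(weapon carrying)")
for sub, school, peers in product([0, 1], repeat=3):
    z = intercept + b_substance_use * sub + b_school_connect * school + b_prosocial_peers * peers
    print(f"{sub:^13d} {school:^14d} {peers:^15d}  {inv_logit(z):.2f}")
```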
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Gopich, Irina V.
2015-01-01
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692
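The idea that parameter uncertainties follow from the curvature of the log-likelihood at its maximum can be illustrated on a far simpler toy problem: maximum likelihood estimation of two transition rates from exponentially distributed dwell times, with a finite-difference Hessian supplying the standard deviations. This is only a schematic analogue, not the photon-by-photon likelihood of the study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
k12_true, k21_true = 2.0, 5.0                        # "true" transition rates (hypothetical)
dwell1 = rng.exponential(1.0 / k12_true, size=300)   # dwell times in state 1
dwell2 = rng.exponential(1.0 / k21_true, size=300)   # dwell times in state 2

def neg_log_lik(log_rates):
    k12, k21 = np.exp(log_rates)   # optimize on the log scale to keep rates positive
    return -(dwell1.size * np.log(k12) - k12 * dwell1.sum()
             + dwell2.size * np.log(k21) - k21 * dwell2.sum())

fit = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")

def hessian(f, x, eps=1e-4):
    """Central finite-difference Hessian of a scalar function."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps ** 2)
    return H

cov_log = np.linalg.inv(hessian(neg_log_lik, fit.x))   # curvature at the maximum -> covariance
rates = np.exp(fit.x)
sd_rates = rates * np.sqrt(np.diag(cov_log))           # delta method back to the rate scale
print("estimated rates:", np.round(rates, 2), " standard deviations:", np.round(sd_rates, 2))
```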
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.
1998-07-01
An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.
Mahara, Gehendra; Wang, Chao; Yang, Kun; Chen, Sipeng; Guo, Jin; Gao, Qi; Wang, Wei; Wang, Quanyi; Guo, Xiuhua
2016-01-01
(1) Background: Evidence regarding scarlet fever and its relationship with meteorological and air pollution factors is scarce. This study aimed to examine the relationship of ambient air pollutants and meteorological factors with scarlet fever occurrence in Beijing, China. (2) Methods: A retrospective ecological study was carried out to characterize the epidemic features of scarlet fever incidence in Beijing districts from 2013 to 2014. Daily incidence and corresponding air pollutant and meteorological data were used to develop the model. Global Moran’s I statistic and Anselin’s local Moran’s I (LISA) were applied to detect spatial autocorrelation (spatial dependency) and clusters of scarlet fever incidence. The spatial lag model (SLM) and spatial error model (SEM), together with ordinary least squares (OLS) models, were then applied to probe the association between scarlet fever incidence and meteorological and air pollution factors. (3) Results: Among the 5491 cases, more than half (62%) were male and more than one-third (37.8%) were female, with an annual average incidence rate of 14.64 per 100,000 population. Spatial autocorrelation analysis exhibited the existence of spatial dependence; therefore, we applied spatial regression models. Comparing R-square, log-likelihood and the Akaike information criterion (AIC) among the three models (OLS: R2 = 0.0741, log likelihood = −1819.69, AIC = 3665.38; SLM: R2 = 0.0786, log likelihood = −1819.04, AIC = 3665.08; SEM: R2 = 0.0743, log likelihood = −1819.67, AIC = 3665.36) identified the spatial lag model (SLM) as the best-fitting model. In the SLM, nitrogen oxide (p = 0.027), rainfall (p = 0.036) and sunshine hours (p = 0.048) showed significant positive associations with scarlet fever incidence, while relative humidity (p = 0.034) showed an inverse association. (4) Conclusions: Our findings indicated that meteorological as well as air pollutant factors may increase the incidence of scarlet fever; these findings may help to guide scarlet fever control programs and target interventions. PMID:27827946
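For readers unfamiliar with the model-selection criterion used above, AIC is computed from the maximized log-likelihood as AIC = 2k - 2 ln L, and the model with the smallest value is preferred. A minimal sketch with entirely hypothetical log-likelihoods and parameter counts:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L (smaller is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical maximized log-likelihoods and parameter counts for three
# candidate regression models (illustration only, not the study's values).
models = {
    "model_A (OLS-like)": (-1820.0, 13),
    "model_B (spatial lag)": (-1815.5, 14),
    "model_C (spatial error)": (-1818.2, 14),
}
for name, (loglik, k) in models.items():
    print(f"{name}: AIC = {aic(loglik, k):.2f}")
best = min(models, key=lambda m: aic(*models[m]))
print("preferred model:", best)
```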
NASA Astrophysics Data System (ADS)
Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria
2016-04-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP models, residual scores show that the NSHMP model is preferred in locations with earthquake occurrence, due to the lower seismicity rates forecasted by the UCERF2 model.
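The consistency tests referred to above (for example the L-test) compare the joint log-likelihood of the observed earthquake counts under a gridded rate forecast with the distribution of log-likelihoods from catalogs simulated from that same forecast. The sketch below follows that general logic under a Poisson-per-bin assumption with a made-up forecast grid; it is not the CSEP testing-center implementation.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

# Hypothetical gridded forecast: expected number of events per space-magnitude bin.
forecast = rng.gamma(shape=0.5, scale=0.2, size=200)
observed = rng.poisson(forecast)          # observed counts (simulated here for illustration)

def joint_log_lik(counts, rates):
    """Joint Poisson log-likelihood of a binned catalog under a rate forecast."""
    return poisson.logpmf(counts, rates).sum()

L_obs = joint_log_lik(observed, forecast)

# Reference distribution: log-likelihoods of catalogs simulated from the forecast itself.
L_sim = np.array([joint_log_lik(rng.poisson(forecast), forecast) for _ in range(1000)])
quantile_score = np.mean(L_sim <= L_obs)   # very small values flag inconsistency
print(f"observed joint log-likelihood = {L_obs:.1f}, quantile score = {quantile_score:.3f}")
```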
Loss of delta catenin function in severe autism
Turner, Tychele N.; Sharma, Kamal; Oh, Edwin C.; Liu, Yangfan P.; Collins, Ryan L.; Sosa, Maria X.; Auer, Dallas R.; Brand, Harrison; Sanders, Stephan J.; Moreno-De-Luca, Daniel; Pihur, Vasyl; Plona, Teri; Pike, Kristen; Soppet, Daniel R.; Smith, Michael W.; Cheung, Sau Wai; Martin, Christa Lese; State, Matthew W.; Talkowski, Michael E.; Cook, Edwin; Huganir, Richard; Katsanis, Nicholas; Chakravarti, Aravinda
2015-01-01
Autism is a multifactorial neurodevelopmental disorder affecting more males than females; consequently, under a multifactorial genetic hypothesis, females are affected only when they cross a higher biological threshold. We hypothesize that deleterious variants at conserved residues are enriched in severely affected patients arising from FEMFs (female-enriched multiplex families) with severe disease, enhancing the detection of key autism genes in modest numbers of cases. We show the utility of this strategy by identifying missense and dosage sequence variants in the gene encoding the adhesive junction-associated delta catenin protein (CTNND2) in FEMFs and demonstrating their loss-of-function effect by functional analyses in zebrafish embryos and cultured hippocampal neurons from wildtype and Ctnnd2 null mouse embryos. Finally, through gene expression and network analyses, we highlight a critical role for CTNND2 in neuronal development and an intimate connection to chromatin biology. Our data contribute to the understanding of the genetic architecture of autism and suggest that genetic analyses of phenotypic extremes, such as FEMFs, are of innate value in multifactorial disorders. PMID:25807484
Modeling Adversaries in Counterterrorism Decisions Using Prospect Theory.
Merrick, Jason R W; Leclerc, Philip
2016-04-01
Counterterrorism decisions have been an intense area of research in recent years. Both decision analysis and game theory have been used to model such decisions, and more recently approaches have been developed that combine the techniques of the two disciplines. However, each of these approaches assumes that the attacker is maximizing its utility. Experimental research shows that human beings do not make decisions by maximizing expected utility without aid, but instead deviate in specific ways such as loss aversion or likelihood insensitivity. In this article, we modify existing methods for counterterrorism decisions. We keep expected utility as the defender's paradigm for identifying the rational decision, but we use prospect theory to solve for the attacker's decision, descriptively modeling the attacker's loss aversion and likelihood insensitivity. We study the effects of this approach in a critical decision: whether to screen containers entering the United States for radioactive materials. We find that the defender's optimal decision is sensitive to the attacker's levels of loss aversion and likelihood insensitivity, meaning that understanding such descriptive decision effects is important in making such decisions. © 2014 Society for Risk Analysis.
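Likelihood insensitivity of the kind attributed to the attacker is commonly captured with an inverse-S probability-weighting function; the one-parameter Tversky-Kahneman (1992) form is a standard choice. The sketch below is illustrative only and does not reproduce the authors' attacker model or parameter values.

```python
import numpy as np

def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting: inverse-S shaped for gamma < 1."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function with loss aversion parameter lam (scalar input)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# Prospect value of a simple binary attack outcome (hypothetical payoffs and probability).
p_success, gain, loss = 0.05, 100.0, -10.0
pv = weight(p_success) * value(gain) + weight(1.0 - p_success) * value(loss)
print(f"weighted probability of success: {weight(p_success):.3f}")
print(f"prospect value of the attack:    {pv:.2f}")
```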
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
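For orientation, the harmonic mean estimator that the proposed partition-weighted-kernel estimator generalizes can be written in a few lines: the marginal likelihood is approximated by the reciprocal of the posterior average of the reciprocal likelihood. The toy below uses a conjugate normal model with hypothetical data so the exact answer is available for comparison; it also tends to expose the estimator's well-known instability.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.special import logsumexp

rng = np.random.default_rng(3)
y = rng.normal(loc=0.5, scale=1.0, size=50)     # hypothetical data
n = y.size

# Conjugate model: y_i ~ N(mu, 1), prior mu ~ N(0, 1), so the exact answer is known.
post_mean, post_var = y.sum() / (n + 1), 1.0 / (n + 1)
exact_log_ml = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                          cov=np.eye(n) + np.ones((n, n)))

# Harmonic mean estimator from posterior draws:
#   m_hat = [ (1/S) * sum_s 1 / L(theta_s) ]^(-1), computed on the log scale.
draws = rng.normal(post_mean, np.sqrt(post_var), size=20000)
log_lik = norm.logpdf(y[None, :], draws[:, None], 1.0).sum(axis=1)
log_ml_hm = np.log(draws.size) - logsumexp(-log_lik)

print(f"exact log marginal likelihood: {exact_log_ml:.2f}")
print(f"harmonic mean estimate:        {log_ml_hm:.2f}")
```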
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. Large-scale, high-precision hydrological simulation has elaborated spatial descriptions of hydrological behavior, but this trend is accompanied by growing model complexity and numbers of parameters, which poses new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that use iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
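The GLUE procedure itself is compact: sample candidate parameter sets, score each with an informal likelihood measure such as the Nash-Sutcliffe efficiency, discard non-behavioral sets below a threshold, and form likelihood-weighted prediction bounds. The toy rainfall-runoff model, threshold, and simple random sampling below are placeholders standing in for the study's hydrological models and heuristic samplers.

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_model(rain, a, b):
    """Stand-in for a rainfall-runoff model: runoff = a * rain ** b."""
    return a * rain ** b

rain = rng.gamma(2.0, 5.0, size=100)
q_obs = toy_model(rain, a=0.4, b=1.2) + rng.normal(0.0, 1.0, size=100)   # synthetic observations

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal GLUE likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# 1. Sample candidate parameter sets (plain Monte Carlo stands in for the heuristic samplers).
a_s = rng.uniform(0.1, 1.0, size=5000)
b_s = rng.uniform(0.8, 1.6, size=5000)
scores = np.array([nse(toy_model(rain, a, b), q_obs) for a, b in zip(a_s, b_s)])

# 2. Keep behavioral sets above a threshold and normalize their likelihoods to weights.
behavioral = scores > 0.6
w = scores[behavioral]
w = w / w.sum()

# 3. Likelihood-weighted predictive bounds for a new rainfall input.
rain_new = 25.0
preds = toy_model(rain_new, a_s[behavioral], b_s[behavioral])
order = np.argsort(preds)
cum_w = np.cumsum(w[order])
lo, hi = preds[order][np.searchsorted(cum_w, [0.05, 0.95])]
print(f"behavioral sets: {behavioral.sum()}, 90% GLUE bounds for the new event: [{lo:.1f}, {hi:.1f}]")
```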
Variational Bayesian Parameter Estimation Techniques for the General Linear Model
Starke, Ludger; Ostwald, Dirk
2017-01-01
Variational Bayes (VB), variational maximum likelihood (VML), restricted maximum likelihood (ReML), and maximum likelihood (ML) are cornerstone parametric statistical estimation techniques in the analysis of functional neuroimaging data. However, the theoretical underpinnings of these model parameter estimation techniques are rarely covered in introductory statistical texts. Because of the widespread practical use of VB, VML, ReML, and ML in the neuroimaging community, we reasoned that a theoretical treatment of their relationships and their application in a basic modeling scenario may be helpful for both neuroimaging novices and practitioners alike. In this technical study, we thus revisit the conceptual and formal underpinnings of VB, VML, ReML, and ML and provide a detailed account of their mathematical relationships and implementational details. We further apply VB, VML, ReML, and ML to the general linear model (GLM) with non-spherical error covariance as commonly encountered in the first-level analysis of fMRI data. To this end, we explicitly derive the corresponding free energy objective functions and ensuing iterative algorithms. Finally, in the applied part of our study, we evaluate the parameter and model recovery properties of VB, VML, ReML, and ML, first in an exemplary setting and then in the analysis of experimental fMRI data acquired from a single participant under visual stimulation. PMID:28966572
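As a point of reference for the ML estimator discussed above, when the non-spherical error covariance V is treated as known, the maximum likelihood estimate of the GLM coefficients reduces to generalized least squares. The snippet below is a minimal illustration with a simulated AR(1)-type covariance, not the free-energy machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 3

# Design matrix and an AR(1)-type error covariance (both hypothetical).
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
rho = 0.4
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
beta_true = np.array([1.0, 0.5, -0.3])
y = X @ beta_true + np.linalg.cholesky(V) @ rng.normal(size=n)

# With V known, the ML estimate equals generalized least squares:
#   beta_hat = (X' V^-1 X)^-1 X' V^-1 y
V_inv = np.linalg.inv(V)
beta_ml = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
cov_beta = np.linalg.inv(X.T @ V_inv @ X)
print("ML/GLS estimate:", np.round(beta_ml, 3))
print("standard errors:", np.round(np.sqrt(np.diag(cov_beta)), 3))
```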
Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field.
Dimmock, Stephen G; Kouwenberg, Roy; Mitchell, Olivia S; Peijnenburg, Kim
2015-12-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model's estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences.
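The alpha-MaxMin rule referenced above evaluates an ambiguous act by mixing the worst-case and best-case expected utilities over a set of candidate priors, with alpha measuring ambiguity aversion. A minimal sketch with hypothetical payoffs and a hypothetical probability interval:

```python
import numpy as np

def alpha_maxmin(payoffs, prior_set, alpha, utility=np.sqrt):
    """alpha * worst-case expected utility + (1 - alpha) * best-case, over a set of priors."""
    expected_utils = [float(np.dot(p, utility(payoffs))) for p in prior_set]
    return alpha * min(expected_utils) + (1.0 - alpha) * max(expected_utils)

payoffs = np.array([0.0, 100.0])                       # lose / win (hypothetical)
# Perceived ambiguity: probability of winning anywhere in [0.3, 0.7].
prior_set = [np.array([1.0 - p, p]) for p in np.linspace(0.3, 0.7, 41)]

for alpha in (0.3, 0.5, 0.8):                          # larger alpha = more ambiguity averse
    print(f"alpha = {alpha}: act value = {alpha_maxmin(payoffs, prior_set, alpha):.2f}")
```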
Janssen, Eva; van Osch, Liesbeth; de Vries, Hein; Lechner, Lilian
2013-01-01
This study aimed to disentangle the influence of rational (e.g., 'I think …') and intuitive (e.g., 'I feel …') probability beliefs in the behavioural decision-making process regarding skin cancer prevention practices. Structural equation modelling was used in two longitudinal surveys (sun protection during winter sports [N = 491]; sun protection during summer [N = 277]) to examine direct and indirect behavioural effects of affective and cognitive likelihood (i.e. unmediated or mediated by intention), controlling for attitude, social influence and self-efficacy. Affective likelihood was directly related to sun protection in both studies, whereas no direct effects were found for cognitive likelihood. After accounting for past sun protective behaviour, affective likelihood was only directly related to sun protection in Study 1. No support was found for the indirect effects of affective and cognitive likelihood through intention. The findings underscore the importance of feelings of (cancer) risk in the decision-making process, which should be acknowledged by health behaviour theories and risk communication practices. Suggestions for future research are discussed.
The molecular biology of inflammatory bowel diseases.
Corfield, Anthony P; Wallace, Heather M; Probert, Chris S J
2011-08-01
IBDs (inflammatory bowel diseases) are a group of diseases affecting the gastrointestinal tract. The diseases are multifactorial and cover genetic aspects: susceptibility genes, innate and adaptive responses to inflammation, and structure and efficacy of the mucosal protective barrier. Animal models of IBD have been developed to gain further knowledge of the disease mechanisms. These topics form an overlapping background to enable an improved understanding of the molecular features of these diseases. A series of articles is presented based on the topics covered at the Biochemical Society Focused Meeting The Molecular Biology of Inflammatory Bowel Diseases.
Molecular mechanisms underlying osteoarthritis development: Notch and NF-κB.
Saito, Taku; Tanaka, Sakae
2017-05-15
Osteoarthritis (OA) is a multi-factorial and highly prevalent joint disorder worldwide. Since the establishment of murine surgical knee OA models in 2005, many of the key molecules and signalling pathways responsible for OA development have been identified. Here we review the roles of two multi-functional signalling pathways in OA development: Notch and nuclear factor kappa-light-chain-enhancer of activated B cells. Previous studies have identified various aspects of articular chondrocyte regulation by these pathways. However, comprehensive understanding of the molecular networks regulating articular cartilage homeostasis and OA pathogenesis is needed.
ATAC Autocuer Modeling Analysis.
1981-01-01
the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
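The best-known special case of this family is the multiplicative update of Lee and Seung for the generalized Kullback-Leibler cost, which corresponds to a Poisson likelihood. The sketch below shows those standard updates on random non-negative data; it is not the authors' generalized quasi-likelihood algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)
V = rng.poisson(5.0, size=(60, 40)).astype(float)   # non-negative data matrix (hypothetical)
r = 5                                               # factorization rank
W = rng.random((V.shape[0], r)) + 0.1
H = rng.random((r, V.shape[1])) + 0.1
eps = 1e-12

for _ in range(200):
    # Multiplicative updates minimizing the generalized KL divergence D(V || WH),
    # i.e. the Poisson (quasi-)likelihood case.
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)

kl = np.sum(V * np.log((V + eps) / (W @ H + eps)) - V + W @ H)
print(f"generalized KL divergence after 200 iterations: {kl:.2f}")
```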
Uncertainty analysis of signal deconvolution using a measured instrument response function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.
2016-10-05
A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). For the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers investigated here, the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. We apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
Genetics and child psychiatry: I Advances in quantitative and molecular genetics.
Rutter, M; Silberg, J; O'Connor, T; Simonoff, E
1999-01-01
Advances in quantitative psychiatric genetics as a whole are reviewed with respect to conceptual and methodological issues in relation to statistical model fitting, new genetic designs, twin and adoptee studies, definition of the phenotype, pervasiveness of genetic influences, pervasiveness of environmental influences, shared and nonshared environmental effects, and nature-nurture interplay. Advances in molecular genetics are discussed in relation to the shifts in research strategies to investigate multifactorial disorders (affected relative linkage designs, association strategies, and quantitative trait loci studies); new techniques and identified genetic mechanisms (expansion of trinucleotide repeats, genomic imprinting, mitochondrial DNA, fluorescent in-situ hybridisation, behavioural phenotypes, and animal models); and the successful localisation of genes.
Olfactory-triggered panic attacks among Khmer refugees: a contextual approach.
Hinton, Devon; Pich, Vuth; Chhean, Dara; Pollack, Mark
2004-06-01
One hundred Khmer refugees attending a psychiatric clinic were surveyed to determine the prevalence of olfactory-triggered panic attacks as well as certain characteristics of the episodes, including trigger (i.e. type of odor), frequency, length, somatic symptoms, and the rate of associated flashbacks and catastrophic cognitions. Forty-five of the 100 patients had experienced an olfactory-triggered panic attack in the last month. Trauma associations and catastrophic cognitions (e.g. fears of a 'wind attack', 'weakness', and 'weak heart') were common during events of olfactory panic. Several case examples are presented. A multifactorial model of the generation of olfactory panic is adduced. The therapeutic implications of this model for the treatment of olfactory panic are discussed.
Do's and Don'ts of Computer Models for Planning
ERIC Educational Resources Information Center
Hammond, John S., III
1974-01-01
Concentrates on the managerial issues involved in computer planning models. Describes what computer planning models are and the process by which managers can increase the likelihood of computer planning models being successful in their organizations. (Author/DN)
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
Ning, Jing; Qin, Jing; Shen, Yu
2014-01-01
The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
A parimutuel gambling perspective to compare probabilistic seismicity forecasts
NASA Astrophysics Data System (ADS)
Zechar, J. Douglas; Zhuang, Jiancang
2014-10-01
Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.
Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik
2014-12-01
Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue to milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format - leading to larger datasets - thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during the lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both Gaussian quadrature method and Laplace approximation. Further, the performance of the two methods was compared with the performance of a widely used estimation approach for frailty Cox models based on the penalized partial likelihood. The simulation study showed good performance for the Poisson maximum likelihood approach with Gaussian quadrature and biased variance component estimates for both the Poisson maximum likelihood with Laplace approximation and penalized partial likelihood approaches. Copyright © 2014. Published by Elsevier B.V.
Recreating a functional ancestral archosaur visual pigment.
Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P
2002-09-01
The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with lambda(max) values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
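The role of the open intervals before the first and after the last dated deposit can be made concrete for the simplest (Poisson/exponential) candidate model: open intervals enter the likelihood as right-censored observations, which lengthens the estimated mean return time relative to a naive average of the closed intervals. The sketch below uses made-up interval data and ignores age-dating uncertainty; it is not the paper's full likelihood or Monte Carlo machinery.

```python
import numpy as np

# Hypothetical inter-event times (kyr) between dated mass-transport deposits, plus the
# open intervals before the first and after the most recent deposit (no event observed yet).
closed_intervals = np.array([12.0, 7.5, 20.0, 9.0, 15.5])
open_intervals = np.array([6.0, 10.0])

def log_lik(lam):
    """Exponential model: closed intervals contribute log(lam) - lam*t, open ones only -lam*t."""
    return (closed_intervals.size * np.log(lam)
            - lam * closed_intervals.sum()
            - lam * open_intervals.sum())

# Closed-form MLE for this right-censored exponential likelihood.
lam_hat = closed_intervals.size / (closed_intervals.sum() + open_intervals.sum())
print(f"naive mean return time (closed intervals only): {closed_intervals.mean():.1f} kyr")
print(f"MLE mean return time including open intervals:  {1.0 / lam_hat:.1f} kyr")
print(f"log-likelihood at the MLE: {log_lik(lam_hat):.2f}")
```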
McDonald, Ewan; Watterson, Andrew; Tyler, Andrew N; McArthur, John; Scott, E Marion
2014-01-01
Background: It is suggested the declining male birth proportion in some industrialized countries is linked to ubiquitous endocrine disruptor exposure. Stress and advanced parental age are determinants which frequently present positive findings. Multi-factorial influences on population sex ratio are rarely explored or tested in research. Objectives: To test the hypothesis that dual factors of pollution and population stress affects sex proportion at birth through geographical analysis of Central Scotland. Methods: The study incorporates the use of Geographical Information Systems (GIS) tools to overlay modeled point source endocrine disruptor air emissions with “small-area” data on multiple deprivation (a proxy measurement of stress) and birth sex. Historical review of regional sex ratio trends presents additional data on sex ratio in Scotland to consider. Results: There was no overall concentration in Central Scotland of low sex ratio neighborhoods with areas where endocrine disruptor air pollution and deprivation or economic stress were high. Historical regional trends in Scotland (from 1973), however, do show significantly lower sex ratio values for populations where industrial air pollution is highest (i.e. Eastern Central Scotland). Conclusions: Use of small area data sets and pollution inventories is a potential new method of inquiry for reproductive environmental and health protection monitoring and has produced interesting findings. PMID:25000111
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
Johnson, Timothy R; Kuhn, Kristine M
2015-12-01
This paper introduces the ltbayes package for R. This package includes a suite of functions for investigating the posterior distribution of latent traits of item response models. These include functions for simulating realizations from the posterior distribution, profiling the posterior density or likelihood function, calculation of posterior modes or means, Fisher information functions and observed information, and profile likelihood confidence intervals. Inferences can be based on individual response patterns or sets of response patterns such as sum scores. Functions are included for several common binary and polytomous item response models, but the package can also be used with user-specified models. This paper introduces some background and motivation for the package, and includes several detailed examples of its use.
Ahn, Jaeil; Mukherjee, Bhramar; Banerjee, Mousumi; Cooney, Kathleen A.
2011-01-01
The stereotype regression model for categorical outcomes, proposed by Anderson (1984), is nested between the baseline category logits and adjacent category logits model with proportional odds structure. The stereotype model is more parsimonious than the ordinary baseline-category (or multinomial logistic) model due to a product representation of the log odds-ratios in terms of a common parameter corresponding to each predictor and category-specific scores. The model can be used for both ordered and unordered outcomes. For ordered outcomes, the stereotype model allows more flexibility than the popular proportional odds model in capturing highly subjective ordinal scales that do not result from categorization of a single latent variable but are inherently multidimensional in nature. As pointed out by Greenland (1994), an additional advantage of the stereotype model is that it provides unbiased and valid inference under outcome-stratified sampling as in case-control studies. In addition, for matched case-control studies, the stereotype model is amenable to the classical conditional likelihood principle, whereas there is no reduction due to sufficiency under the proportional odds model. In spite of these attractive features, the model has seen relatively little application, as there are issues with maximum likelihood estimation and likelihood-based testing approaches due to non-linearity and lack of identifiability of the parameters. We present a comprehensive Bayesian inference and model comparison procedure for this class of models as an alternative to the classical frequentist approach. We illustrate our methodology by analyzing data from The Flint Men’s Health Study, a case-control study of prostate cancer in African-American men aged 40 to 79 years. We use clinical staging of prostate cancer in terms of Tumors, Nodes and Metastasis (TNM) as the categorical response of interest. PMID:19731262
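For reference, the product representation that gives the stereotype model its parsimony is usually written as follows (a standard textbook form, with K as the reference category and illustrative identifiability constraints; the notation is not copied from the paper):

```latex
\log \frac{\Pr(Y = k \mid \mathbf{x})}{\Pr(Y = K \mid \mathbf{x})}
  \;=\; \alpha_k + \phi_k\,\boldsymbol{\beta}^{\top}\mathbf{x},
  \qquad k = 1, \dots, K-1,
```

with constraints such as \alpha_K = \phi_K = 0 and \phi_1 = 1. Each predictor contributes a single coefficient in \boldsymbol{\beta} that is scaled by the category-specific scores \phi_k, which is both the source of the parsimony and, because \phi_k and \boldsymbol{\beta} enter as a product, of the non-linearity and identifiability issues mentioned above.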
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
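A minimal sketch of the second, sampling-based assessment under assumed distributions; the limit state and the load and capacity values are hypothetical, and the influence-diagram structure of the original method is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """Basic Latin-hypercube sample of n points in the d-dimensional unit cube."""
    strata = (np.arange(n) + rng.random((d, n))) / n     # one stratified draw per dim
    return np.column_stack([rng.permutation(s) for s in strata])

# Hypothetical limit state: the component fails when applied load exceeds capacity.
u = latin_hypercube(20_000, 2)
load = norm.ppf(u[:, 0], loc=80.0, scale=15.0)       # assumed load distribution
capacity = norm.ppf(u[:, 1], loc=120.0, scale=10.0)  # assumed capacity distribution

p_fail = np.mean(load > capacity)
print(f"estimated probability of failure: {p_fail:.4f}")
```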
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches-the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation-with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
NASA Technical Reports Server (NTRS)
Watson, Clifford
2010-01-01
Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting, gives a visual confirmation of the relationship between causes and their controls.
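A small illustration of how the three axes might be combined numerically. The scales, the example causes, and the multiplicative scoring rule are assumptions for demonstration only, not the NASA tool itself.

```python
# Rank each hypothetical hazard cause on Likelihood (1-5), Severity (1-4) and
# Level of Control (1 = eliminated by design ... 6 = protective equipment only),
# then flag the least-well-controlled "tall pole".
causes = {
    "valve rupture":  {"likelihood": 2, "severity": 4, "control": 5},
    "software fault": {"likelihood": 3, "severity": 3, "control": 2},
    "operator error": {"likelihood": 4, "severity": 2, "control": 6},
}

def risk_volume(c):
    # Treat the three axes as multiplicative so poorly controlled causes stand out.
    return c["likelihood"] * c["severity"] * c["control"]

tall_pole = max(causes, key=lambda name: risk_volume(causes[name]))
for name, c in sorted(causes.items(), key=lambda kv: -risk_volume(kv[1])):
    print(f"{name:16s} L={c['likelihood']} S={c['severity']} "
          f"C={c['control']} score={risk_volume(c)}")
print("tall pole:", tall_pole)
```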
NASA Technical Reports Server (NTRS)
Watson, Clifford C.
2011-01-01
Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting gives a visual confirmation of the relationship between causes and their controls.
Risk Presentation Using the Three Dimensions of Likelihood, Severity, and Level of Control
NASA Technical Reports Server (NTRS)
Watson, Clifford
2010-01-01
Traditional hazard analysis techniques utilize a two-dimensional representation of the results determined by relative likelihood and severity of the residual risk. These matrices present a quick-look at the Likelihood (Y-axis) and Severity (X-axis) of the probable outcome of a hazardous event. A three-dimensional method, described herein, utilizes the traditional X and Y axes, while adding a new, third dimension, shown as the Z-axis, and referred to as the Level of Control. The elements of the Z-axis are modifications of the Hazard Elimination and Control steps (also known as the Hazard Reduction Precedence Sequence). These steps are: 1. Eliminate risk through design. 2. Substitute less risky materials for more hazardous materials. 3. Install safety devices. 4. Install caution and warning devices. 5. Develop administrative controls (to include special procedures and training.) 6. Provide protective clothing and equipment. When added to the two-dimensional models, the level of control adds a visual representation of the risk associated with the hazardous condition, creating a tall-pole for the least-well-controlled failure while establishing the relative likelihood and severity of all causes and effects for an identified hazard. Computer modeling of the analytical results, using spreadsheets and three-dimensional charting, gives a visual confirmation of the relationship between causes and their controls.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-01-01
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
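A toy check of the idea under a hypothetical data-generating process (not the authors' estimator or proofs): fit a main-terms Poisson working model with statsmodels and compare the treatment coefficient with the unadjusted marginal log rate ratio. The covariate effect is deliberately not log-linear, so the working model is misspecified.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
treat = rng.integers(0, 2, size=n)                  # randomized treatment
w = rng.normal(size=n)                              # baseline covariate
# Hypothetical outcome model (deliberately not log-linear in w):
mu = np.exp(0.3 * treat + 0.8 * np.abs(w) - 0.5)
y = rng.poisson(mu)

# Main-terms Poisson working model: intercept + treatment + w.
X = sm.add_constant(np.column_stack([treat, w]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
log_rr_model = fit.params[1]                        # coefficient on treatment

# Unadjusted marginal log rate ratio for comparison.
log_rr_raw = np.log(y[treat == 1].mean() / y[treat == 0].mean())
print(f"working-model estimate: {log_rr_model:.3f}, unadjusted: {log_rr_raw:.3f}")
```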
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
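For context, the Tofts dual-compartment model referred to above is commonly written as

```latex
C_t(t) \;=\; K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau,
\qquad k_{ep} \;=\; \frac{K^{\mathrm{trans}}}{v_e},
```

where C_t is the tissue tracer concentration, C_p the arterial input function, K^trans the volume transfer constant, and v_e the extravascular extracellular volume fraction; in the paper the observed DCE-MRI time series is then treated as a Gaussian stochastic process around this kinetic mean.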
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.
Batchelor, Frances A; Hill, Keith D; Mackintosh, Shylie F; Said, Catherine M; Whitehead, Craig H
2012-09-01
To determine whether a multifactorial falls prevention program reduces falls in people with stroke at risk of recurrent falls and whether this program leads to improvements in gait, balance, strength, and fall-related efficacy. A single blind, multicenter, randomized controlled trial with 12-month follow-up. Participants were recruited after discharge from rehabilitation and followed up in the community. Participants (N=156) were people with stroke at risk of recurrent falls being discharged home from rehabilitation. Tailored multifactorial falls prevention program and usual care (n=71) or control (usual care, n=85). Primary outcomes were rate of falls and proportion of fallers. Secondary outcomes included injurious falls, falls risk, participation, activity, leg strength, gait speed, balance, and falls efficacy. There was no significant difference in fall rate (intervention: 1.89 falls/person-year, control: 1.76 falls/person-year, incidence rate ratio=1.10, P=.74) or the proportion of fallers between the groups (risk ratio=.83, 95% confidence interval=.60-1.14). There was no significant difference in injurious fall rate (intervention: .74 injurious falls/person-year, control: .49 injurious falls/person-year, incidence rate ratio=1.57, P=.25), and there were no significant differences between groups on any other secondary outcome. This multifactorial falls prevention program was not effective in reducing falls in people with stroke who are at risk of falls nor was it more effective than usual care in improving gait, balance, and strength in people with stroke. Further research is required to identify effective interventions for this high-risk group. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
Multiple-Hit Parameter Estimation in Monolithic Detectors
Barrett, Harrison H.; Lewellen, Tom K.; Miyaoka, Robert S.
2014-01-01
We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation- camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%–12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied. PMID:23193231
NASA Astrophysics Data System (ADS)
Staley, Dennis; Negri, Jacquelyn; Kean, Jason
2016-04-01
Population expansion into fire-prone steeplands has resulted in an increase in post-fire debris-flow risk in the western United States. Logistic regression methods for determining debris-flow likelihood and the calculation of empirical rainfall intensity-duration thresholds for debris-flow initiation represent two common approaches for characterizing hazard and reducing risk. Logistic regression models are currently being used to rapidly assess debris-flow hazard in response to design storms of known intensities (e.g. a 10-year recurrence interval rainstorm). Empirical rainfall intensity-duration thresholds comprise a major component of the United States Geological Survey (USGS) and the National Weather Service (NWS) debris-flow early warning system at a regional scale in southern California. However, these two modeling approaches remain independent, with each approach having limitations that do not allow for synergistic local-scale (e.g. drainage-basin scale) characterization of debris-flow hazard during intense rainfall. The current logistic regression equations consider rainfall a unique independent variable, which prevents the direct calculation of the relation between rainfall intensity and debris-flow likelihood. Regional (e.g. mountain range or physiographic province scale) rainfall intensity-duration thresholds fail to provide insight into the basin-scale variability of post-fire debris-flow hazard and require an extensive database of historical debris-flow occurrence and rainfall characteristics. Here, we present a new approach that combines traditional logistic regression and intensity-duration threshold methodologies. This method allows for local characterization of both the likelihood that a debris-flow will occur at a given rainfall intensity, the direct calculation of the rainfall rates that will result in a given likelihood, and the ability to calculate spatially explicit rainfall intensity-duration thresholds for debris-flow generation in recently burned areas. Our approach synthesizes the two methods by incorporating measured rainfall intensity into each model variable (based on measures of topographic steepness, burn severity and surface properties) within the logistic regression equation. This approach provides a more realistic representation of the relation between rainfall intensity and debris-flow likelihood, as likelihood values asymptotically approach zero when rainfall intensity approaches 0 mm/h, and increase with more intense rainfall. Model performance was evaluated by comparing predictions to several existing regional thresholds. The model, based upon training data collected in southern California, USA, has proven to accurately predict rainfall intensity-duration thresholds for other areas in the western United States not included in the original training dataset. In addition, the improved logistic regression model shows promise for emergency planning purposes and real-time, site-specific early warning. With further validation, this model may permit the prediction of spatially-explicit intensity-duration thresholds for debris-flow generation in areas where empirically derived regional thresholds do not exist. This improvement would permit the expansion of the early-warning system into other regions susceptible to post-fire debris flow.
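A minimal sketch of the combined idea with made-up coefficients and basin attributes (the published model's variables, training data and coefficient values are not reproduced): rainfall intensity multiplies each predictor inside the logistic equation, so the modeled likelihood stays small for weak rain and the equation can be inverted to give an intensity threshold for any target likelihood.

```python
import numpy as np

# Hypothetical coefficients and normalized basin descriptors.
beta0, b_steep, b_burn, b_soil = -3.5, 0.4, 0.7, 0.3
steep, burn, soil = 0.6, 0.8, 0.5

def debris_flow_likelihood(intensity_mm_h):
    """Likelihood of a post-fire debris flow at a given rainfall intensity."""
    # Each predictor is multiplied by rainfall intensity, so the likelihood is
    # controlled by the intercept as intensity -> 0 and grows with heavier rain.
    x = beta0 + intensity_mm_h * (b_steep * steep + b_burn * burn + b_soil * soil)
    return 1.0 / (1.0 + np.exp(-x))

def threshold_intensity(p_target=0.5):
    """Rainfall intensity at which the modeled likelihood reaches p_target."""
    logit = np.log(p_target / (1.0 - p_target))
    return (logit - beta0) / (b_steep * steep + b_burn * burn + b_soil * soil)

print(debris_flow_likelihood(20.0), threshold_intensity(0.5))
```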
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives: Rotator cuff tear is a common cause of shoulder disease. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods: In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into “tear” and “no tear” groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results: Our proposed data mining procedures outperformed the classic statistical method. The correction rate, sensitivity, specificity and area under the ROC curve of predicting a rotator cuff tear were statistically better in the ANN and decision tree models compared to logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions: Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease, enhancing diagnostic decision making for rotator cuff tears. PMID:24733553
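The Bayesian step is the familiar pretest-to-posttest conversion that a Fagan nomogram performs graphically; a small sketch with hypothetical numbers follows.

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert a pretest probability to a posttest probability via Bayes' rule,
    i.e. the computation a Fagan nomogram performs graphically."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical numbers: 40% pretest probability of a rotator cuff tear and a
# positive likelihood ratio of 5 from a prediction model that outputs "tear".
print(round(posttest_probability(0.40, 5.0), 3))   # -> 0.769
```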
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-01-01
Objective: This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. Design: We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Results: Advanced colorectal neoplasia was detected in 2544 of the 35 918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7–8. Conclusions: The developed and internally validated score, consisting of simple clinical factors, successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. PMID:24385598
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy.
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-07-01
This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Advanced colorectal neoplasia was detected in 2544 of the 35,918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7-8. The developed and internally validated score, consisting of simple clinical factors, successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies.
A multifactorial anti‐cachectic approach for cancer cachexia in a rat model undergoing chemotherapy
Toledo, Míriam; Penna, Fabio; Oliva, Francesc; Luque, Melania; Betancourt, Angelica; Marmonti, Enrica; López‐Soriano, Francisco J.; Argilés, Josep M.
2015-01-01
Abstract Background The effectiveness of drugs aimed at counteracting cancer cachexia is generally tested in pre‐clinical rodent models, where only the tumour‐induced alterations are taken into account, excluding the co‐presence of anti‐tumour molecules that could worsen the scenario and/or interfere with the treatment. Methods The aim of the present investigation has been to assess the efficacy of a multifactorial treatment, including formoterol and megestrol acetate, in cachectic tumour‐bearing rats (Yoshida AH‐130, a highly cachectic tumour) undergoing chemotherapy (sorafenib). Results Treatment of cachectic tumour‐bearing rats with sorafenib (90 mg/kg) causes an important decrease in tumour cell content due to both reduced cell proliferation and increased apoptosis. As a consequence, animal survival significantly improves, while cachexia occurrence persists. Multi‐factorial treatment using both formoterol and megestrol acetate is highly effective in preventing muscle wasting and has more powerful effects than the single formoterol administration. In addition, both physical activity and grip strength are significantly improved as compared with the untreated tumour‐bearing animals. The effects of the multi‐factorial treatment include increased food intake (likely due to megestrol acetate) and decreased protein degradation, as shown by the reduced expression of genes associated with both proteasome and calpain proteolytic systems. Conclusions The combination of the two drugs proved to be a promising strategy for treating cancer cachexia in a pre‐clinical setting that better resembles the human condition, thus providing a strong rationale for the use of such combination in clinical trials involving cachectic cancer patients. PMID:27066318
Liu, Zheng; Cai, Wei; Lang, Ming; Yan, Ruizuo; Li, Zhenshen; Zhang, Gaoxiao; Yu, Pei; Wang, Yuqiang; Sun, Yewei; Zhang, Zaijun
2017-04-01
Parkinson's disease (PD) is a complex neurodegenerative disorder with multifactorial pathologies, including progressive loss of dopaminergic (DA) neurons, oxidative stress, mitochondrial dysfunction, and increased monoamine oxidase (MAO) enzyme activity. There are currently only a few agents approved to ameliorate the symptoms of PD; however, no agent is able to reverse the progression of the disease. Due to the multifactorial pathologies, it is necessary to develop multifunctional agents that can affect more than one target involved in the disease pathology. We have designed and synthesized a series of new multifunctional anti-Parkinson's compounds which can protect cerebral granular neurons from 1-methyl-4-phenylpyridinium (MPP + ) insult, scavenge free radicals, and inhibit monoamine oxidase (MAO)/cholinesterase (ChE) activities. Among them, MT-20R exhibited the most potent MAO-B inhibition both in vitro and in vivo. We further investigated the neuroprotective effects of MT-20R using a 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD mouse model. In vivo, MT-20R alleviated MPTP-induced motor deficits, raised the striatal contents of dopamine and its metabolites, and restored the expression of tyrosine hydroxylase (TH) and the number of TH-positive DA neurons in the substantia nigra. Additionally, MT-20R enhanced the expression of Bcl-2, decreased the expression of Bax and Caspase 3, and activated the AKT/Nrf2/HO-1 signaling pathway. These findings suggest that MT-20R may be a novel therapeutic candidate for treatment of PD.
Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood
NASA Technical Reports Server (NTRS)
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30x30m resolution predictions for more than 38,000 sq km of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Ganga, Ken; Page, Lyman; Cheng, Edward; Meyer, Stephan
1994-01-01
In many cosmological models, the large angular scale anisotropy in the cosmic microwave background is parameterized by a spectral index, n, and a quadrupolar amplitude, Q. For a Harrison-Peebles-Zel'dovich spectrum, n = 1. Using data from the Far Infrared Survey (FIRS) and a new statistical measure, a contour plot of the likelihood for cosmological models for which -1 < n < 3 and 0 ≤ Q ≤ 50 μK is obtained. Depending upon the details of the analysis, the maximum likelihood occurs at n between 0.8 and 1.4 and Q between 18 and 21 μK. Regardless of Q, the likelihood is always less than half its maximum for n < -0.4 and for n > 2.2, as it is for Q < 8 μK and Q > 44 μK.
Vermunt, Neeltje P C A; Westert, Gert P; Olde Rikkert, Marcel G M; Faber, Marjan J
2018-03-01
To assess the impact of patient characteristics, patient-professional engagement, communication and context on the probability that healthcare professionals will discuss goals or priorities with older patients. Secondary analysis of cross-sectional data from the 2014 Commonwealth Fund International Health Policy Survey of Older Adults. 11 western countries. Community-dwelling adults, aged 55 or older. Assessment of goals and priorities. The final sample size consisted of 17,222 respondents, 54% of whom reported an assessment of their goals and priorities (AGP) by healthcare professionals. In logistic regression model 1, which was used to analyse the entire population, the determinants found to have moderate to large effects on the likelihood of AGP were information exchange on stress, diet or exercise, or both. Country (living in Sweden) and continuity of care (no regular professional or organisation) had moderate to large negative effects on the likelihood of AGP. In model 2, which focussed on respondents who experienced continuity of care, country and information exchange on stress and lifestyle were the main determinants of AGP, with comparable odds ratios to model 1. Furthermore, a professional asking questions also increased the likelihood of AGP. Continuity of care and information exchange is associated with a higher probability of AGP, while people living in Sweden are less likely to experience these assessments. Further study is required to determine whether increasing information exchange and professionals asking more questions may improve goal setting with older patients. Key points A patient goal-oriented approach can be beneficial for older patients with chronic conditions or multimorbidity; however, discussing goals with these patients is not a common practice. The likelihood of discussing goals varies by country, occurring most commonly in the USA, and least often in Sweden. Country-level differences in continuity of care and questions asked by a regularly visited professional affect the goal discussion probability. Patient characteristics, including age, have less impact than expected on the likelihood of sharing goals.
Measurement Model Specification Error in LISREL Structural Equation Models.
ERIC Educational Resources Information Center
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpG's, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-cpu hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
Copula based flexible modeling of associations between clustered event times.
Geerdens, Candida; Claeskens, Gerda; Janssen, Paul
2016-07-01
Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
Anatomy of the ATLAS diboson anomaly
NASA Astrophysics Data System (ADS)
Allanach, B. C.; Gripaios, Ben; Sutherland, Dave
2015-09-01
We perform a general analysis of new physics interpretations of the recent ATLAS diboson excesses over standard model expectations in LHC Run I collisions. First, we estimate a likelihood function in terms of the truth signal in the WW, WZ, and ZZ channels, finding that the maximum has zero events in the WZ channel, though the likelihood is sufficiently flat to allow other scenarios. Second, we survey the possible effective field theories containing the standard model plus a new resonance that could explain the data, identifying two possibilities, viz. a vector that is either a left- or right-handed SU(2) triplet. Finally, we compare these models with other experimental data and determine the parameter regions in which they provide a consistent explanation.
Using Qualitative Hazard Analysis to Guide Quantitative Safety Analysis
NASA Technical Reports Server (NTRS)
Shortle, J. F.; Allocco, M.
2005-01-01
Quantitative methods can be beneficial in many types of safety investigations. However, there are many difficulties in using quantitative methods. For example, there may be little relevant data available. This paper proposes a framework for using qualitative hazard analysis to prioritize hazard scenarios most suitable for quantitative analysis. The framework first categorizes hazard scenarios by severity and likelihood. We then propose another metric, "modeling difficulty", that describes the complexity in modeling a given hazard scenario quantitatively. The combined metrics of severity, likelihood, and modeling difficulty help to prioritize hazard scenarios for which quantitative analysis should be applied. We have applied this methodology to proposed concepts of operations for reduced wake separation for airplane operations at closely spaced parallel runways.
NASA Astrophysics Data System (ADS)
Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Ansseau, I.; Anton, G.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; del Pino Rosendo, E.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Edsjö, J.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Fösig, C.-C.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Góra, D.; Grant, D.; Griffith, Z.; Groß, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansen, E.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Jurkovic, M.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Konietz, R.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Krückl, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lu, L.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mandelartz, M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Quinnan, M.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Savage, C.; Schatto, K.; Schimp, M.; Schlunder, P.; Schmidt, T.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schulte, L.; Schumacher, L.; Scott, P.; Seckel, D.; Seunarine, S.; Silverwood, H.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stasik, A.; Steuer, A.; Stezelberger, T.; Stokstad, R. 
G.; Stößl, A.; Ström, R.; Strotjohann, N. L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Tatar, J.; Ter-Antonyan, S.; Terliuk, A.; Te{š}ić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Turcati, A.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zoll, M.
2016-04-01
We present an improved event-level likelihood formalism for including neutrino telescope data in global fits to new physics. We derive limits on spin-dependent dark matter-proton scattering by employing the new formalism in a re-analysis of data from the 79-string IceCube search for dark matter annihilation in the Sun, including explicit energy information for each event. The new analysis excludes a number of models in the weak-scale minimal supersymmetric standard model (MSSM) for the first time. This work is accompanied by the public release of the 79-string IceCube data, as well as an associated computer code for applying the new likelihood to arbitrary dark matter models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
2016-12-01
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB bound for synthetically generated data.
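In this linear, additive white Gaussian noise setting the two quantities discussed above take the familiar forms below (a sketch of the standard result, with d the collected data, A the forward operator and g the ground reflectivities; the notation is ours, not the paper's):

```latex
\hat{\mathbf{g}}_{\mathrm{ML}}
  \;=\; \bigl(\mathbf{A}^{H}\mathbf{A}\bigr)^{-1}\mathbf{A}^{H}\mathbf{d},
\qquad
\operatorname{cov}\bigl(\hat{\mathbf{g}}\bigr)
  \;\succeq\; \sigma^{2}\bigl(\mathbf{A}^{H}\mathbf{A}\bigr)^{-1},
```

where the matched-filter product A^H d corresponds to the correlation/back-projection step, applying (A^H A)^{-1} is the full system inversion, and the right-hand inequality is the Cramer-Rao lower bound on the covariance of any unbiased reflectivity estimator.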
Bayesian Hierarchical Random Effects Models in Forensic Science.
Aitken, Colin G G
2018-01-01
Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley, which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
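To make the likelihood-ratio idea concrete, here is a deliberately simplified numerical sketch: a single-level normal model with invented refractive-index values and spreads, not Lindley's hierarchical random effects model.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

# Hypothetical refractive-index measurements (illustration only): a control
# fragment from a crime-scene window and a fragment recovered from a suspect.
control, recovered = 1.51832, 1.51836
sigma_within = 4e-5                 # assumed within-source measurement spread
mu_pop, sigma_pop = 1.5182, 4e-4    # assumed population of glass sources

# Numerator: density of the recovered value if it shares the control's source
# (both measurements carry within-source error, hence the sqrt(2) factor).
numerator = normal_pdf(recovered, control, sigma_within * sqrt(2))
# Denominator: density of the recovered value under a randomly chosen source.
denominator = normal_pdf(recovered, mu_pop, sigma_pop)
print(f"likelihood ratio ~ {numerator / denominator:.1f}")
```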
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
Besseris, George J
2013-01-01
Data screening is an indispensable phase in initiating the scientific discovery process. Fractional factorial designs offer quick and economical options for engineering highly-dense structured datasets. Maximum information content is harvested when a selected fractional factorial scheme is driven to saturation while data gathering is suppressed to no replication. A novel multi-factorial profiler is presented that allows screening of saturated-unreplicated designs by decomposing the examined response to its constituent contributions. Partial effects are sliced off systematically from the investigated response to form individual contrasts using simple robust measures. By isolating each time the disturbance attributed solely to a single controlling factor, the Wilcoxon-Mann-Whitney rank stochastics are employed to assign significance. We demonstrate that the proposed profiler possesses its own self-checking mechanism for detecting a potential influence due to fluctuations attributed to the remaining unexplainable error. Main benefits of the method are: 1) easy to grasp, 2) well-explained test-power properties, 3) distribution-free, 4) sparsity-free, 5) calibration-free, 6) simulation-free, 7) easy to implement, and 8) expanded usability to any type and size of multi-factorial screening designs. The method is elucidated with a benchmarked profiling effort for a water filtration process.
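A minimal sketch of the rank-based contrast idea on a saturated, unreplicated design; the design matrix and response values are invented, and the profiler's self-checking mechanism for unexplainable error is not reproduced.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical unreplicated 2^(7-4) screening design (8 runs, 7 factors)
# with a measured response for each run.
design = np.array([
    [-1, -1, -1,  1,  1,  1, -1],
    [ 1, -1, -1, -1, -1,  1,  1],
    [-1,  1, -1, -1,  1, -1,  1],
    [ 1,  1, -1,  1, -1, -1, -1],
    [-1, -1,  1,  1, -1, -1,  1],
    [ 1, -1,  1, -1,  1, -1, -1],
    [-1,  1,  1, -1, -1,  1, -1],
    [ 1,  1,  1,  1,  1,  1,  1],
])
response = np.array([14.1, 16.8, 13.9, 17.2, 14.5, 16.3, 13.7, 18.0])

# For each factor, compare the responses at its low and high settings with
# the Wilcoxon-Mann-Whitney rank test.
for j in range(design.shape[1]):
    low = response[design[:, j] == -1]
    high = response[design[:, j] == 1]
    stat, p = mannwhitneyu(high, low, alternative="two-sided")
    print(f"factor {j + 1}: U={stat:.1f}, p={p:.3f}")
```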
A Distribution-Free Multi-Factorial Profiler for Harvesting Information from High-Density Screenings
Besseris, George J.
2013-01-01
Data screening is an indispensable phase in initiating the scientific discovery process. Fractional factorial designs offer quick and economical options for engineering highly-dense structured datasets. Maximum information content is harvested when a selected fractional factorial scheme is driven to saturation while data gathering is suppressed to no replication. A novel multi-factorial profiler is presented that allows screening of saturated-unreplicated designs by decomposing the examined response to its constituent contributions. Partial effects are sliced off systematically from the investigated response to form individual contrasts using simple robust measures. By isolating each time the disturbance attributed solely to a single controlling factor, the Wilcoxon-Mann-Whitney rank stochastics are employed to assign significance. We demonstrate that the proposed profiler possesses its own self-checking mechanism for detecting a potential influence due to fluctuations attributed to the remaining unexplainable error. Main benefits of the method are: 1) easy to grasp, 2) well-explained test-power properties, 3) distribution-free, 4) sparsity-free, 5) calibration-free, 6) simulation-free, 7) easy to implement, and 8) expanded usability to any type and size of multi-factorial screening designs. The method is elucidated with a benchmarked profiling effort for a water filtration process. PMID:24009744
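To make the screening step concrete, the sketch below applies the Wilcoxon-Mann-Whitney test factor by factor to a saturated, unreplicated 2^(7-4) fractional factorial. It is only loosely inspired by the profiler described above: the contrast-slicing decomposition and the self-checking error term are omitted, and the design and response values are hypothetical.

import numpy as np
from scipy.stats import mannwhitneyu

# Saturated, unreplicated 2^(7-4) fractional factorial: 7 factors A..G in 8 runs
# (base factors A, B, C with generators D=AB, E=AC, F=BC, G=ABC).
design = np.array([
    [-1, -1, -1,  1,  1,  1, -1],
    [ 1, -1, -1, -1, -1,  1,  1],
    [-1,  1, -1, -1,  1, -1,  1],
    [ 1,  1, -1,  1, -1, -1, -1],
    [-1, -1,  1,  1, -1, -1,  1],
    [ 1, -1,  1, -1,  1, -1, -1],
    [-1,  1,  1, -1, -1,  1, -1],
    [ 1,  1,  1,  1,  1,  1,  1],
])
response = np.array([12.1, 15.3, 11.8, 16.0, 12.5, 15.1, 11.9, 16.4])  # illustrative data

# Screen each factor with the Wilcoxon-Mann-Whitney test on the responses split
# by that factor's level (the rank-based significance step described above).
for j, name in enumerate("ABCDEFG"):
    high = response[design[:, j] == 1]
    low = response[design[:, j] == -1]
    stat, p = mannwhitneyu(high, low, alternative="two-sided")
    print(f"factor {name}: U = {stat:.1f}, p = {p:.3f}")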
Effect assessment in work environment interventions: a methodological reflection.
Neumann, W P; Eklund, J; Hansson, B; Lindbeck, L
2010-01-01
This paper addresses a number of issues for work environment intervention (WEI) researchers in light of the mixed results reported in the literature. If researchers emphasise study quality over intervention quality, reviews that exclude case studies of high-quality, multifactorial interventions may be vulnerable to 'quality criteria selection bias'. Learning from 'failed' interventions is inhibited by both publication bias and reporting lengths that limit information on relevant contextual and implementation factors. The authors argue for the need to develop evaluation approaches consistent with the complexity of multifactorial WEIs that: a) are owned by and aimed at the whole organisation; and b) include intervention in early design stages where potential impact is highest. Context variety, complexity and instability in and around organisations suggest that attention might usefully shift from generalisable 'proof of effectiveness' to a more nuanced identification of intervention elements and the situations in which they are more likely to work as intended. STATEMENT OF RELEVANCE: This paper considers ergonomics interventions from perspectives of what constitutes quality and 'proof'. It points to limitations of traditional experimental intervention designs and argues that the complexity of organisational change, and the need for multifactorial interventions that reach deep into work processes for greater impact, should be recognised.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.
The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies; a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and the identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.
Link, William A; Barker, Richard J
2005-03-01
We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
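The core ingredients named above can be sketched as follows, assuming constant survival and capture probabilities and omitting both the hierarchical bivariate random effects and the authors' candidate-generation scheme: the CJS likelihood conditional on first capture, plus a plain random-walk Metropolis-Hastings update on the survival rate. All data below are simulated and purely illustrative.

import numpy as np

rng = np.random.default_rng(2)

def cjs_loglik(histories, phi, p):
    """CJS log-likelihood, conditional on first capture, for constant phi and p.
    histories: 0/1 array of shape (animals, occasions)."""
    T = histories.shape[1]
    # chi[t] = Pr(never seen after occasion t | alive at occasion t)
    chi = np.ones(T)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        seen = np.flatnonzero(h)
        if seen.size == 0:
            continue
        f, l = seen[0], seen[-1]
        for t in range(f + 1, l + 1):            # known alive: survive each interval,
            ll += np.log(phi)                    # then captured or missed
            ll += np.log(p) if h[t] else np.log(1 - p)
        ll += np.log(chi[l])                     # never seen again after l
    return ll

# Simulate capture histories under known (hypothetical) parameter values.
phi_true, p_true, n, T = 0.8, 0.5, 200, 6
hist = np.zeros((n, T), dtype=int)
for i in range(n):
    alive, hist[i, 0] = True, 1                  # all animals released at occasion 0
    for t in range(1, T):
        alive = alive and (rng.random() < phi_true)
        hist[i, t] = int(alive and (rng.random() < p_true))

# Random-walk Metropolis-Hastings on logit(phi), with p fixed for brevity and a
# flat prior on the logit scale (so the acceptance ratio is a likelihood ratio).
expit = lambda u: 1.0 / (1.0 + np.exp(-u))
cur, cur_ll, draws = 0.0, cjs_loglik(hist, expit(0.0), p_true), []
for _ in range(1500):
    prop = cur + 0.1 * rng.standard_normal()
    prop_ll = cjs_loglik(hist, expit(prop), p_true)
    if np.log(rng.random()) < prop_ll - cur_ll:
        cur, cur_ll = prop, prop_ll
    draws.append(expit(cur))
print("posterior mean of phi:", round(float(np.mean(draws[500:])), 3))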