Sample records for elaboration likelihood model

  1. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  2. The Elaboration Likelihood Model: Implications for the Practice of School Psychology.

    ERIC Educational Resources Information Center

    Petty, Richard E.; Heesacker, Martin; Hughes, Jan N.

    1997-01-01

    Reviews a contemporary theory of attitude change, the Elaboration Likelihood Model (ELM) of persuasion, and addresses its relevance to school psychology. Claims that a key postulate of ELM is that attitude change results from thoughtful (central route) or nonthoughtful (peripheral route) processes. Illustrations of ELM's utility for school…

  3. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    ERIC Educational Resources Information Center

    Heesacker, Martin

    1986-01-01

    Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…

  4. Application of the Elaboration Likelihood Model of Attitude Change to Assertion Training.

    ERIC Educational Resources Information Center

    Ernst, John M.; Heesacker, Martin

    1993-01-01

    College students (n=113) participated in study comparing effects of elaboration likelihood model (ELM) based assertion workshop with those of typical assertion workshop. ELM-based workshop was significantly better at producing favorable attitude change, greater intention to act assertively, and more favorable evaluations of workshop content.…

  5. Source and Message Factors in Persuasion: A Reply to Stiff's Critique of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Petty, Richard E.; And Others

    1987-01-01

    Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…

  6. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    ERIC Educational Resources Information Center

    Heesacker, Martin

    The importance of high levels of involvement in counseling has been related to theories of interpersonal influence. To examine differing effects of counselor credibility as a function of how personally involved counselors are, the Elaboration Likelihood Model (ELM) of attitude change was applied to counseling pretreatment. Students (N=256) were…

  7. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    ERIC Educational Resources Information Center

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  8. The Elaboration Likelihood Model and Proxemic Violations as Peripheral Cues to Information Processing.

    ERIC Educational Resources Information Center

    Eaves, Michael

    This paper provides a literature review of the elaboration likelihood model (ELM) as applied in persuasion. Specifically, the paper addresses distraction with regard to effects on persuasion. In addition, the application of proxemic violations as peripheral cues in message processing is discussed. Finally, the paper proposes to shed new light on…

  9. Influencing Attitudes Regarding Special Class Placement Using a Psychoeducational Report: An Investigation of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Andrews, Lester W.; Gutkin, Terry B.

    1994-01-01

    Investigates variables drawn from the Elaboration Likelihood Model (ELM) that might be manipulated to enhance the persuasiveness of a psychoeducational report. Results showed teachers in training were more persuaded by reports with high message quality. Findings are discussed in terms of the ELM and professional school psychology practice. (RJM)

  10. Examining Sex Differences in Altering Attitudes About Rape: A Test of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Heppner, Mary J.; And Others

    1995-01-01

    Intervention sought to improve first-year college students' attitudes about rape. Used the Elaboration Likelihood Model to examine men's and women's attitude change process. Found numerous sex differences in ways men and women experienced and changed during and after intervention. Women's attitude showed more lasting change while men's was more…

  11. Applying the elaboration likelihood model of persuasion to a videotape-based eating disorders primary prevention program for adolescent girls.

    PubMed

    Withers, Giselle F; Wertheim, Eleanor H

    2004-01-01

    This study applied principles from the Elaboration Likelihood Model of Persuasion to the prevention of disordered eating. Early adolescent girls watched either a preventive videotape only (n=114) or video plus post-video activity (verbal discussion, written exercises, or control discussion) (n=187); or had no intervention (n=104). Significantly more body image and knowledge improvements occurred at post-video and follow-up in the intervention groups compared to no intervention. There were no outcome differences among intervention groups, or between girls with high or low elaboration likelihood. Further research is needed on integrating the videotape into a broader prevention package.

  12. The elaboration likelihood model and communication about food risks.

    PubMed

    Frewer, L J; Howard, C; Hedderley, D; Shepherd, R

    1997-12-01

    Factors such as hazard type and source credibility have been identified as important in the establishment of effective strategies for risk communication. The elaboration likelihood model was adapted to investigate the potential impact of hazard type, information source, and persuasive content of information on individual engagement in elaborative, or thoughtful, cognitions about risk messages. One hundred sixty respondents were allocated to one of eight experimental groups, and the effects of source credibility, persuasive content of information and hazard type were systematically varied. The impact of the different factors on beliefs about the information and on elaborative processing was examined. Low credibility was particularly important in reducing risk perceptions, although persuasive content and hazard type were also influential in determining whether elaborative processing occurred.

  13. An Elaboration Likelihood Model Based Longitudinal Analysis of Attitude Change during the Process of IT Acceptance via Education Program

    ERIC Educational Resources Information Center

    Lee, Woong-Kyu

    2012-01-01

    The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…

  14. Dissociative effects of true and false recall as a function of different encoding strategies.

    PubMed

    Goodwin, Kerri A

    2007-01-01

    Goodwin, Meissner, and Ericsson (2001) proposed a path model in which elaborative encoding predicted the likelihood of verbalisation of critical, nonpresented words at encoding, which in turn predicted the likelihood of false recall. The present study tested this model of false recall experimentally with a manipulation of encoding strategy and the implementation of the process-tracing technique of protocol analysis. Findings indicated that elaborative encoding led to more verbalisations of critical items during encoding than rote rehearsal of list items, but false recall rates were reduced under elaboration conditions (Experiment 2). Interestingly, false recall was more likely to occur when items were verbalised during encoding than not verbalised (Experiment 1), and participants tended to reinstate their encoding strategies during recall, particularly after elaborative encoding (Experiment 1). Theoretical implications for the interplay of encoding and retrieval processes of false recall are discussed.

  15. [Effects of attitude formation, persuasive message, and source expertise on attitude change: an examination based on the Elaboration Likelihood Model and the Attitude Formation Theory].

    PubMed

    Nakamura, M; Saito, K; Wakabayashi, M

    1990-04-01

    The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. Then, the experimenter presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.

  16. Formulating the Rasch Differential Item Functioning Model under the Marginal Maximum Likelihood Estimation Context and Its Comparison with Mantel-Haenszel Procedure in Short Test and Small Sample Conditions

    ERIC Educational Resources Information Center

    Paek, Insu; Wilson, Mark

    2011-01-01

    This study elaborates the Rasch differential item functioning (DIF) model formulation under the marginal maximum likelihood estimation context. Also, the Rasch DIF model performance was examined and compared with the Mantel-Haenszel (MH) procedure in small sample and short test length conditions through simulations. The theoretically known…
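
    As a rough illustration of the kind of formulation described above, the sketch below writes a Rasch model with a uniform DIF effect and the marginal likelihood obtained by integrating ability out over an assumed normal distribution; the exact parameterization used by Paek and Wilson may differ.

    ```latex
    % Sketch (assumed generic form) of a Rasch model with a uniform DIF effect
    % \delta_i for item i and group indicator g_p (0 = reference, 1 = focal).
    \begin{align*}
      P(X_{pi}=1 \mid \theta_p) &=
        \frac{\exp(\theta_p - \beta_i - \delta_i g_p)}
             {1 + \exp(\theta_p - \beta_i - \delta_i g_p)},\\[4pt]
      % Marginal maximum likelihood: ability is integrated out over a normal density.
      L(\boldsymbol{\beta}, \boldsymbol{\delta}) &=
        \prod_{p=1}^{P} \int \prod_{i=1}^{I}
          P(X_{pi}=x_{pi} \mid \theta)\, \phi(\theta)\, d\theta .
    \end{align*}
    ```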

  17. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    PubMed

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on purchase intention scores for consumers with low rather than high product involvement (n = 40). Undergraduates classified as either high or low in involvement with M&Ms (ns = 20 and 20) examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that the model has implications beyond advertising effects and into packaging effects.

  18. Cognitive Processing of Fear-Arousing Message Content.

    ERIC Educational Resources Information Center

    Hale, Jerold L.; And Others

    1995-01-01

    Investigates two models (the Elaboration Likelihood Model and the Heuristic-Systematic Model) of the cognitive processing of fear-arousing messages in undergraduate students. Finds in three of the four conditions (low fear, high fear, high trait anxiety) that cognitive processing appears to be antagonistic. Finds some evidence of concurrent…

  19. Designing environmental campaigns by using agent-based simulations: strategies for changing environmental attitudes.

    PubMed

    Mosler, Hans-Joachim; Martens, Thomas

    2008-09-01

    Agent-based computer simulation was used to create artificial communities in which each individual was constructed according to the principles of the elaboration likelihood model of Petty and Cacioppo [1986. The elaboration likelihood model of persuasion. In: Berkowitz, L. (Ed.), Advances in Experimental Social Psychology. Academic Press, New York, NY, pp. 123-205]. Campaigning strategies and community characteristics were varied systematically to understand and test their impact on attitudes towards environmental protection. The results show that strong arguments influence a green (environmentally concerned) population with many contacts most effectively, while peripheral cues have the greatest impact on a non-green population with fewer contacts. Overall, deeper information scrutiny increases the impact of strong arguments but is especially important for convincing green populations. Campaigns involving person-to-person communication are superior to mass-media campaigns because they can be adapted to recipients' characteristics.
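
    The agent-level mechanism described above lends itself to a compact simulation. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): each agent blends central-route and peripheral-route influence according to an elaboration likelihood derived from its environmental concern and contact frequency.

    ```python
    import random

    class Agent:
        def __init__(self, green, contacts):
            self.attitude = random.uniform(-1, 1)   # initial attitude toward protection
            self.green = green                      # environmental concern (0..1)
            self.contacts = contacts                # contacts per campaign step

        def receive(self, argument_strength, peripheral_cue):
            # ELM-style weighting: motivated (green) agents with more contacts
            # scrutinize arguments (central route); others rely on peripheral cues.
            elaboration = min(1.0, 0.5 * self.green + 0.05 * self.contacts)
            central = elaboration * argument_strength
            peripheral = (1.0 - elaboration) * peripheral_cue
            self.attitude += 0.1 * (central + peripheral - self.attitude)

    def run_campaign(n_agents, green, contacts, argument_strength, peripheral_cue, steps=50):
        agents = [Agent(green, contacts) for _ in range(n_agents)]
        for _ in range(steps):
            for a in agents:
                a.receive(argument_strength, peripheral_cue)
        return sum(a.attitude for a in agents) / n_agents

    if __name__ == "__main__":
        random.seed(0)
        print("green population, strong arguments:  ",
              run_campaign(500, green=0.9, contacts=8, argument_strength=0.8, peripheral_cue=0.2))
        print("non-green population, peripheral cues:",
              run_campaign(500, green=0.2, contacts=2, argument_strength=0.2, peripheral_cue=0.8))
    ```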

  20. Estimation of Model's Marginal Likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
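
    For readers unfamiliar with these estimators, the sketch below illustrates, on a toy conjugate model with a known analytic answer (not the paper's groundwater model), how the arithmetic mean and harmonic mean estimators approximate a marginal likelihood from prior and posterior samples, and how log marginal likelihoods translate into BMA weights.

    ```python
    import numpy as np
    from scipy import stats

    # Toy setup (assumed for illustration): y_i ~ N(theta, 1), prior theta ~ N(0, 1),
    # so the exact marginal likelihood is available and the Monte Carlo estimators
    # can be checked against it.
    rng = np.random.default_rng(0)
    y = rng.normal(0.5, 1.0, size=20)
    n = len(y)

    def log_like(theta):
        return stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

    # Exact log marginal likelihood: jointly, y ~ N(0, I + 11^T).
    exact = stats.multivariate_normal.logpdf(y, mean=np.zeros(n), cov=np.eye(n) + 1.0)

    # Arithmetic mean estimator (AME): average the likelihood over prior draws.
    ll_prior = log_like(rng.normal(0.0, 1.0, size=100_000))
    ame = ll_prior.max() + np.log(np.mean(np.exp(ll_prior - ll_prior.max())))

    # Harmonic mean estimator (HME): harmonic mean of the likelihood over posterior draws.
    post_var = 1.0 / (n + 1.0)
    ll_post = log_like(rng.normal(post_var * y.sum(), np.sqrt(post_var), size=100_000))
    hme = ll_post.min() - np.log(np.mean(np.exp(-(ll_post - ll_post.min()))))

    print(f"log marginal likelihood  exact {exact:.3f}  AME {ame:.3f}  HME {hme:.3f}")

    # BMA weights: posterior model probability is proportional to prior * marginal likelihood.
    log_z = np.array([exact, exact - 2.0])          # hypothetical second, weaker model
    w = np.exp(log_z - log_z.max()); w /= w.sum()
    print("posterior model weights:", w)
    ```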

  1. Positive-Themed Suicide Prevention Messages Delivered by Adolescent Peer Leaders: Proximal Impact on Classmates' Coping Attitudes and Perceptions of Adult Support.

    PubMed

    Petrova, Mariya; Wyman, Peter A; Schmeelk-Cone, Karen; Pisani, Anthony R

    2015-12-01

    Developing science-based communication guidance and positive-themed messages for suicide prevention are important priorities. Drawing on social learning and elaboration likelihood models, we designed and tested two positive-focused presentations by high school peer leaders delivered in the context of a suicide prevention program (Sources of Strength). Thirty-six classrooms in four schools (N = 706 students) were randomized to (1) peer leader modeling of healthy coping, (2) peer leader modeling plus audience involvement to identify trusted adults, or (3) control condition. Students' attitudes and norms were assessed by immediate post-only assessments. Exposure to either presentation enhanced positive coping attitudes and perceptions of adult support. Students who reported suicide ideation in the past 12 months benefited more than nonsuicidal students. Beyond modeling alone, audience involvement modestly enhanced expectations of adult support, congruent with the elaboration likelihood model. Positive peer modeling is a promising alternative to communications focused on negative consequences and directives and may enhance social-interpersonal factors linked to reduced suicidal behaviors. © 2015 The American Association of Suicidology.

  2. Positive-Themed Suicide Prevention Messages Delivered by Adolescent Peer Leaders: Proximal Impact on Classmates’ Coping Attitudes and Perceptions of Adult Support

    PubMed Central

    Petrova, Mariya; Wyman, Peter A.; Schmeelk-Cone, Karen; Pisani, Anthony R.

    2015-01-01

    Developing science-based communication guidance and positive-themed messages for suicide prevention are important priorities. Drawing on social learning and elaboration likelihood models, we designed and tested two positive-focused presentations by high school peer leaders delivered in the context of a suicide prevention program (Sources of Strength). Thirty six classrooms in four schools (N=706 students) were randomized to: (a) peer leader modeling of healthy coping, (b) peer leader modeling plus audience involvement to identify trusted adults, or (c) control condition. Students’ attitudes and norms were assessed by immediate post-only assessments. Exposure to either presentation enhanced positive coping attitudes and perceptions of adult support. Students who reported suicide ideation in the past 12 months benefited more than non-suicidal students. Beyond modeling alone, audience involvement modestly enhanced expectations of adult support, congruent with the elaboration likelihood model. Positive peer modeling is a promising alternative to communications focused on negative consequences and directives and may enhance social-interpersonal factors linked to reduced suicidal behaviors. PMID:25692382

  3. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution proceeds iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE, but M-H is not an efficient sampler for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
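
    The sketch below shows the basic nested sampling loop on a one-dimensional toy problem with a known evidence, using a plain random-walk constrained step as the local sampler; it is an illustrative assumption-laden example of the idea only, not the proposed NSE-DREAMzs scheme or a surrogate-assisted groundwater model.

    ```python
    import numpy as np

    # Toy model: 1-D Gaussian likelihood, uniform prior on [-10, 10], so the true
    # evidence Z = integral L(x) * prior(x) dx is approximately 1/20.
    rng = np.random.default_rng(1)
    LO, HI = -10.0, 10.0

    def log_like(x):
        return -0.5 * ((x - 2.0) / 0.5) ** 2 - 0.5 * np.log(2 * np.pi * 0.5 ** 2)

    def constrained_step(x, log_l_min, n_steps=20, scale=0.5):
        """Local sampling: random walk restricted to L(x) > L_min (the NSE constraint)."""
        for _ in range(n_steps):
            prop = x + rng.normal(0, scale)
            if LO <= prop <= HI and log_like(prop) > log_l_min:
                x = prop
        return x

    n_live, n_iter = 200, 2000
    live = rng.uniform(LO, HI, n_live)
    live_ll = log_like(live)
    log_z, log_x_prev = -np.inf, 0.0                 # prior-volume bookkeeping

    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        log_x = -i / n_live                          # E[log X_i] shrinks geometrically
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x)) + live_ll[worst]
        log_z = np.logaddexp(log_z, log_w)
        # Replace the worst live point by a new draw above the likelihood constraint.
        seed = live[rng.integers(n_live)]
        live[worst] = constrained_step(seed, live_ll[worst])
        live_ll[worst] = log_like(live[worst])
        log_x_prev = log_x

    print(f"nested sampling log Z = {log_z:.3f}, analytic = {np.log(1.0 / (HI - LO)):.3f}")
    ```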

  4. The Effect of Persuasion on the Utilization of Program Evaluation Information: A Preliminary Study.

    ERIC Educational Resources Information Center

    Eason, Sandra H.; Thompson, Bruce

    The utilization of program evaluation may be made more effective by applying contemporary persuasion theory. The Elaboration Likelihood Model--a model of cognitive processing, ability, and motivation--was used in this study to test the persuasive effects of source credibility and involvement on message acceptance of evaluation…

  5. The Role of Persuasive Arguments in Changing Affirmative Action Attitudes and Expressed Behavior in Higher Education

    ERIC Educational Resources Information Center

    White, Fiona A.; Charles, Margaret A.; Nelson, Jacqueline K.

    2008-01-01

    The research reported in this article examined the conditions under which persuasive arguments are most effective in changing university students' attitudes and expressed behavior with respect to affirmative action (AA). The conceptual framework was a model that integrated the theory of reasoned action and the elaboration likelihood model of…

  6. An Examination of Behavioral Responses to Stereotypical Deceptive Displays.

    ERIC Educational Resources Information Center

    Huddleston, Bill M.

    A study investigated whether receivers who detect senders behaving deceitfully will automatically become more resistant to the message being presented. By developing predictions derived from the Elaboration Likelihood Model (ELM), the study hypothesized that only noninvolved receivers would respond negatively to deceptive nonverbal cues in a…

  7. Changing the Sexual Aggression-Supportive Attitudes of Men: A Psychoeducational Intervention.

    ERIC Educational Resources Information Center

    Gilbert, Barbara J.; And Others

    1991-01-01

    Assessed psychoeducational intervention designed to change attitudes of men found to be associated with sexual aggression toward women. College men receiving elaboration likelihood model-based intervention showed significantly more attitude change than did control group. One month later, in unrelated naturalistic context, intervention subjects…

  8. Affect and Persuasion: Effects on Motivation for Information Processing.

    ERIC Educational Resources Information Center

    Leach, Mark M; Stoltenberg, Cal D.

    The relationship between mood and information processing, particularly when reviewing the Elaboration Likelihood Model of persuasion, lacks conclusive evidence. This study was designed to investigate the hypothesis that information processing would be greater under mood-topic congruence than under mood-topic incongruence. Undergraduate students (N=216)…

  9. The Rocky Road to Change: Implications for Substance Abuse Programs on College Campuses.

    ERIC Educational Resources Information Center

    Scott, Cynthia G.; Ambroson, DeAnn L.

    1994-01-01

    Examines college substance abuse prevention and intervention programs in the framework of the elaboration likelihood model. Discusses the role of persuasion and recommends careful analysis of the relevance, construction, and delivery of messages about substance use and subsequent program evaluation. Recommendations for increasing program…

  10. Source, Message, and Recipient Factors in Counseling and Psychotherapy.

    ERIC Educational Resources Information Center

    Stoltenberg, Cal D.; McNeill, Brian W.

    This paper reviews recent social psychology studies on the influence of message characteristics, issue involvement, and the subject's cognitive response on perceptions of the communicator. The Elaboration Likelihood Model (ELM) is used as a framework to discuss various approaches to persuasion, particularly central and peripheral routes to…

  11. Communication and Persuasion: Factors Influencing a Patient's Behavior.

    ERIC Educational Resources Information Center

    Logan, Henrietta L.

    1991-01-01

    Three elements of persuasion (source, message, and audience) are discussed, and a paradigm for persuasion, the Elaboration Likelihood Model, which unifies many existing attitude theories, is described. Selected concepts and research on attitudes and persuasion are also examined as a context for teaching preventive behaviors and strategies in…

  12. Information processing versus social cognitive mediators of weight loss in a podcast-delivered health intervention.

    PubMed

    Ko, Linda K; Turner-McGrievy, Gabrielle M; Campbell, Marci K

    2014-04-01

    Podcasting is an emerging technology, and previous interventions have shown promising results using theory-based podcasts for weight loss among overweight and obese individuals. This study investigated whether constructs of social cognitive theory and information processing theories (IPTs) mediate the effect of a podcast intervention on weight loss among overweight individuals. Data are from Pounds off Digitally, a study testing the efficacy of two weight loss podcast interventions (control podcast and theory-based podcast). Path models were constructed (n = 66). The IPTs (elaboration likelihood model, information control theory, and cognitive load theory) mediated the effect of a theory-based podcast on weight loss. The intervention was significantly associated with all IPTs. Information control theory and cognitive load theory were related to elaboration, and elaboration was associated with weight loss. Social cognitive theory constructs did not mediate weight loss. Future podcast interventions grounded in theory may be effective in promoting weight loss.

  13. Elaboration Likelihood and the Counseling Process: The Role of Affect.

    ERIC Educational Resources Information Center

    Stoltenberg, Cal D.; And Others

    The role of affect in counseling has been examined from several orientations. The depth of processing model views the efficiency of information processing as a function of the extent to which the information is processed. The notion of cognitive processing capacity states that processing information at deeper levels engages more of one's limited…

  14. Increasing Positive Perceptions of Counseling: The Importance of Repeated Exposures

    ERIC Educational Resources Information Center

    Kaplan, Scott A.; Vogel, David L.; Gentile, Douglas A.; Wade, Nathaniel G.

    2012-01-01

    This study assesses the effectiveness of repeated exposures to a video intervention based on the Elaboration Likelihood Model. The video was designed to increase help-seeking attitudes and perceptions of peer norms and to decrease the stigma associated with seeking counseling. Participants were 290 undergraduates who were randomly assigned to a…

  15. An Explanation of the Relationship between Instructor Humor and Student Learning: Instructional Humor Processing Theory

    ERIC Educational Resources Information Center

    Wanzer, Melissa B.; Frymier, Ann B.; Irwin, Jeffrey

    2010-01-01

    This paper proposes the Instructional Humor Processing Theory (IHPT), a theory that incorporates elements of incongruity-resolution theory, disposition theory, and the elaboration likelihood model (ELM) of persuasion. IHPT is proposed and offered as an explanation for why some types of instructor-generated humor result in increased student…

  16. Adolescent HIV Prevention: An Application of the Elaboration Likelihood Model.

    ERIC Educational Resources Information Center

    Metzler, April E.; Weiskotten, David; Morgen, Keith J.

    Ninth grade students (n=298) participated in a study to examine the influence of source credibility, message quality, and personal relevance on HIV prevention message efficacy. A pilot study with adolescent focus groups created the high and low quality messages, as well as the high (HIV+) and low (worried parent) credibility sources. Participants…

  17. Evaluating Initial Teacher Education Programmes: Perspectives from the Republic of Ireland

    ERIC Educational Resources Information Center

    Clarke, Marie; Lodge, Anne; Shevlin, Michael

    2012-01-01

    Research studies in teacher education have focussed on the outcomes of preparatory programmes. Less attention has been paid to the processes through which professional learning is acquired. This article argues that the study of attitudes and persuasion is very important in teacher education. The elaboration likelihood model (ELM) of persuasion…

  18. Polarization and Persuasion: Integrating the Elaboration Likelihood Model with Explanations of Group Polarization.

    ERIC Educational Resources Information Center

    Mongeau, Paul A.

    Interest has recently focused on group polarization as a function of attitude processes. Several recent reviewers have challenged polarization researchers to integrate explanations of polarization with existing theories of attitude change. This review suggests that there exists a clear similarity between the social comparison and persuasive…

  19. Race of source effects in the elaboration likelihood model.

    PubMed

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  20. Scientific Knowledge and Attitude Change: The Impact of a Citizen Science Project. Research Report

    ERIC Educational Resources Information Center

    Brossard, Dominique; Lewenstein, Bruce; Bonney, Rick

    2005-01-01

    This paper discusses the evaluation of an informal science education project, The Birdhouse Network (TBN) of the Cornell Laboratory of Ornithology. The Elaboration Likelihood Model and the theory of Experiential Education were used as frameworks to analyse the impact of TBN on participants' attitudes toward science and the environment, on their…

  1. Understanding Attitude Change in Developing Effective Substance Abuse Prevention Programs for Adolescents.

    ERIC Educational Resources Information Center

    Scott, Cynthia G.

    1996-01-01

    Alcohol and drug use may be a significant part of the adolescent high school experience. Programs should be based on an understanding of attitudes and patterns of use, and how change occurs. The Elaboration Likelihood Model of Persuasion is a framework with which to examine attitude change and provide a base for building sound drug prevention…

  2. Is the Receptivity of Substance Abuse Prevention Programming Affected by Students' Perceptions of the Instructor?

    ERIC Educational Resources Information Center

    Stephens, Peggy C.; Sloboda, Zili; Grey, Scott; Stephens, Richard; Hammond, Augustine; Hawthorne, Richard; Teasdale, Brent; Williams, Joseph

    2009-01-01

    Drawing on the elaboration likelihood model of persuasive communication, the authors examine the impact of the perceptions of the instructor or source on students' receptivity to a new substance abuse prevention curriculum. Using survey data from a cohort of students participating in the Adolescent Substance Abuse Prevention Study, the authors use…

  3. Explaining the Effects of Narrative in an Entertainment Television Program: Overcoming Resistance to Persuasion

    ERIC Educational Resources Information Center

    Moyer-Guse, Emily; Nabi, Robin L.

    2010-01-01

    Research has examined the ability of entertainment-education (E-E) programs to influence behavior across a variety of health and social issues. However, less is known about the underlying mechanisms that account for these effects. In keeping with the extended elaboration likelihood model (E-ELM) and the entertainment overcoming resistance model…

  4. Information Processing Versus Social Cognitive Mediators of Weight Loss in a Podcast-Delivered Health Intervention

    PubMed Central

    Ko, Linda K.; Turner-McGrievy, Gabrielle; Campbell, Marci K.

    2016-01-01

    Podcasting is an emerging technology, and previous interventions have shown promising results using theory-based podcast for weight loss among overweight and obese individuals. This study investigated whether constructs of social cognitive theory and information processing theories (IPTs) mediate the effect of a podcast intervention on weight loss among overweight individuals. Data are from Pounds off Digitally, a study testing the efficacy of two weight loss podcast interventions (control podcast and theory-based podcast). Path models were constructed (n = 66). The IPTs—elaboration likelihood model, information control theory, and cognitive load theory—mediated the effect of a theory-based podcast on weight loss. The intervention was significantly associated with all IPTs. Information control theory and cognitive load theory were related to elaboration, and elaboration was associated with weight loss. Social cognitive theory constructs did not mediate weight loss. Future podcast interventions grounded in theory may be effective in promoting weight loss. PMID:24082027

  5. Hispanic-American Students' Attitudes toward Enrolling in High School Chemistry: A Study of Planned Behavior and Belief-Based Change.

    ERIC Educational Resources Information Center

    Crawley, Frank E.; Koballa, Thomas R., Jr.

    The study sought to: (1) identify the determinants that motivate Hispanic-American students to enroll in high school chemistry; and (2) determine if providing belief-based information to students and their parents/guardians increases chemistry registration. The Theory of Planned Behavior (TPB) and Elaboration Likelihood Model (ELM) guided the…

  6. Media Literacy and Attitude Change: Assessing the Effectiveness of Media Literacy Training on Children's Responses to Persuasive Messages within the ELM.

    ERIC Educational Resources Information Center

    Yates, Bradford L.

    This study adds to the small but growing body of literature that examines the effectiveness of media literacy training on children's responses to persuasive messages. Within the framework of the Elaboration Likelihood Model (ELM) of persuasion, this research investigates whether media literacy training is a moderating variable in the persuasion…

  7. The Effect of Persuasive Communication Strategies on Rural Resident Attitudes Toward Ecosystem Management

    Treesearch

    Michael A. Tarrant; Christine Overdevest; Alan D. Bright; H. Ken Cordell; Donald B.K. English

    1997-01-01

    This study examined ways of generating favorable public attitudes toward ecosystem management (EM). Five hundred rural residents of the Chattooga River Basin (CRB) participated in a telephone survey. A recent Forest Service message on EM was compared with four messages developed using the elaboration likelihood model (ELM) and a control (no message) group in their...

  8. Evaluation of smoking prevention television messages based on the elaboration likelihood model

    PubMed Central

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from the ELM were conducted in classroom settings among a diverse sample of non-smoking middle school students in three states (n = 1771). Students categorized as likely to have higher involvement in a decision to initiate cigarette smoking reported relatively higher ratings on a cognitive processing indicator for messages focused on factual arguments about negative consequences of smoking than for messages with fewer or no direct arguments. Message appeal ratings did not show greater preference for this message type among higher involved versus lower involved students. Ratings from students reporting lower academic achievement suggested difficulty processing factual information presented in these messages. The ELM may provide a useful strategy for reaching adolescents at risk for smoking initiation, but particular attention should be focused on lower academic achievers to ensure that messages are appropriate for them. This approach should be explored further before similar strategies could be recommended for large-scale implementation. PMID:21885672

  9. Evaluation of smoking prevention television messages based on the elaboration likelihood model.

    PubMed

    Flynn, Brian S; Worden, John K; Bunn, Janice Yanushka; Connolly, Scott W; Dorwaldt, Anne L

    2011-12-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from the ELM were conducted in classroom settings among a diverse sample of non-smoking middle school students in three states (n = 1771). Students categorized as likely to have higher involvement in a decision to initiate cigarette smoking reported relatively higher ratings on a cognitive processing indicator for messages focused on factual arguments about negative consequences of smoking than for messages with fewer or no direct arguments. Message appeal ratings did not show greater preference for this message type among higher involved versus lower involved students. Ratings from students reporting lower academic achievement suggested difficulty processing factual information presented in these messages. The ELM may provide a useful strategy for reaching adolescents at risk for smoking initiation, but particular attention should be focused on lower academic achievers to ensure that messages are appropriate for them. This approach should be explored further before similar strategies could be recommended for large-scale implementation.

  10. A randomized trial to determine the impact on compliance of a psychophysical peripheral cue based on the Elaboration Likelihood Model.

    PubMed

    Horton, Rachael Jane; Minniti, Antoinette; Mireylees, Stewart; McEntegart, Damian

    2008-11-01

    Non-compliance in clinical studies is a significant issue, but causes remain unclear. Utilizing the Elaboration Likelihood Model of persuasion, this study assessed the effect of the psychophysical peripheral cue 'Interactive Voice Response System (IVRS) call frequency' on compliance. 71 participants were randomized to once daily (OD), twice daily (BID) or three times daily (TID) call schedules over two weeks. Participants completed 30-item cognitive function tests at each call. Compliance was defined as the proportion of expected calls within a narrow window (+/- 30 min around scheduled time), and within a relaxed window (-30 min to +4 h). Data were analyzed by ANOVA and pairwise comparisons adjusted by the Bonferroni correction. There was a relationship between call frequency and compliance. Bonferroni-adjusted pairwise comparisons showed significantly higher compliance (p=0.03) for the BID (51.0%) than the TID (30.3%) schedule within the narrow window; for the extended window, compliance was higher (p=0.04) with OD (59.5%) than TID (38.4%). The IVRS psychophysical peripheral cue of call frequency supported the ELM as a route to persuasion. The results also support the OD strategy for optimal compliance. Models suggest specific indicators to enhance compliance with medication dosing and electronic patient diaries to improve health outcomes and data integrity, respectively.

  11. The Relation of Source Credibility and Message Frequency to Program Evaluation and Self-Confidence of Students in a Job Shadowing Program

    ERIC Educational Resources Information Center

    Linnehan, Frank

    2004-01-01

    Using a pre- and post-test design, this study examined the relation of an adult's credibility and message frequency to the beliefs of female high school students participating in a job-shadowing program. Hypotheses were based on the Elaboration Likelihood Model of attitude formation and change. Findings indicate that credibility of the adult…

  12. Student and Parental Message Effects on Urban Hispanic-American Students' Intention To Enroll in High School Chemistry.

    ERIC Educational Resources Information Center

    Black, Carolyn Bicknell; Crawley, Frank E.

    This research examined the effects of belief-based messages on the intentions of ninth and tenth grade, Hispanic-American students to enroll in their first elective science course at the pre-college level, chemistry. The design of the study was guided by the theory of planned behavior (Ajzen, 1989) and the Elaboration Likelihood Model of…

  13. Impact of Animated Spokes-Characters in Print Direct-to-Consumer Prescription Drug Advertising: An Elaboration Likelihood Model Approach.

    PubMed

    Bhutada, Nilesh S; Rollins, Brent L; Perri, Matthew

    2017-04-01

    A randomized, posttest-only online survey study of adult U.S. consumers determined the advertising effectiveness (attitude toward ad, brand, company, spokes-characters, attention paid to the ad, drug inquiry intention, and perceived product risk) of animated spokes-characters in print direct-to-consumer (DTC) advertising of prescription drugs and the moderating effects of consumers' involvement. Consumers' responses (n = 490) were recorded for animated versus nonanimated (human) spokes-characters in a fictitious DTC ad. Guided by the elaboration likelihood model, data were analyzed using a 2 (spokes-character type: animated/human) × 2 (involvement: high/low) factorial multivariate analysis of covariance (MANCOVA). The MANCOVA indicated significant main effects of spokes-character type and involvement on the dependent variables after controlling for covariate effects. Of the several ad effectiveness variables, consumers only differed on their attitude toward the spokes-characters between the two spokes-character types (specifically, more favorable attitudes toward the human spokes-character). Apart from perceived product risk, high-involvement consumers reacted more favorably to the remaining ad effectiveness variables compared to the low-involvement consumers, and exhibited significantly stronger drug inquiry intentions during their next doctor visit. Further, the moderating effect of consumers' involvement was not observed (nonsignificant interaction effect between spokes-character type and involvement).

  14. Correlated evolution of migration and sexual dichromatism in the New World orioles (icterus).

    PubMed

    Friedman, Nicholas R; Hofmann, Christopher M; Kondo, Beatrice; Omland, Kevin E

    2009-12-01

    The evolution of sexual dimorphism has long been attributed to sexual selection, specifically as it would drive repeated gains of elaborate male traits. In contrast to this pattern, New World oriole species all exhibit elaborate male plumage, and the repeated gains of sexual dichromatism observed in the genus are due to losses of female elaboration. Interestingly, most sexually dichromatic orioles belong to migratory or temperate-breeding clades. Using character scoring and ancestral state reconstructions from two recent studies in Icterus, we tested a hypothesis of correlated evolution between migration and sexual dichromatism. We employed two discrete phylogenetic comparative approaches: the concentrated changes test and Pagel's discrete likelihood test. Our results show that the evolution of these traits is significantly correlated (CCT: uncorrected P < 0.05; ML: LRT = 12.470, P < 0.005). Indeed, our best model of character evolution suggests that gains of sexual dichromatism are 23 times more likely to occur in migratory taxa. This study demonstrates that a life-history trait with no direct relationship with sexual selection has a strong influence on the evolution of sexual dichromatism. We recommend that researchers further investigate the role of selection on elaborate female traits in the evolution of sexual dimorphism.

  15. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has elaborated spatial descriptions of hydrological behavior, but this trend is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of high likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
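
    A bare-bones version of the GLUE weighting step is sketched below on a stand-in linear "model" (all names and thresholds are illustrative assumptions); in the study, the uniform random sampler would be replaced by the genetic algorithm, differential evolution, or shuffled complex evolution searches described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def model(params, x):
        a, b = params                       # stand-in for a hydrological model: y = a*x + b
        return a * x + b

    x_obs = np.linspace(0.0, 10.0, 50)
    y_obs = 2.0 * x_obs + 1.0 + rng.normal(0.0, 0.5, x_obs.size)   # synthetic "observations"

    def glue_likelihood(params):
        """Informal GLUE likelihood: Nash-Sutcliffe efficiency, floored at zero."""
        sim = model(params, x_obs)
        nse = 1.0 - np.sum((sim - y_obs) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
        return max(nse, 0.0)

    # 1) Sample the prior parameter space (uniform here; heuristic search in the study).
    samples = rng.uniform([0.0, -5.0], [5.0, 5.0], size=(20_000, 2))
    L = np.array([glue_likelihood(p) for p in samples])

    # 2) Keep "behavioral" parameter sets above a likelihood threshold and normalize weights.
    behavioral = L > 0.8
    weights = L[behavioral] / L[behavioral].sum()

    # 3) Likelihood-weighted prediction bounds (the GLUE uncertainty estimate) at x = 10.
    preds = np.array([model(p, x_obs)[-1] for p in samples[behavioral]])
    order = np.argsort(preds)
    cdf = np.cumsum(weights[order])
    lower = preds[order][np.searchsorted(cdf, 0.05)]
    upper = preds[order][np.searchsorted(cdf, 0.95)]
    print(f"behavioral sets: {behavioral.sum()}  5-95% bounds at x=10: [{lower:.2f}, {upper:.2f}]")
    ```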

  16. Intention of Continuing to use the Hospital Information System: Integrating the elaboration-likelihood, social influence and cognitive learning.

    PubMed

    Farzandipour, Mehrdad; Mohamadian, Hashem; Sohrabi, Niloufar

    2016-12-01

    Anticipating effective factors in information system acceptance by using persuasive messages, is one of the main issues less focused on so far. This is one of the first attempts at using the elaboration-likelihood model combined with the perception of emotional, cognitive, self-efficacy, informational and normative influence constructs, in order to investigate the determinants of intention to continue use of the hospital information system in Iran. The present study is a cross-sectional survey conducted in 2014. 600 nursing staff were chosen from clinical sectors of public hospitals using purposive sampling. The questionnaire survey was in two parts: Part one was comprised of demographic data, and part two included 52 questions pertaining to the constructs of the model in the study. To analyze the data, structural equation model using LISREL 8.5 software was applied. The findings suggest that self-efficacy (t= 6.01, β= 0.21), affective response (t= 5.84, β= 0.23), and cognitive response (t= 4.97, β= 0.21) explained 64% of the variance for the intention of continuing to use the hospital information system. Furthermore, the final model was able to explain 0.46 for self-efficacy, 0.44 for normative social influence, 0.52 for affective response, 0.55 for informational social influence, and 0.53 for cognitive response. Designing the necessary mechanisms and effective use of appropriate strategies to improve emotional and cognitive understanding and self-efficacy of the nursing staff is required, in order to increase the intention of continued use of the hospital information system in Iran.

  17. Sexual Orientation as a Peripheral Cue in Advertising: Effects of Models' Sexual Orientation, Argument Strength, and Involvement on Responses to Magazine Ads.

    PubMed

    Ivory, Adrienne Holz

    2017-10-12

    This study examines how sexual orientation of couples featured in magazine advertisements affects heterosexual viewers' responses using the elaboration likelihood model as a framework. A 3 × 2 × 2 × 3 experiment tested effects of sexual orientation, argument strength, involvement, and attitudes toward homosexuality on heterosexuals' attitudes toward the couple, advertisement, brand, and product, purchase intentions, and recall. Results indicate that consumers were accepting of ads with lesbian portrayals. Participants showed more negative attitudes toward gay male portrayals, but attitudes toward heterosexual and lesbian ads were similar. This effect was moderated by participants' attitudes toward homosexuals. Low-involvement consumers showed more negative attitudes toward homosexual portrayals than toward heterosexual portrayals, indicating that sexual orientation may have served as a peripheral cue negatively impacting attitudes toward the couple and ad under low elaboration. These effects were not observed for attitudes toward the brand and product, purchase intentions, or recall.

  18. Using the Elaboration Likelihood Model to Address Drunkorexia Among College Students.

    PubMed

    Glassman, Tavis; Paprzycki, Peter; Castor, Thomas; Wotring, Amy; Wagner-Greene, Victoria; Ritzman, Matthew; Diehr, Aaron J; Kruger, Jessica

    2017-12-26

    The many consequences related to alcohol consumption among college students are well documented. Drunkorexia, a relatively new term and area of research, is characterized by skipping meals to reduce caloric intake and/or exercising excessively in attempt to compensate for calories associated with high volume drinking. The objective of this study was to use the Elaboration Likelihood Model to compare the impact of central and peripheral prevention messages on alcohol consumption and drunkorexic behavior. Researchers employed a quasi-experimental design, collecting pre- or post-test data from 172 college students living in residence halls at a large Midwestern university, to assess the impact of the prevention messages. Participants in the treatment groups received the message in person (flyer), through email, and via a text message in weekly increments. Results showed that participants exposed to the peripherally framed message decreased the frequency of their alcohol consumption over a 30-day period (p =.003), the number of drinks they consumed the last time they drank (p =.029), the frequency they had more than five drinks over a 30-day period (p =.019), as well as the maximum number of drinks they had on any occasion in the past 30 days (p =.014). Conclusions/Importance: While more research is needed in this area, the findings from this study indicate that researchers and practitioners should design peripheral (short and succinct), rather than central (complex and detailed), messages to prevent drunkorexia and its associated behaviors.

  19. Message sensation and cognition values: factors of competition or integration?

    PubMed

    Xu, Jie

    2015-01-01

    Using the Activation Model of Information Exposure and Elaboration Likelihood Model as theoretical frameworks, this study explored the effects of message sensation value (MSV) and message cognition value (MCV) of antismoking public service announcements (PSAs) on ad processing and evaluation among young adults, and the difference between high sensation seekers and low sensation seekers in their perceptions and responses toward ads with different levels of sensation and cognition value. A 2 (MSV: high vs. low) × 2 (MCV: high vs. low) × 2 (need for sensation: high vs. low) mixed experimental design was conducted. Two physiological measures including skin conductance and heart rate were examined. Findings of this study show that MSV was not a distraction but a facilitator of message persuasiveness. These findings contribute to the activation model. In addition, need for sensation moderated the interaction effect of MSV and MCV on ad processing. Low sensation seekers were more likely to experience the interaction between MSV and MCV than high sensation seekers. Several observations related to the findings and implications for antismoking message designs are elaborated. Limitations and directions for future research are also outlined.

  20. Intention of Continuing to use the Hospital Information System: Integrating the elaboration-likelihood, social influence and cognitive learning

    PubMed Central

    Farzandipour, Mehrdad; Mohamadian, Hashem; Sohrabi, Niloufar

    2016-01-01

    Introduction Anticipating effective factors in information system acceptance by using persuasive messages, is one of the main issues less focused on so far. This is one of the first attempts at using the elaboration-likelihood model combined with the perception of emotional, cognitive, self-efficacy, informational and normative influence constructs, in order to investigate the determinants of intention to continue use of the hospital information system in Iran. Methods The present study is a cross-sectional survey conducted in 2014. 600 nursing staff were chosen from clinical sectors of public hospitals using purposive sampling. The questionnaire survey was in two parts: Part one was comprised of demographic data, and part two included 52 questions pertaining to the constructs of the model in the study. To analyze the data, structural equation model using LISREL 8.5 software was applied. Result The findings suggest that self-efficacy (t= 6.01, β= 0.21), affective response (t= 5.84, β= 0.23), and cognitive response (t= 4.97, β= 0.21) explained 64% of the variance for the intention of continuing to use the hospital information system. Furthermore, the final model was able to explain 0.46 for self-efficacy, 0.44 for normative social influence, 0.52 for affective response, 0.55 for informational social influence, and 0.53 for cognitive response. Conclusion Designing the necessary mechanisms and effective use of appropriate strategies to improve emotional and cognitive understanding and self-efficacy of the nursing staff is required, in order to increase the intention of continued use of the hospital information system in Iran. PMID:28163852

  1. Role of negative emotion in communication about CO2 risks.

    PubMed

    Meijnders, A L; Midden, C J; Wilke, H A

    2001-10-01

    This article describes how the effectiveness of risk communication is determined by the interaction between emotional and informative elements. An experiment is described that examined the role of negative emotion in communication about CO2 risks. This experiment was based on the elaboration likelihood model and the related heuristic systematic model of attitude formation. The results indicated that inducing fear of CO2 risks leads to systematic processing of information about energy conservation as a risk-reducing strategy. In turn, this results in more favorable attitudes toward energy conservation if strong arguments are provided. Individual differences in concern seem to have similar effects.

  2. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of censored survival data from cancer patients receiving treatment, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. With a gamma prior, the likelihood yields a gamma posterior distribution. The posterior is used to derive the estimator \hat{λ}_{BL} via the Linex approximation. From \hat{λ}_{BL}, the estimators of the hazard function \hat{h}_{BL} and the survival function \hat{S}_{BL} are obtained. Finally, we compare maximum likelihood estimation (MLE) with the Linex-based Bayesian approach and select the better method for this observation by the smaller mean squared error (MSE). The MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, whereas under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than the MLE.
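
    As a brief worked illustration (my own sketch, not drawn from the paper, and assuming the usual Linex loss L(Δ) = e^{aΔ} − aΔ − 1 with Δ = \hat{λ} − λ and a Gamma(α, β) posterior with rate parameter β), the Bayes estimator under Linex loss has the closed form

        \hat{λ}_{BL} = -\frac{1}{a}\,\ln E\!\left[e^{-aλ}\mid \mathrm{data}\right]
                     = -\frac{α}{a}\,\ln\!\left(\frac{β}{β+a}\right)
                     = \frac{α}{a}\,\ln\!\left(1+\frac{a}{β}\right),

    which approaches the posterior mean α/β as a → 0. For exponential survival times with d observed events and total (censored) follow-up time T under a Gamma(α₀, β₀) prior, the posterior parameters are α = α₀ + d and β = β₀ + T; one simple route to the hazard and survival estimators is then \hat{h}_{BL} = \hat{λ}_{BL} and \hat{S}_{BL}(t) = e^{-\hat{λ}_{BL}\,t}.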

  3. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  4. A case study for the integration of predictive mineral potential maps

    NASA Astrophysics Data System (ADS)

    Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye

    2014-09-01

    This study aims to produce mineral potential maps using various models and to verify their accuracy for epithermal gold (Au)-silver (Ag) deposits in a Geographic Information System (GIS) environment, assuming that all deposits share a common genesis. The maps of potential Au and Ag deposits were produced from geological data for the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: 1) identification of spatial relationships, 2) quantification of those relationships, and 3) combination of the multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and were afterwards recombined using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), compared with 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. Mineral potential maps of this kind can provide useful information for mineral resource development.
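
    For readers unfamiliar with the likelihood ratio (frequency ratio) weighting used to combine evidential layers, the following minimal sketch (an illustration under simplifying assumptions, not code from the study) computes the ratio for one categorical factor map as the proportion of known deposits in each class divided by the proportion of the study area in that class; summing the resulting weights across factor maps on a grid then yields a potential index.

        import numpy as np

        def likelihood_ratio_by_class(factor_map, deposit_mask):
            """Frequency/likelihood ratio for each class of a categorical factor map.

            factor_map   -- 2-D integer array of class labels covering the study area
            deposit_mask -- 2-D boolean array, True in cells containing a known deposit
            """
            ratios = {}
            total_cells = factor_map.size
            total_deposits = deposit_mask.sum()
            for cls in np.unique(factor_map):
                in_class = factor_map == cls
                area_fraction = in_class.sum() / total_cells
                deposit_fraction = (deposit_mask & in_class).sum() / total_deposits
                ratios[int(cls)] = deposit_fraction / area_fraction if area_fraction > 0 else 0.0
            return ratios

        # Toy usage: a 4x4 map with two classes and two known deposits, both in class 1.
        factor = np.array([[0, 0, 1, 1]] * 4)
        deposits = np.zeros_like(factor, dtype=bool)
        deposits[0, 2] = deposits[3, 3] = True
        print(likelihood_ratio_by_class(factor, deposits))   # {0: 0.0, 1: 2.0}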

  5. Focusing on media body ideal images triggers food intake among restrained eaters: a test of restraint theory and the elaboration likelihood model.

    PubMed

    Boyce, Jessica A; Kuijer, Roeline G

    2014-04-01

    Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images, whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literacy interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Practical Findings from Applying the PSD Model for Evaluating Software Design Specifications

    NASA Astrophysics Data System (ADS)

    Räisänen, Teppo; Lehto, Tuomas; Oinas-Kukkonen, Harri

    This paper presents practical findings from applying the PSD model to evaluating the support for persuasive features in software design specifications for a mobile Internet device. On the one hand, our experiences suggest that the PSD model fits relatively well for evaluating design specifications. On the other hand, the model would benefit from more specific heuristics for evaluating each technique to avoid unnecessary subjectivity. Better distinction between the design principles in the social support category would also make the model easier to use. Practitioners who have no theoretical background can apply the PSD model to increase the persuasiveness of the systems they design. The greatest benefit of the PSD model for researchers designing new systems may be achieved when it is applied together with a sound theory, such as the Elaboration Likelihood Model. Using the ELM together with the PSD model, one may increase the chances for attitude change.

  7. Impact of celebrity pitch in direct-to-consumer advertising of prescription drugs.

    PubMed

    Bhutada, Nilesh S; Menon, Ajit M; Deshpande, Aparna D; Perri, Matthew

    2012-01-01

    Online surveys were conducted to determine the impact of endorser credibility, endorser effectiveness, and consumers' involvement in direct-to-consumer advertising of prescription drugs. In a randomized, posttest-only study guided by the elaboration likelihood model, survey participants (U.S. adults) were exposed to a fictitious prescription drug ad with either a celebrity or a noncelebrity endorser. There was no significant difference in credibility or effectiveness between the celebrity and the noncelebrity endorser. High-involvement consumers viewed the ad more favorably and exhibited significantly stronger intentions to ask about the drug during their next doctor visit. Further, consumers' involvement did not moderate the effect of the celebrity endorser.

  8. CFHTLenS: a Gaussian likelihood is a sufficient approximation for a cosmological analysis of third-order cosmic shear statistics

    NASA Astrophysics Data System (ADS)

    Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.

    2015-05-01

    We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys similar to CFHTLenS a Gaussian likelihood analysis is a reasonable approximation, although small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^{0.64} = 0.79^{+0.08}_{-0.11} for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless, our models provide only moderately good fits, as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics, of which we find evidence at least on scales of a few arcmin. Therefore, we need a better understanding of higher-order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.

  9. Occupational value and relationships to meaning and health: elaborations of the ValMO-model.

    PubMed

    Erlandsson, Lena-Karin; Eklund, Mona; Persson, Dennis

    2011-03-01

    This study investigates the theoretical assumptions of the Value and Meaning in Occupations (ValMO) model. The aim was to explore the relationships between occupational value, perceived meaning, and subjective health in a sample of individuals of working age (50 men and 250 women). The frequency of experienced value in occupations was assessed with the Occupational Value instrument with pre-defined items. Perceived meaning was operationalized and assessed by the Sense of Coherence measure, and subjective health was estimated by two questions from the SF-36 questionnaire. The analyses comprised descriptive statistics, correlations, and logistic regression analyses in which sociodemographic variables were included. The findings showed highly significant relationships between occupational value and perceived meaning: belonging to the high occupational-value group tripled the likelihood of belonging to the high perceived-meaning group, and being married or cohabiting doubled that likelihood. Although perceived meaning was positively associated with subjective health, working full time, compared with working less than full time, was the most important factor in explaining subjective health. The results confirm assumptions of the ValMO model, and the importance of focusing on occupational value in clinical practice is highlighted.

  10. Influences of Teacher-Child Social Interactions on English Language Development in a Head Start Classroom

    ERIC Educational Resources Information Center

    Piker, Ruth Alfaro; Rex, Lesley A.

    2008-01-01

    Increasing numbers of Spanish-speaking preschool children require attention to improve the likelihood of success in school. This study, part of a larger 2-year ethnographic study of a Head Start classroom, elaborates the role of teachers' interactions with students who were learning English. Using an interactional ethnography approach, the authors…

  11. Specification and misspecification of theoretical foundations and logic models for health communication campaigns.

    PubMed

    Slater, Michael D

    2006-01-01

    While increasingly widespread use of behavior change theory is an advance for communication campaigns and their evaluation, such theories provide a necessary but not sufficient condition for theory-based communication interventions. Such interventions and their evaluations need to incorporate theoretical thinking about plausible mechanisms of message effect on health-related attitudes and behavior. Otherwise, strategic errors in message design and dissemination, and misspecified campaign logic models, insensitive to campaign effects, are likely to result. Implications of the elaboration likelihood model, attitude accessibility, attitude to the ad theory, exemplification, and framing are explored, and implications for campaign strategy and evaluation designs are briefly discussed. Initial propositions are advanced regarding a theory of campaign affect generalization derived from attitude to ad theory, and regarding a theory of reframing targeted health behaviors in those difficult contexts in which intended audiences are resistant to the advocated behavior or message.

  12. Animal Disease Import Risk Analysis--a Review of Current Methods and Practice.

    PubMed

    Peeler, E J; Reese, R A; Thrush, M A

    2015-10-01

    The application of risk analysis to the spread of disease with international trade in animals and their products, that is, import risk analysis (IRA), has been largely driven by the Sanitary and Phytosanitary (SPS) agreement of the World Trade Organization (WTO). The degree to which the IRA standard established by the World Organization for Animal Health (OIE), and associated guidance, meets the needs of the SPS agreement is discussed. The use of scenario trees is the core modelling approach used to represent the steps necessary for the hazard to occur. There is scope to elaborate scenario trees for commodity IRA so that the quantity of hazard at each step is assessed, which is crucial to the likelihood of establishment. The dependence between exposure and establishment suggests that they should fall within the same subcomponent. IRA undertaken for trade reasons must include an assessment of consequences to meet SPS criteria, but guidance is sparse. The integration of epidemiological and economic modelling may open a path for better methods. Matrices have been used in qualitative IRA to combine estimates of entry and exposure, and consequences with likelihood, but this approach has flaws and better methods are needed. OIE IRA standards and guidance indicate that the volume of trade should be taken into account, but offer no detail. Some published qualitative IRAs have assumed current levels and patterns of trade without specifying the volume of trade, which constrains the use of IRA to determine mitigation measures (to reduce risk to an acceptable level) and whether the principle of equivalence, fundamental to the SPS agreement, has been observed. It is questionable whether qualitative IRA can meet all the criteria set out in the SPS agreement. Nevertheless, scope exists to elaborate the current standards and guidance, so they better serve the principle of science-based decision-making. © 2013 Crown copyright. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.

  13. Online shopping interface components: relative importance as peripheral and central cues.

    PubMed

    Warden, Clyde A; Wu, Wann-Yih; Tsai, Dungchun

    2006-06-01

    The Elaboration Likelihood Model (ELM) uses central (more thoughtful) and peripheral (less thoughtful) routes of persuasion to maximize communication effectiveness. This research implements ELM to investigate the relative importance of different aspects of the user experience in online shopping. Of all the issues surrounding online shopping, convenience, access to information, and trust were found to be the most important. These were implemented in an online conjoint shopping task. Respondents were found to use the central route of the ELM on marketing messages that involved issues of minimizing travel, information access, and assurances of system security. Users employed the peripheral ELM route when considering usability, price comparison, and personal information protection. A descriptive model of Web-based marketing components, their roles in the central and peripheral routes, and their relative importance to online consumer segments was developed.

  14. Emotion and persuasion: cognitive and meta-cognitive processes impact attitudes.

    PubMed

    Petty, Richard E; Briñol, Pablo

    2015-01-01

    This article addresses the multiple ways in which emotions can influence attitudes and persuasion via primary and secondary (meta-) cognition. Using the elaboration likelihood model of persuasion as a guide, we review evidence for five fundamental processes that occur at different points along the elaboration continuum. When the extent of thinking is constrained to be low, emotions influence attitudes by relatively simple processes that lead them to change in a manner consistent with the valence of the emotion. When thinking is constrained to be high, emotions can serve as arguments in favour of a proposal if they are relevant to the merits of the advocacy or they can bias thinking if the emotion precedes the message. If thinking is high and emotions become salient after thinking, they can lead people to rely or not rely on the thoughts generated either because the emotion leads people to like or dislike their thoughts (affective validation) or feel more confident or doubtful in their thoughts (cognitive validation). When thinking is unconstrained, emotions influence the extent of thinking about the persuasive communication. Although prior theories have addressed one or more of these fundamental processes, no other approach has integrated them into one framework.

  15. Models and theories of prescribing decisions: A review and suggested a new model.

    PubMed

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most existing studies in the area of drug prescription explain physicians' decision-making through an exploratory rather than a theoretical approach. This review therefore attempts to suggest a conceptual model that explains the theoretical linkages between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe a drug. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, it draws on several valuable perspectives, such as persuasion theory (the elaboration likelihood model), the stimulus-response marketing model, agency theory, the theory of planned behaviour, and social power theory, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research.

  16. Entertainment-education and recruitment of cornea donors: the role of emotion and issue involvement.

    PubMed

    Bae, Hyuhn-Suhck

    2008-01-01

    This study examined the role of emotional responses to, and viewers' level of issue involvement with, an entertainment-education show about cornea donation in predicting intention to register as a cornea donor. Results confirmed that sympathy and empathy responses operated as catalysts for issue involvement, which emerged as an important intermediary in the persuasion process. Issue involvement was also found to be a common causal antecedent of attitude, subjective norm, and perceived behavioral control; the last two predicted intentions, whereas attitude did not. The revised path model confirmed that involvement directly influences intention. The findings suggest that adding emotion and involvement to the Theory of Planned Behavior (TPB) enhances the theory's explanatory power in predicting intentions, which indicates the possibility of combining the Elaboration Likelihood Model (ELM) and the TPB in predicting human behavior.

  17. Drama advertisements: moderating effects of self-relevance on the relations among empathy, information processing, and attitudes.

    PubMed

    Chebat, Jean-Charles; Vercollier, Sarah Drissi; Gélinas-Chebat, Claire

    2003-06-01

    The effects of drama versus lecture format in public service advertisements are studied in a 2 (format) x 2 (topic: malaria vs AIDS) factorial design. Two structural equation models are built (one for each level of self-relevance), showing two distinct patterns. In both low and high self-relevance conditions, empathy plays a key role. Under low self-relevance conditions, drama enhances information processing through empathy. Under high self-relevance conditions, the advertisement format has neither significant cognitive nor empathetic effects; the information processing generated by the highly relevant topic affects viewers' empathy, which in turn affects the attitude toward the advertisement and behavioral intent. As predicted by the Elaboration Likelihood Model, the advertisement format enhances attitudes and information processing mostly under low self-relevance conditions. Under low self-relevance, empathy enhances information processing, while under high self-relevance the converse relation holds.

  18. Fast integration-based prediction bands for ordinary differential equation models.

    PubMed

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties of the model's parameters and in turn to uncertainties of predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org. Contact: helge.hass@fdm.uni-freiburg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Responder analysis without dichotomization.

    PubMed

    Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li

    2016-01-01

    In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
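
    A minimal sketch of the model-based idea (an illustration under an assumed normal model for the underlying continuous endpoint, not code from the paper): rather than counting dichotomized responders, the responder proportion in each arm is obtained by plugging maximum likelihood estimates into the model-implied probability of exceeding the response threshold.

        import numpy as np
        from scipy.stats import norm

        def model_based_responder_proportion(y, threshold):
            """Estimate P(Y >= threshold) assuming Y ~ Normal(mu, sigma^2), with MLEs plugged in."""
            mu_hat = np.mean(y)
            sigma_hat = np.std(y)                        # MLE of sigma uses ddof=0
            return 1.0 - norm.cdf(threshold, loc=mu_hat, scale=sigma_hat)

        rng = np.random.default_rng(1)
        treatment = rng.normal(2.0, 3.0, size=100)       # simulated change scores
        control = rng.normal(0.5, 3.0, size=100)
        c = 3.0                                          # pre-specified responder threshold

        model_diff = model_based_responder_proportion(treatment, c) - model_based_responder_proportion(control, c)
        crude_diff = np.mean(treatment >= c) - np.mean(control >= c)
        print(model_diff, crude_diff)                    # the model-based contrast is typically less noisy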

  20. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence or to identify person or kin (i.e.: in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the field of forensics, biomedicine, and software engineering. MS Excel calculation using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of two software programs fulfills the criteria needed for our purpose in the whole spectrum of functions under validation with the exceptions of providing algebraic formulas in cases of mutation and/or silent allele.
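
    As a small reference computation of the kind that can be checked against spreadsheet results (my own illustrative case, not one of the seven test cases in the paper): for a standard trio in which the mother's genotype fixes a single obligate paternal allele with population frequency p, the single-locus paternity index is X/Y, with X the probability that the alleged father transmits that allele and Y the probability that a random man does; mutation and silent alleles are ignored here.

        def paternity_index(p_obligate, father_is_heterozygous):
            """Single-locus paternity index for the simple trio case (no mutation, no silent alleles).

            p_obligate             -- population frequency of the obligate paternal allele
            father_is_heterozygous -- True if the alleged father carries one copy of it, False if two
            """
            x = 0.5 if father_is_heterozygous else 1.0   # P(transmits the allele | true father)
            y = p_obligate                               # P(allele from a random man)
            return x / y

        # Example: allele frequency 0.1, alleged father homozygous for the allele -> PI = 10.
        print(paternity_index(0.1, father_is_heterozygous=False))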

  1. Direct costs and cost-effectiveness of dual-source computed tomography and invasive coronary angiography in patients with an intermediate pretest likelihood for coronary artery disease.

    PubMed

    Dorenkamp, Marc; Bonaventura, Klaus; Sohns, Christian; Becker, Christoph R; Leber, Alexander W

    2012-03-01

    The study aims to determine the direct costs and comparative cost-effectiveness of latest-generation dual-source computed tomography (DSCT) and invasive coronary angiography for diagnosing coronary artery disease (CAD) in patients suspected of having this disease. The study was based on a previously elaborated cohort with an intermediate pretest likelihood for CAD and on complementary clinical data. Cost calculations were based on a detailed analysis of direct costs, and generally accepted accounting principles were applied. Based on Bayes' theorem, a mathematical model was used to compare the cost-effectiveness of both diagnostic approaches. Total costs included direct costs, induced costs and costs of complications. Effectiveness was defined as the ability of a diagnostic test to accurately identify a patient with CAD. Direct costs amounted to €98.60 for DSCT and to €317.75 for invasive coronary angiography. Analysis of model calculations indicated that cost-effectiveness grew hyperbolically with increasing prevalence of CAD. Given the prevalence of CAD in the study cohort (24%), DSCT was found to be more cost-effective than invasive coronary angiography (€970 vs €1354 for one patient correctly diagnosed as having CAD). At a disease prevalence of 49%, DSCT and invasive angiography were equally effective with costs of €633. Above a threshold value of disease prevalence of 55%, proceeding directly to invasive coronary angiography was more cost-effective than DSCT. With proper patient selection and consideration of disease prevalence, DSCT coronary angiography is cost-effective for diagnosing CAD in patients with an intermediate pretest likelihood for it. However, the range of eligible patients may be smaller than previously reported.
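
    A minimal sketch of the cost-effectiveness logic (an illustration with assumed test characteristics and direct costs only, not the study's full model, which also counts induced costs and complications): effectiveness is the probability that a tested patient is correctly identified as having CAD, roughly sensitivity × prevalence, so the cost per correctly diagnosed patient falls hyperbolically as prevalence rises. The crossover reported above arises in the full model, plausibly because positive DSCT findings still lead to confirmatory invasive angiography.

        def cost_per_correct_diagnosis(cost_per_patient, sensitivity, prevalence):
            """Expected cost divided by the expected number of true-positive CAD diagnoses per patient."""
            return cost_per_patient / (sensitivity * prevalence)

        # Illustrative, assumed numbers (direct costs only); the study's totals are higher.
        for prevalence in (0.10, 0.24, 0.50):
            dsct = cost_per_correct_diagnosis(98.60, sensitivity=0.96, prevalence=prevalence)
            ica = cost_per_correct_diagnosis(317.75, sensitivity=1.00, prevalence=prevalence)
            print(f"prevalence {prevalence:.2f}: DSCT {dsct:7.0f} EUR, ICA {ica:7.0f} EUR per correct diagnosis")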

  2. Planck 2013 results. XV. CMB power spectra and likelihood

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. 
A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ < 50, our likelihood exploits all Planck frequency channels from 30 to 353 GHz, separating the cosmological CMB signal from diffuse Galactic foregrounds through a physically motivated Bayesian component separation technique. At ℓ ≥ 50, we employ a correlated Gaussian likelihood approximation based on a fine-grained set of angular cross-spectra derived from multiple detector combinations between the 100, 143, and 217 GHz frequency channels, marginalising over power spectrum foreground templates. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK2 at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary PlanckEE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, ns = 1. Increasing the multipole range beyond ℓ ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-ℓ spectrum in the form of a power deficit of 5-10% at ℓ ≲ 40, with a statistical significance of 2.5-3σ. Without a theoretically motivated model for this power deficit, we do not elaborate further on its cosmological implications, but note that this is our most puzzling finding in an otherwise remarkably consistent data set.
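
    For orientation, the "correlated Gaussian likelihood approximation" used at high ℓ has the generic form (a schematic sketch, not the collaboration's exact expression, which is built from many cross-spectra and marginalises over foreground templates)

        -2\ln\mathcal{L}(\theta) = \sum_{\ell\ell'} \left(\hat{C}_{\ell} - C_{\ell}(\theta)\right) [M^{-1}]_{\ell\ell'} \left(\hat{C}_{\ell'} - C_{\ell'}(\theta)\right) + \mathrm{const},

    where \hat{C}_{\ell} are the measured band powers, C_{\ell}(\theta) is the model spectrum (plus foreground contributions), and M is the band-power covariance matrix.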

  3. Development of environmentally friendly messages to promote longer durations of breastfeeding for already breastfeeding mothers.

    PubMed

    Hamilton, Amanda E

    2015-01-01

    Durations of breastfeeding in the United States fall short of the recommendations established by leading public health institutions. In response to this problem, this study sought to develop environmentally friendly messages that promote continued breastfeeding among mothers who are already breastfeeding, in order to help them reach the recommended durations. Such messages were successfully developed. In addition, although it was not the purpose of the research, the study generated strategies for using environmentally friendly messages to encourage mothers who have yet to decide between breastfeeding and formula feeding to choose breastfeeding. Avenues for future communication-based breastfeeding research were also elucidated. The Elaboration Likelihood Model serves as a useful theory for assessing the role of environmentally friendly messages in the promotion of continued breastfeeding.

  4. Understanding the role consumer involvement plays in the effectiveness of hospital advertising.

    PubMed

    McCullough, Tammy; Dodge, H Robert

    2002-01-01

    Both intensified competition and greater consumer participation in the choice process for healthcare has increased the importance of advertising for health care providers and seriously challenged many of the preconceptions regarding advertising. This study investigates the effectiveness of advertising under conditions of high and low involvement using the Elaboration Likelihood Model to develop hypotheses that are tested in a 2 x 2 x 2 experimental design. The study findings provide insights into the influence of message content and message source on consumers categorized as high or low involvement. It was found that consumers classified as high-involvement are more influenced by a core service-relevant message than those consumers classified as low-involvement. Moreover, a non-physician spokesperson was found to have as much or more influence as a physician spokesperson regardless of the consumers' involvement level.

  5. Validation of persuasive messages for the promotion of physical activity among people with coronary heart disease.

    PubMed

    Mendez, Roberto Della Rosa; Rodrigues, Roberta Cunha Matheus; Spana, Thaís Moreira; Cornélio, Marília Estevam; Gallani, Maria Cecília Bueno Jayme; Pérez-Nebra, Amalia Raquel

    2012-01-01

    The aim was to validate the content of persuasive messages for promoting walking among patients with coronary heart disease (CHD); the messages were constructed to strengthen or change patients' attitudes toward walking. The selection of persuasive arguments was based on behavioral beliefs (determinants of attitude) related to walking. The messages were constructed on the basis of the Elaboration Likelihood Model and were submitted to content validation. The data were analyzed with the content validity index and by the importance the patients attributed to the messages' persuasive arguments. Positive behavioral beliefs (i.e., positive and negative reinforcement) and self-efficacy were the appeals the patients considered important. The messages with validation evidence will be tested in an intervention study promoting physical activity among patients with CHD.

  6. Models and theories of prescribing decisions: A review and suggested a new model

    PubMed Central

    Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most existing studies in the area of drug prescription explain physicians' decision-making through an exploratory rather than a theoretical approach. This review therefore attempts to suggest a conceptual model that explains the theoretical linkages between marketing efforts, the patient, the pharmacist, and the physician's decision to prescribe a drug. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, it draws on several valuable perspectives, such as persuasion theory (the elaboration likelihood model), the stimulus-response marketing model, agency theory, the theory of planned behaviour, and social power theory, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research. PMID:28690701

  7. Using fear appeals in warning labels to promote responsible gambling among VLT players: the key role of depth of information processing.

    PubMed

    Munoz, Yaromir; Chebat, Jean-Charles; Suissa, Jacob Amnon

    2010-12-01

    Video lottery terminals (VLTs) are a highly lucrative gambling format, but at the same time they are among the most hazardous. Previous research has shown that threatening warnings may be an appropriate approach for promoting protective behavior. The present study explores the potential benefits of threatening warnings in the fight against compulsive gambling. A 4 × 2 factorial design experiment was used to test our model, which is based on both the Elaboration Likelihood Model and Protection Motivation Theory. A total of 258 adult VLT players (58% male, 42% female) with various degrees of problem gambling were exposed to three threat levels (plus a control condition) from two different sources (either a medical source or a source related to the VLT provider). Our results show that both higher-threat warnings and the medical source enhance Depth of Information Processing. Depth of Information Processing, in turn, positively affects attitude change and compliance intentions. The theoretical and managerial implications are discussed.

  8. Dissociating the effects of semantic grouping and rehearsal strategies on event-related brain potentials.

    PubMed

    Schleepen, T M J; Markus, C R; Jonkman, L M

    2014-12-01

    The application of elaborative encoding strategies during learning, such as grouping items on similar semantic categories, increases the likelihood of later recall. Previous studies have suggested that stimuli that encourage semantic grouping strategies had modulating effects on specific ERP components. However, these studies did not differentiate between ERP activation patterns evoked by elaborative working memory strategies like semantic grouping and more simple strategies like rote rehearsal. Identification of neurocognitive correlates underlying successful use of elaborative strategies is important to understand better why certain populations, like children or elderly people, have problems applying such strategies. To compare ERP activation during the application of elaborative versus more simple strategies subjects had to encode either four semantically related or unrelated pictures by respectively applying a semantic category grouping or a simple rehearsal strategy. Another goal was to investigate if maintenance of semantically grouped vs. ungrouped pictures modulated ERP-slow waves differently. At the behavioral level there was only a semantic grouping benefit in terms of faster responding on correct rejections (i.e. when the memory probe stimulus was not part of the memory set). At the neural level, during encoding semantic grouping only had a modest specific modulatory effect on a fronto-central Late Positive Component (LPC), emerging around 650 ms. Other ERP components (i.e. P200, N400 and a second Late Positive Component) that had been earlier related to semantic grouping encoding processes now showed stronger modulation by rehearsal than by semantic grouping. During maintenance semantic grouping had specific modulatory effects on left and right frontal slow wave activity. These results stress the importance of careful control of strategy use when investigating the neural correlates of elaborative encoding. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Comparison of sampling techniques for Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
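
    As a minimal illustration of the first of these samplers (a toy sketch on a Gaussian likelihood in the spirit of the paper's toy models, not the authors' code):

        import numpy as np

        def log_posterior(theta):
            # Toy target: standard 2-D Gaussian log-density (flat priors).
            return -0.5 * np.sum(theta ** 2)

        def metropolis_hastings(log_post, theta0, n_steps=20000, step_size=0.5, seed=0):
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            current = log_post(theta)
            chain = np.empty((n_steps, theta.size))
            for i in range(n_steps):
                proposal = theta + step_size * rng.standard_normal(theta.size)
                proposal_lp = log_post(proposal)
                # Accept with probability min(1, posterior ratio).
                if np.log(rng.random()) < proposal_lp - current:
                    theta, current = proposal, proposal_lp
                chain[i] = theta
            return chain

        chain = metropolis_hastings(log_posterior, theta0=[3.0, -3.0])
        burned = chain[5000:]                            # discard burn-in
        print(burned.mean(axis=0), burned.std(axis=0))   # close to [0, 0] and [1, 1]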

  10. Is Advanced Real-Time Energy Metering Sufficient to Persuade People to Save Energy?

    NASA Astrophysics Data System (ADS)

    Ting, L.; Leite, H.; Ponce de Leão, T.

    2012-10-01

    In order to promote a low-carbon economy, EU citizens may soon be able to check their electricity consumption on smart meters. It is hoped that smart meters, by providing real-time consumption and pricing information to residential users, can help reduce demand for electricity. This paper argues that, according to the Elaboration Likelihood Model (ELM), these methods are most likely to be effective when consumers perceive the issue of energy conservation as relevant to their lives. Nevertheless, some fundamental characteristics of these methods limit perceived personal relevance; for instance, energy expenditure may be relatively small compared with other household expenditures such as a mortgage, and consumption information does not enhance interpersonal trust. The paper suggests that smart meters can apply "nudge" approaches, which correspond in ELM terms to the use of simple decision rules, including changes to feedback delivery and device design.

  11. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  12. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
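
    The simplest member of the model family sketched above is the additive binary Markov chain, in which only the zeroth- and first-order memory-function terms are kept, so that the conditional probability of the next symbol is a constant plus a linear sum of contributions from the past. A minimal generator under that assumption (an illustrative sketch with made-up memory-function values, not the authors' code):

        import numpy as np

        def additive_binary_chain(n, p_bar, memory, seed=0):
            """Generate a binary sequence whose conditional probability of a 1 is
            p_bar + sum_r memory[r] * (a[t-1-r] - p_bar), clipped to [0, 1]."""
            rng = np.random.default_rng(seed)
            N = len(memory)
            a = np.empty(n, dtype=int)
            a[:N] = rng.random(N) < p_bar                # seed the first N symbols
            for t in range(N, n):
                p = p_bar + sum(memory[r] * (a[t - 1 - r] - p_bar) for r in range(N))
                a[t] = rng.random() < np.clip(p, 0.0, 1.0)
            return a

        chain = additive_binary_chain(100000, p_bar=0.5, memory=[0.2, 0.1, 0.05])
        print(chain.mean())                              # stays close to p_bar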

  13. Alcohol counter-advertising and the media. A review of recent research.

    PubMed

    Agostinelli, Gina; Grube, Joel W

    2002-01-01

    Counter-advertising commonly is used to balance the effects that alcohol advertising may have on alcohol consumption and alcohol-related problems. Such measures can take the form of print or broadcast advertisements (e.g., public service announcements [PSAs]) as well as product warning labels. The effectiveness of both types of counter-advertising is reviewed using the Elaboration Likelihood Model as a theoretical framework. For print and broadcast counter-advertisements, such factors as their emotional appeal and the credibility of the source, as well as audience factors, can influence their effectiveness. Further, brewer-sponsored counter-advertisements are evaluated and received differently than are the more conventional PSA counter-advertisements. For warning labels, both the content and design of the label influence their effectiveness, as do audience factors. The effectiveness of those labels is evaluated in terms of the extent to which they impact cognitive and affective processes as well as drinking behavior.

  14. Food-chain contamination evaluations in ecological risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder, G.

    Food-chain models have become increasingly important within the ecological risk assessment process. This is the case particularly when acute effects are not readily apparent, or the contaminants of concern are not readily detoxified, have a high likelihood for partitioning into lipids, or have specific target organs or tissues that may increase their significance in evaluating their potential adverse effects. An overview of food-chain models -- conceptual, theoretical, and empirical -- will be considered through a series of papers that will focus on their application within the ecological risk assessment process. Whether a food-chain evaluation is being developed to address relatively simple questions related to chronic effects of toxicants on target populations, or whether a more complex food-web model is being developed to address questions related to multiple-trophic level transfers of toxicants, the elements within the food chain contamination evaluation can be generalized to address the mechanisms of toxicant accumulation in individual organisms. This can then be incorporated into more elaborate models that consider these organismal-level processes within the context of a species life-history or community-level responses that may be associated with long-term exposures.

  15. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    PubMed

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2017-07-01

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The questions: "what to validate?" focuses on the validation methods and criteria and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical for validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions. Copyright © 2016. Published by Elsevier B.V.
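
    One performance metric widely used when validating likelihood-ratio methods of this kind is the log-likelihood-ratio cost, Cllr (named here as a common choice in the LR-validation literature, not as a requirement quoted from the Guideline). A minimal sketch:

        import numpy as np

        def cllr(lr_same_source, lr_different_source):
            """Log-likelihood-ratio cost: lower is better; 1.0 corresponds to an uninformative system.

            lr_same_source      -- LRs computed for comparisons known to share a source
            lr_different_source -- LRs computed for comparisons known to involve different sources
            """
            lr_ss = np.asarray(lr_same_source, dtype=float)
            lr_ds = np.asarray(lr_different_source, dtype=float)
            return 0.5 * (np.mean(np.log2(1.0 + 1.0 / lr_ss)) +
                          np.mean(np.log2(1.0 + lr_ds)))

        # Toy check: well-separated LRs give a Cllr far below 1.
        print(cllr([100.0, 50.0, 200.0], [0.01, 0.02, 0.005]))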

  16. SARS wars: an examination of the quantity and construction of health information in the news media.

    PubMed

    Berry, Tanya R; Wharf-Higgins, Joan; Naylor, P J

    2007-01-01

    The media have the power to sway public perception of health issues by choosing what to publish and the context in which to present information. The media may influence an individual's tendency to overestimate the risk of some health issues while underestimating the risk of others, ultimately influencing health choices. Although some research has been conducted to examine the number of articles on selected health topics, little research has examined how the messages are constructed. The purpose of this article is to describe an examination of the construction of news reports on health topics using aspects of the social amplification of risk model and the elaboration likelihood model of persuasion for theoretical direction. One hundred news media reports (print, radio, television, and Internet) were analyzed in terms of message repetition, context, source, and grammar. Results showed that health topics were more often discussed in terms of risk, by credible sources using strong language. This content analysis provides an empirical starting point for future research into how such health news may influence consumers' perceptions of health topics.

  17. The combined influence of central and peripheral routes in the online persuasion process.

    PubMed

    SanJosé-Cabezudo, Rebeca; Gutiérrez-Arranz, Ana M; Gutiérrez-Cillán, Jesús

    2009-06-01

    The elaboration likelihood model (ELM) is one of the most widely used psychological theories in academic literature to account for how advertising information is processed. The current work seeks to overturn one of the basic principles of the ELM and takes account of new variables in the model that help to explain the online persuasion process more clearly. Specifically, we posit that in a context of high-involvement exposure to advertising (e.g., Web pages), central and peripheral processing routes may act together. In a repeated-measures experimental design, 112 participants were exposed to two Web sites of a fictitious travel agency, differing only in their design--serious versus amusing. Findings evidence that a peripheral cue, such as how the Web pages are presented, does prove relevant when attempting to reflect the level of effectiveness. Moreover, if we take account of individuals' motivation when accessing the Internet, whether cognitive or affective, the motivation will impact their response to the Web site design. The work contributes to ELM literature and may help firms to pinpoint those areas and features of Internet advertising that prove most efficient.

  18. Coalescent methods for estimating phylogenetic trees.

    PubMed

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

  19. A controlled evaluation of an eating disorders primary prevention videotape using the Elaboration Likelihood Model of Persuasion.

    PubMed

    Withers, Giselle F; Twigg, Kylie; Wertheim, Eleanor H; Paxton, Susan J

    2002-11-01

    The aim was to extend findings related to a previously reported eating disorders prevention program by comparing treatment and control groups, adding a follow-up, and examining whether receiver characteristics, personal relevance and need for cognition (NFC), could predict attitude change in early adolescent girls. Grade 7 girls were either shown a brief prevention videotape on dieting and body image (n = 104) or given no intervention (n = 114). All girls completed pre-, post- and 1-month follow-up questionnaires. The intervention group showed significantly more positive changes in attitude and knowledge at post-intervention, but only in knowledge at follow-up. There was no strong evidence that pre-intervention characteristics of recipients predicted responses to the videotape intervention when changes were compared to the control group. This prevention videotape appeared to have positive immediate effects, but additional intervention (e.g., booster sessions) may be required for longer-term change. Copyright 2002 Elsevier Science Inc.

  20. Which Type of Risk Information to Use for Whom? Moderating Role of Outcome-Relevant Involvement in the Effects of Statistical and Exemplified Risk Information on Risk Perceptions.

    PubMed

    So, Jiyeon; Jeong, Se-Hoon; Hwang, Yoori

    2017-04-01

    The extant empirical research examining the effectiveness of statistical and exemplar-based health information is largely inconsistent. Under the premise that the inconsistency may be due to an unacknowledged moderator (O'Keefe, 2002), this study examined a moderating role of outcome-relevant involvement (Johnson & Eagly, 1989) in the effects of statistical and exemplified risk information on risk perception. Consistent with predictions based on the elaboration likelihood model (Petty & Cacioppo, 1984), findings from an experiment (N = 237) concerning alcohol consumption risks showed that statistical risk information predicted risk perceptions of individuals with high, rather than low, involvement, while exemplified risk information predicted risk perceptions of those with low, rather than high, involvement. Moreover, statistical risk information contributed to negative attitude toward drinking via increased risk perception only for highly involved individuals, while exemplified risk information influenced the attitude through the same mechanism only for individuals with low involvement. Theoretical and practical implications for health risk communication are discussed.

  1. To think or not to think: two pathways towards persuasion by short films on AIDS prevention.

    PubMed

    Igartua, Juan José; Cheng, Lifen; Lopes, Orquídea

    2003-01-01

    Health messages are designed to stimulate an active cognitive process in audiences that generally have little involvement. The Elaboration Likelihood Model by Petty and Cacioppo holds that subjects with high involvement and those with low involvement react differently to the persuasive message to which they are exposed. One efficient way to capture the attention of low-involvement audiences is to insert the messages within an entertainment context. Our study attempted to analyze affective and cognitive processes to explain the impact of these new formats, fictional shorts for HIV/AIDS prevention. A 2 x 2 factorial design was used, with involvement in the AIDS issue (high/low) and the type of format (musical/dialogue) as independent variables. The findings showed that the higher-quality short (the dialogue style) stimulated more negative affect, induced more cognitive processing, and produced a more favorable attitude towards preventive behavior.

  2. The effect of food label cues on perceptions of quality and purchase intentions among high-involvement consumers with varying levels of nutrition knowledge.

    PubMed

    Walters, Amber; Long, Marilee

    2012-01-01

    To determine whether differences in nutrition knowledge affected how women (a high-involvement group) interpreted intrinsic cues (ingredient list) and extrinsic cues ("all natural" label) on food labels. A 2 (intrinsic cue) × 2 (extrinsic cue) × 2 (nutrition knowledge expert vs novice) within-subject factorial design was used. Participants were 106 female college students (61 experts, 45 novices). Dependent variables were perception of product quality and purchase intention. As predicted by the elaboration likelihood model, experts used central route processing to scrutinize intrinsic cues and make judgments about food products. Novices used peripheral route processing to make simple inferences about the extrinsic cues in labels. Consumers' levels of nutrition knowledge influenced their ability to process food labels. The United States Food and Drug Administration should regulate the "all natural" food label, because this claim is likely to mislead most consumers. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  3. Teaching Mathematical Modelling: Demonstrating Enrichment and Elaboration

    ERIC Educational Resources Information Center

    Warwick, Jon

    2015-01-01

    This paper uses a series of models to illustrate one of the fundamental processes of model building--that of enrichment and elaboration. The paper describes how a problem context is given which allows a series of models to be developed from a simple initial model using a queuing theory framework. The process encourages students to think about the…

  4. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and on a real gas turbine engine system for model identification, respectively. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  5. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  6. Expectancies as a Determinant of Interference Phenomena

    ERIC Educational Resources Information Center

    Hasher, Lynn; Greenberg, Michael

    1977-01-01

    One version, by Lockhart, Craik, and Jacoby, of a levels-of-processing model of memory asserts the importance of the role of expectancies about forthcoming information in determining the elaborateness of a memory trace. Confirmed expectancies result in less-elaborated memory traces; disconfirmed expectancies result in elaborate memory traces.…

  7. Team-Based Learning: Moderating Effects of Metacognitive Elaborative Rehearsal and Middle School History Content Recall

    ERIC Educational Resources Information Center

    Roberts, Greg; Scammacca, Nancy; Osman, David J.; Hall, Colby; Mohammed, Sarojani S.; Vaughn, Sharon

    2014-01-01

    Promoting Acceleration of Comprehension and Content through Text (PACT) and similar team-based models directly engage and support students in learning situations that require cognitive elaboration as part of the processing of new information. Elaboration is subject to metacognitive control, as well (Karpicke, "Journal of Experimental…

  8. Elaboration on the Culturally Informed Iranian Hierarchical Wisdom Model: Comparison with Sternberg's ACCEL Model

    ERIC Educational Resources Information Center

    Karami, Sareh; Ghahremani, Mehdi

    2017-01-01

    Using a grounded theory approach to the study of historical texts and an expert interview, we developed the Iranian hierarchical wisdom model (IHWM; Karami & Ghahremani, 2016). According to IHWM, there are three levels to wisdom: practical intelligence, wise, and sage. In this article, we discuss the model and elaborate on it. Next, we examine…

  9. Graphic gambling warnings: how they affect emotions, cognitive responses and attitude change.

    PubMed

    Muñoz, Yaromir; Chebat, Jean-Charles; Borges, Adilson

    2013-09-01

    The present study focuses on the effects of graphic warnings related to excessive gambling. It is based upon a theoretical model derived from both the Protection Motivation Theory (PMT) and the Elaboration Likelihood Model (ELM). We focus on video lottery terminals (VLTs), one of the most hazardous formats in the gaming industry. Our cohort consisted of 103 actual gamblers who reported previous gambling activity on VLTs on a regular basis. We assess the effectiveness of graphic warnings vs. text-only warnings and the effectiveness of two major arguments (i.e., family vs. financial disruption). A 2 × 2 factorial design was used to test the direct and combined effects of two variables (i.e., warning content and presence vs. absence of a graphic). It was found that the presence of a graphic enhances both cognitive appraisal and fear, and has positive effects on the Depth of Information Processing. In addition, graphic content combined with family disruption is more effective for changing attitudes and complying with the warning than other combinations of the manipulated variables. It is proposed that ELM and PMT complement each other to explain the effects of warnings. Theoretical and practical implications are discussed.

  10. The importance of campaign saliency as a predictor of attitude and behavior change: A pilot evaluation of social marketing campaign Fat Talk Free Week.

    PubMed

    Garnett, Bernice Raveche; Buelow, Robert; Franko, Debra L; Becker, Carolyn; Rodgers, Rachel F; Austin, S Bryn

    2014-01-01

    Fat Talk Free Week (FTFW), a social marketing campaign designed to decrease self-disparaging talk about body and weight, has not yet been evaluated. We conducted a theory-informed pilot evaluation of FTFW with two college samples using a pre- and posttest design. Aligned with the central tenets of the Elaboration Likelihood Model (ELM), we investigated the importance of FTFW saliency as a predictor of fat talk behavior change. Our analytic sample consisted of 118 female participants (83% of original sample). Approximately 76% of the sample was non-Hispanic White, 14% Asian, and 8% Hispanic. At baseline, more than 50% of respondents reported engaging in frequent self fat talk; at posttest, this number dropped to 34% of respondents. Multivariable regression models supported campaign saliency as the single strongest predictor of a decrease in self fat talk. Our results support the social diffusion of campaign messages among shared communities, as we found significant decreases in fat talk among campaign attenders and nonattenders. FTFW may be a promising short-term health communication campaign to reduce fat talk, as campaign messages are salient among university women and may encourage interpersonal communication.

  11. Standard Setting and Risk Preference: An Elaboration of the Theory of Achievement Motivation and an Empirical Test

    ERIC Educational Resources Information Center

    Kuhl, Julius

    1978-01-01

    A formal elaboration of the original theory of achievement motivation (Atkinson, 1957; Atkinson & Feather, 1966) is proposed that includes personal standards as determinants of motivational tendencies. The results of an experiment are reported that examines the validity of some of the implications of the elaborated model proposed here. (Author/RK)

  12. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  13. Investigating Island Evolution: A Galapagos-Based Lesson Using the 5E Instructional Model.

    ERIC Educational Resources Information Center

    DeFina, Anthony V.

    2002-01-01

    Introduces an inquiry-based lesson plan on evolution and the Galapagos Islands. Uses the 5E instructional model which includes phases of engagement, exploration, explanation, elaboration, and evaluation. Includes information on species for exploration and elaboration purposes, and a general rubric for student evaluation. (YDS)

  14. Simulations with Elaborated Worked Example Modeling: Beneficial Effects on Schema Acquisition

    ERIC Educational Resources Information Center

    Meier, Debra K.; Reinhard, Karl J.; Carter, David O.; Brooks, David W.

    2008-01-01

    Worked examples have been effective in enhancing learning outcomes, especially with novice learners. Most of this research has been conducted in laboratory settings. This study examined the impact of embedding elaborated worked example modeling in a computer simulation practice activity on learning achievement among 39 undergraduate students…

  15. Characterizing Perceptual Performance at Multiple Discrimination Precisions in External Noise

    PubMed Central

    Jeon, Seong-Taek; Lu, Zhong-Lin; Dosher, Barbara Anne

    2010-01-01

    Existing observer models developed for studies with the external noise paradigm are strictly only applicable to target detection or identification/discrimination of orthogonal target(s). We elaborated the perceptual template model (PTM) to account for contrast thresholds in identifying non-orthogonal targets. Full contrast psychometric functions were measured in an orientation identification task with four orientation differences across a wide range of external noise levels. We showed that observer performance can be modeled by the elaborated PTM with two templates that correspond to the two stimulus categories. Sampling efficiencies of the human observers were also estimated. The elaborated PTM provides a theoretical framework to characterize joint feature and contrast sensitivity of human observers. PMID:19884915

  16. Using theories of behaviour change to inform interventions for addictive behaviours.

    PubMed

    Webb, Thomas L; Sniehotta, Falko F; Michie, Susan

    2010-11-01

    This paper reviews a set of theories of behaviour change that are used outside the field of addiction and considers their relevance for this field. Ten theories are reviewed in terms of (i) the main tenets of each theory, (ii) the implications of the theory for promoting change in addictive behaviours and (iii) studies in the field of addiction that have used the theory. An augmented feedback loop model based on Control Theory is used to organize the theories and to show how different interventions might achieve behaviour change. Briefly, each theory provided the following recommendations for intervention: Control Theory: prompt behavioural monitoring, Goal-Setting Theory: set specific and challenging goals, Model of Action Phases: form 'implementation intentions', Strength Model of Self-Control: bolster self-control resources, Social Cognition Models (Protection Motivation Theory, Theory of Planned Behaviour, Health Belief Model): modify relevant cognitions, Elaboration Likelihood Model: consider targets' motivation and ability to process information, Prototype Willingness Model: change perceptions of the prototypical person who engages in behaviour and Social Cognitive Theory: modify self-efficacy. There are a range of theories in the field of behaviour change that can be applied usefully to addiction, each one pointing to a different set of modifiable determinants and/or behaviour change techniques. Studies reporting interventions should describe theoretical basis, behaviour change techniques and mode of delivery accurately so that effective interventions can be understood and replicated. © 2010 The Authors. Journal compilation © 2010 Society for the Study of Addiction.

  17. Refusals and Rejections: Designing Messages to Serve Multiple Goals.

    ERIC Educational Resources Information Center

    Saeki, Mimako; O'Keefe, Barbara J.

    1994-01-01

    Tests a rational model of the elaboration of themes found in rejection messages, using Japanese and American participants. Finds partial support for the initial rational model but notes two key revisions: identifies two new themes in rejection messages and suggests substantial differences in the way Americans and Japanese elaborate themes to serve…

  18. Reason, Intuition, and Social Justice: Elaborating on Parson's Career Decision-Making Model.

    ERIC Educational Resources Information Center

    Hartung, Paul J.; Blustein, David L.

    2002-01-01

    Nearly a century ago, Frank Parsons established the Vocation Bureau in Boston and spawned the development of the counseling profession. Elaborating on Parsons's socially responsible vision for counseling, the authors examine contemporary perspectives on career decision making that include both rational and alternative models and propose that these…

  19. The cognitive mediation model: factors influencing public knowledge of the H1N1 pandemic and intention to take precautionary behaviors.

    PubMed

    Ho, Shirley S; Peh, Xianghong; Soh, Veronica W L

    2013-01-01

    This study uses the cognitive mediation model as the theoretical framework to examine the influence of motivations, communication, and news elaboration on public knowledge of the H1N1 pandemic and the intention to take precautionary behaviors in Singapore. Using a nationally representative random digit dialing telephone survey of 1,055 adult Singaporeans, the authors' results show that the cognitive mediation model can be applied to health contexts, in which motivations (surveillance gratification, guidance, and need for cognition) were positively associated with news attention, elaboration, and interpersonal communication. News attention, elaboration, and interpersonal communication in turn positively influence public knowledge about the H1N1 influenza. In addition, results show that the motivations have significant indirect effects on behavioral intentions, as partially mediated by communication (media attention and interpersonal communication), elaboration, and knowledge. The authors conclude that the cognitive mediation model can be extended to behavioral outcomes, above and beyond knowledge. Implications for theory and practice for health communication were discussed.

  20. Assessment of parametric uncertainty for groundwater reactive transport modeling.

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  1. Formality of the Chinese collective leadership.

    PubMed

    Li, Haiying; Graesser, Arthur C

    2016-09-01

    We investigated the linguistic patterns in the discourse of four generations of the collective leadership of the Communist Party of China (CPC) from 1921 to 2012. The texts of Mao Zedong, Deng Xiaoping, Jiang Zemin, and Hu Jintao were analyzed using computational linguistic techniques (a Chinese formality score) to explore the persuasive linguistic features of the leaders in the contexts of power phase, the nation's education level, power duration, and age. The study was guided by the elaboration likelihood model of persuasion, which includes a central route (represented by formal discourse) versus a peripheral route (represented by informal discourse) to persuasion. The results revealed that these leaders adopted the formal, central route more when they were in power than before they came into power. The nation's education level was a significant factor in the leaders' adoption of the persuasion strategy. The leaders' formality also decreased with their increasing age and in-power times. However, the predictability of these factors for formality had subtle differences among the different types of leaders. These results enhance our understanding of the Chinese collective leadership and the role of formality in politically persuasive messages.

  2. "Save 30% if you buy today". Online pharmacies and the enhancement of peripheral thinking in consumers.

    PubMed

    Orizio, Grazia; Rubinelli, Sara; Schulz, Peter J; Domenighini, Serena; Bressanelli, Maura; Caimi, Luigi; Gelatti, Umberto

    2010-09-01

    Online pharmacies (OPs) are recognized as a potential threat to public health. The growth of an unregulated global drugs market risks increasing the spread of counterfeit medicines which are often delivered to consumers without a medical prescription. The aim of the study was to assess the strategies of argumentation that OPs adopt in their marketing. A sample of 175 OPs was analyzed using the content-analysis method, and evaluated by relying on the Elaboration Likelihood Model (ELM) of persuasion. Almost 80% of the sample of OPs did not ask for a medical prescription by the consumer's physician. The selling arguments used included privacy policy, economic, quality, and service issues. About one-third of the OPs did not declare any side-effects regarding the drugs offered. Our results show that OPs advertise their products in an argumentative fashion that enhances consumers' peripheral reflection: by analogically playing with the selling of other commodities, they magnify aspects of the online trade that consumers might find convenient, but overshadow the nature and risks of the actual products they sell. (c) 2010 John Wiley & Sons, Ltd.

  3. Black youth's personal involvement in the HIV/AIDS issue: does the public service announcement still work?

    PubMed

    Keys, Truman R; Morant, Kesha M; Stroman, Carolyn A

    2009-03-01

    Recent public service announcements (PSAs) directed toward Black youth utilize various formats and appeals to stimulate a motivated cognitive process that engenders personal involvement in the HIV/AIDS issue. The Elaboration Likelihood Model (ELM) by Petty and Cacioppo argues that engagement with messages that consist of substantive content causes the audience member to critically analyze the message, which can produce awareness and attitude change. An efficient way to add emphasis to the message and seize the attention of the target audience is to insert the message into an entertainment context. Our study attempted to analyze the impact of the peripheral cue, character appeal, on audience members' attitude change in response to analyzing high- and low-involvement message content. A 2 x 4 factorial design was used, with message involvement (high/low) and character appeal (White/Black and celebrity/noncelebrity) as independent variables. The findings showed that celebrity status is the salient factor, with source perception inducing attitude change as a main effect or in an interaction effect with high- and low-involvement message content.

  4. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  5. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  6. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

    An autonomous controller, comprised of a state knowledge manager, a control executor, hardware proxies and a statistical estimator collaborates with a goal elaborator, with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by the replacement of shared statistical models and by the instantiation of a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.

  7. [Exploring the Security Strategy Model in the New Hospital Mobile Clinic Mode].

    PubMed

    Li, Ke; Xia, Yong; Wang, Wei

    2016-03-01

    The paper elaborates on and analyzes the current status of mobile hospital information security, then puts forward a new security model for mobile treatment and elaborates its architecture and solutions. The use of this model further improves and enhances the overall security level of hospital information and has positive significance for promoting the overall level of hospital management.

  8. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of them are "true zeros" indicating that the drug-adverse event pairs cannot occur, and these zero counts are distinguished from the other zero counts that are modeled zero counts and simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
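
    A minimal sketch of the estimation step is given below: an expectation-maximization fit of a zero-inflated Poisson model to a single vector of counts. The counts are toy data, the starting values are ad hoc assumptions, and the sketch does not implement the paper's likelihood ratio test or stratified analyses.

```python
import numpy as np
from scipy.stats import poisson

def fit_zip_em(counts, n_iter=500, tol=1e-8):
    """EM fit of a zero-inflated Poisson: P(0) = pi + (1-pi)e^{-lam}, P(k>0) = (1-pi)Pois(k; lam)."""
    counts = np.asarray(counts, dtype=float)
    pi, lam = 0.5, max(counts.mean(), 1e-6)   # crude starting values (an assumption)
    for _ in range(n_iter):
        # E-step: probability that each observed zero is a structural ("true") zero
        z = np.where(counts == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update the mixing weight and the Poisson mean
        pi_new = z.mean()
        lam_new = ((1 - z) * counts).sum() / (1 - z).sum()
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

def zip_loglik(counts, pi, lam):
    """Observed-data log-likelihood of the zero-inflated Poisson model."""
    counts = np.asarray(counts)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + poisson.logpmf(counts, lam)
    return np.where(counts == 0, ll_zero, ll_pos).sum()

# Toy report counts with an excess of zeros
counts = [0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 0, 1, 0, 0, 0, 4, 0, 0]
pi_hat, lam_hat = fit_zip_em(counts)
print(pi_hat, lam_hat, zip_loglik(counts, pi_hat, lam_hat))
```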

  9. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  10. Characterization and quantification of grape variety by means of shikimic acid concentration and protein fingerprint in still white wines.

    PubMed

    Chabreyrie, David; Chauvet, Serge; Guyon, François; Salagoïty, Marie-Hélène; Antinelli, Jean-François; Medina, Bernard

    2008-08-27

    Protein profiles, obtained by high-performance capillary electrophoresis (HPCE) on white wines previously dialyzed, combined with shikimic acid concentration and multivariate analysis, were used for the determination of grape variety composition of a still white wine. Six varieties were studied through monovarietal wines elaborated in the laboratory: Chardonnay (24 samples), Chenin (24), Petit Manseng (7), Sauvignon (37), Semillon (24), and Ugni Blanc (9). Homemade mixtures were elaborated from authentic monovarietal wines according to a Plackett-Burman sampling plan. After protein peak area normalization, a matrix was elaborated containing protein results of wines (mixtures and monovarietal). Partial least-squares processing was applied to this matrix allowing the elaboration of a model that provided a varietal quantification precision of around 20% for most of the grape varieties studied. The model was applied to commercial samples from various geographical origins, providing encouraging results for control purposes.
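
    The sketch below illustrates the general shape of such a calibration, fitting a partial least-squares regression from protein-peak and shikimic-acid features to varietal proportions with scikit-learn. All data, dimensions, and variety "signatures" here are simulated assumptions, not the paper's HPCE measurements or sampling plan.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)

# Hypothetical training set: each row of X holds normalized protein peak areas plus a
# shikimic acid concentration for one wine; each row of Y holds the blend proportions.
n_wines, n_peaks, n_varieties = 60, 12, 6
true_profiles = rng.gamma(2.0, 1.0, (n_varieties, n_peaks + 1))  # assumed variety signatures
Y = rng.dirichlet(np.ones(n_varieties), n_wines)                 # varietal proportions per wine
X = Y @ true_profiles + rng.normal(0, 0.05, (n_wines, n_peaks + 1))

# Partial least-squares calibration from the chemical profile to the varietal composition
pls = PLSRegression(n_components=5)
pls.fit(X, Y)

# Predict the composition of a new wine; clip and renormalize so the output sums to 1
x_new = Y[:1] @ true_profiles + rng.normal(0, 0.05, (1, n_peaks + 1))
pred = np.clip(pls.predict(x_new), 0, None)
print(pred / pred.sum())
```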

  11. Likelihood Ratio Tests for Special Rasch Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2010-01-01

    In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits to the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…

  12. Differentiating the Differentiation Models: A Comparison of the Retrieving Effectively from Memory Model (REM) and the Subjective Likelihood Model (SLiM)

    ERIC Educational Resources Information Center

    Criss, Amy H.; McClelland, James L.

    2006-01-01

    The subjective likelihood model [SLiM; McClelland, J. L., & Chappell, M. (1998). Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory. "Psychological Review," 105(4), 734-760.] and the retrieving effectively from memory model [REM; Shiffrin, R. M., & Steyvers, M. (1997). A model…

  13. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) fall into the formal category. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of parameters. 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7, but likelihood function L5 may result in biased and unreliable estimation of parameters due to violation of the residual-error assumptions. Thus, likelihood function L7 provides the posterior distribution of model parameters credibly and can therefore be employed for further applications.
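
    For readers unfamiliar with the distinction, the sketch below contrasts one informal likelihood measure (Nash-Sutcliffe efficiency) with a formal log-likelihood of the L7 type (Gaussian innovations with first-order autoregressive residuals). The series are toy numbers, and the functions are simplified stand-ins rather than the paper's HEC-HMS or DREAM(ZS) setup.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Informal likelihood measure: Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ar1_gaussian_loglik(obs, sim, rho, sigma):
    """Formal log-likelihood: Gaussian innovations with AR(1) residuals,
    e_t = rho * e_{t-1} + v_t and v_t ~ N(0, sigma^2)."""
    e = np.asarray(obs, float) - np.asarray(sim, float)
    v = e[1:] - rho * e[:-1]                     # innovations
    # first residual evaluated under its stationary distribution N(0, sigma^2 / (1 - rho^2))
    ll_first = -0.5 * (np.log(2 * np.pi * sigma**2 / (1 - rho**2))
                       + e[0] ** 2 * (1 - rho**2) / sigma**2)
    ll_rest = -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + v**2 / sigma**2)
    return ll_first + ll_rest

# Toy hydrograph-like series (hypothetical numbers, not the Tamar watershed data)
obs = np.array([1.0, 3.2, 7.5, 5.1, 3.0, 2.2, 1.5])
sim = np.array([1.2, 2.9, 6.8, 5.6, 3.3, 2.0, 1.4])
print(nash_sutcliffe(obs, sim))
print(ar1_gaussian_loglik(obs, sim, rho=0.3, sigma=0.5))
```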

  14. The systematic development of ROsafe: an intervention to promote STI testing among vocational school students.

    PubMed

    Wolfers, Mireille; de Zwart, Onno; Kok, Gerjo

    2012-05-01

    This article describes the development of ROsafe, an intervention to promote sexually transmitted infection (STI) testing at vocational schools in the Netherlands. Using the planning model of intervention mapping (IM), an educational intervention was designed that consisted of two lessons, an Internet site, and sexual health services at the school sites. IM is a stepwise approach for theory- and evidence-based development and implementation of interventions. It includes six steps: needs assessment, specification of the objectives in matrices, selection of theoretical methods and practical strategies, program design, implementation planning, and evaluation. The processes and outcomes that are performed during Steps 1 to 4 of IM are presented, that is, literature review and qualitative and quantitative research in needs assessment, leading to the definition of the desired behavioral outcomes and objectives. The matrix of change objectives for STI-testing behavior is presented, and then the development of theory into program is described, using examples from the program. Finally, the planning for implementation and evaluation is discussed. The educational intervention used methods that were derived from the social cognitive theory, the elaboration likelihood model, the persuasive communication matrix, and theories about risk communication. Strategies included short movies, discussion, knowledge quiz, and an interactive behavioral self-test through the Internet.

  15. How much to trust the senses: Likelihood learning

    PubMed Central

    Sato, Yoshiyuki; Kording, Konrad P.

    2014-01-01

    Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs change over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975

  16. Maximum likelihood estimation of finite mixture model for economic data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
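
    A minimal sketch of the estimation step, fitting a two-component normal mixture by maximum likelihood via the EM algorithm, is shown below. The data are simulated two-regime values and the starting values are ad hoc assumptions; the paper's actual price series are not used.

```python
import numpy as np
from scipy.stats import norm

def fit_two_normal_mixture(x, n_iter=500, tol=1e-8):
    """Maximum likelihood fit of a two-component normal mixture via the EM algorithm."""
    x = np.asarray(x, float)
    # crude starting values (an assumption; the paper does not specify initialisation)
    w = 0.5
    mu1, mu2 = np.quantile(x, 0.25), np.quantile(x, 0.75)
    s1 = s2 = x.std()
    ll_old = -np.inf
    for _ in range(n_iter):
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        ll = np.log(p1 + p2).sum()        # observed-data log-likelihood
        if ll - ll_old < tol:
            break
        ll_old = ll
        r = p1 / (p1 + p2)                # E-step: responsibilities of component 1
        w = r.mean()                      # M-step: weighted ML updates
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    return w, (mu1, s1), (mu2, s2), ll

# hypothetical two-regime data standing in for the price series used in the paper
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-0.5, 0.4, 300), rng.normal(1.0, 0.8, 200)])
print(fit_two_normal_mixture(x))
```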

  17. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by the model's prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from the low-likelihood area to the high-likelihood area, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is desirable to incorporate a robust and efficient sampling algorithm, DREAMzs, into the local sampling of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from heavy instability. In addition, the heavy computational cost of a huge number of model executions is overcome by using adaptive sparse grid surrogates.
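
    The basic nested sampling recursion can be sketched as follows for a one-dimensional toy model. Here the likelihood-constrained local step is a simple rejection draw from the prior rather than the Metropolis-Hastings or DREAMzs samplers discussed in the abstract, and the model, prior, and tuning values are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy model: data ~ N(theta, 1) with prior theta ~ Uniform(-5, 5); the marginal
# likelihood (evidence) is Z = integral of prior(theta) * likelihood(theta) dtheta.
data = rng.normal(1.3, 1.0, 20)
lo, hi = -5.0, 5.0

def log_like(theta):
    return norm.logpdf(data, theta, 1.0).sum()

def nested_sampling(n_live=100, n_iter=600):
    live = rng.uniform(lo, hi, n_live)
    live_ll = np.array([log_like(t) for t in live])
    log_z, log_x_prev = -np.inf, 0.0             # running evidence and log prior volume
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        log_x = -i / n_live                      # expected remaining log prior volume
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        log_z = np.logaddexp(log_z, live_ll[worst] + log_w)
        # local step: draw a new prior point above the likelihood threshold
        # (simple rejection here; the paper argues this step drives NSE efficiency)
        while True:
            cand = rng.uniform(lo, hi)
            cand_ll = log_like(cand)
            if cand_ll > live_ll[worst]:
                break
        live[worst], live_ll[worst] = cand, cand_ll
        log_x_prev = log_x
    # remaining live points account for the leftover prior volume
    log_remainder = log_x_prev - np.log(n_live) + np.logaddexp.reduce(live_ll)
    return np.logaddexp(log_z, log_remainder)

# Brute-force check of log Z by numerical integration over the prior
thetas = np.linspace(lo, hi, 4001)
log_integrand = np.array([log_like(t) for t in thetas]) - np.log(hi - lo)
m = log_integrand.max()
log_z_quad = m + np.log(np.exp(log_integrand - m).sum() * (thetas[1] - thetas[0]))
print(nested_sampling(), log_z_quad)
```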

  18. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
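
    A hedged sketch of one such rule is shown below: a biased-coin up-and-down walk that tends to center dose allocations near a target toxicity quantile, followed by a two-parameter logistic maximum likelihood fit of the dose-response curve. The dose grid, true curve, target quantile, and sample size are all hypothetical choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

rng = np.random.default_rng(3)

# Hypothetical dose grid and true toxicity curve (two-parameter logistic)
doses = np.arange(1, 9, dtype=float)          # dose levels 1..8
a_true, b_true = -6.0, 1.0                    # assumed true logistic parameters
p_tox = expit(a_true + b_true * doses)

gamma = 0.3                                   # target toxicity quantile
step_up_prob = gamma / (1.0 - gamma)          # biased-coin probability that targets gamma

def run_trial(n_patients=60):
    """Biased-coin up-and-down random walk over the dose levels."""
    level, x, y = 0, [], []
    for _ in range(n_patients):
        x.append(doses[level])
        tox = rng.random() < p_tox[level]
        y.append(float(tox))
        if tox:
            level = max(level - 1, 0)                      # step down after a toxicity
        elif rng.random() < step_up_prob:
            level = min(level + 1, len(doses) - 1)         # step up with prob gamma/(1-gamma)
        # otherwise stay at the current level
    return np.array(x), np.array(y)

def logistic_mle(x, y):
    """Maximum likelihood fit of P(toxicity) = expit(a + b * dose)."""
    def nll(theta):
        a, b = theta
        p = np.clip(expit(a + b * x), 1e-12, 1 - 1e-12)
        return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
    return minimize(nll, x0=np.array([0.0, 0.5]), method="Nelder-Mead").x

x, y = run_trial()
a_hat, b_hat = logistic_mle(x, y)
print("estimated target dose:", (logit(gamma) - a_hat) / b_hat)
print("true target dose:     ", (logit(gamma) - a_true) / b_true)
```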

  19. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  1. Contribution of geographic information to the development of an urban development indicator: Abidjan and Montreal Island

    NASA Astrophysics Data System (ADS)

    Zoro, Emma-Georgina

    The objective of this project is to carry out a comparative analysis of two urban environments with remote sensing and Geographic Information Systems, integrating multi-source data. The city of Abidjan (Cote d'Ivoire) and Montreal Island (Quebec) were selected. This study lies within the context of the strong demographic and spatial growth of urban environments. A supervised classification based on the theory of evidence allowed the identification of mixed pixels. However, the accuracy of this method is lower than that of the Bayesian approach. Nevertheless, this method showed that the most credible classes (maximum beliefs in a "closed world") are the most probable (maximum probabilities), and thus confirms the Bayesian maximum-likelihood decision. On the other hand, the converse is not necessarily true because of the rules of combination. The urban cover map resulting from classification by the maximum likelihood method was then used to determine a relation between the residential surface and the number of inhabitants in a sector. Moreover, the area of green spaces was an input (environmental component) for the Urban Development Indicator (IDU), the model elaborated for quantifying the quality of life in urban environments. This indicator was defined to allow a complete and efficient comparison of urban environments. Following a thorough bibliographical review, seven criteria were retained to describe the optimal conditions for the population's well-being. These criteria were then estimated from standardized indices. The choice of these criteria is a function of the availability of the data to be integrated into the GIS. As the selected criteria do not have the same importance in the definition of the quality of urban life, they needed to be ranked by a multicriteria hierarchy method and normalized in order to combine them into a single parameter. The composite IDU indicator thus obtained made it possible to establish that Abidjan had average development in 1995, while Montreal Island had strong urban development. Moreover, the comparison of the IDUs reveals needs for health and educational facilities in Abidjan. In addition, from 1989 to 1995 Abidjan developed, while Montreal Island showed a slightly decreasing IDU between 1991 and 1996. These assertions are confirmed by studies carried out on these urban communities and validate the relevance of the IDU for quantifying and comparing urban development. Such work can be used by decision makers to establish urban policies for sustainable development.

  2. Hurdle models for multilevel zero-inflated data via h-likelihood.

    PubMed

    Molas, Marek; Lesaffre, Emmanuel

    2010-12-30

    Count data often exhibit overdispersion. One type of overdispersion arises when there is an excess of zeros in comparison with the standard Poisson distribution. Zero-inflated Poisson and hurdle models have been proposed to perform a valid likelihood-based analysis to account for the surplus of zeros. Further, data often arise in clustered, longitudinal or multiple-membership settings. The proper analysis needs to reflect the design of a study. Typically random effects are used to account for dependencies in the data. We examine the h-likelihood estimation and inference framework for hurdle models with random effects for complex designs. We extend the h-likelihood procedures to fit hurdle models, thereby extending h-likelihood to truncated distributions. Two applications of the methodology are presented. Copyright © 2010 John Wiley & Sons, Ltd.
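
    As a simplified illustration of the two-part structure only (without random effects or the h-likelihood machinery), the sketch below fits a plain hurdle Poisson model by maximum likelihood: a Bernoulli part for the zeros and a zero-truncated Poisson part for the positive counts. The counts are toy data.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def fit_hurdle_poisson(y):
    """Single-level hurdle Poisson fit by maximum likelihood:
    a Bernoulli part for zeros and a zero-truncated Poisson part for positive counts."""
    y = np.asarray(y)
    p_zero = np.mean(y == 0)                 # ML estimate of the hurdle (zero) probability
    pos = y[y > 0]

    def nll(lam):
        # zero-truncated Poisson log-likelihood for the positive counts
        ll = pos * np.log(lam) - lam - gammaln(pos + 1) - np.log1p(-np.exp(-lam))
        return -ll.sum()

    lam_hat = minimize_scalar(nll, bounds=(1e-6, pos.mean() * 2 + 5), method="bounded").x
    return p_zero, lam_hat

# Toy counts with an excess of zeros
y = [0, 0, 0, 1, 0, 2, 0, 0, 4, 1, 0, 0, 3, 0, 0, 1, 2, 0, 0, 0]
print(fit_hurdle_poisson(y))
```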

  3. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    EPA Science Inventory

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  4. Term-Weighting Approaches in Automatic Text Retrieval.

    ERIC Educational Resources Information Center

    Salton, Gerard; Buckley, Christopher

    1988-01-01

    Summarizes the experimental evidence that indicates that text indexing systems based on the assignment of appropriately weighted single terms produce retrieval results superior to those obtained with more elaborate text representations, and provides baseline single term indexing models with which more elaborate content analysis procedures can be…

  5. Elaborating on Threshold Concepts

    ERIC Educational Resources Information Center

    Rountree, Janet; Robins, Anthony; Rountree, Nathan

    2013-01-01

    We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account…

  6. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
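
    The compression step described above (one summary per parameter) can be illustrated in its simplest setting. The sketch below applies score-style compression to a toy linear-Gaussian model, where the compressed statistics are lossless and recover the maximum-likelihood estimate; it is a generic illustration of the idea, not the authors' pipeline, and all names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: d = A @ theta + noise, with known noise covariance C.
n_data, n_par = 200, 2
A = rng.normal(size=(n_data, n_par))
C = 0.5 * np.eye(n_data)
Cinv = np.linalg.inv(C)
theta_true = np.array([1.0, -0.5])
d = A @ theta_true + rng.multivariate_normal(np.zeros(n_data), C)

# Score compression about a fiducial point theta_0: for a Gaussian likelihood with
# mean mu(theta) = A @ theta, t = (dmu/dtheta)^T C^{-1} (d - mu(theta_0)) gives one
# summary per parameter and, in this linear case, loses no Fisher information.
theta_0 = np.zeros(n_par)
t = A.T @ Cinv @ (d - A @ theta_0)      # compressed summaries, shape (2,)
F = A.T @ Cinv @ A                       # Fisher information matrix

# For the linear-Gaussian case the compressed summaries recover the MLE exactly.
theta_mle = np.linalg.solve(F, t) + theta_0
print(t.shape, theta_mle)
```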

  7. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    ERIC Educational Resources Information Center

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  8. Semiotic Mediation within an AT Frame

    ERIC Educational Resources Information Center

    Maracci, Mirko; Mariotti, Maria Alessandra

    2013-01-01

    This article is meant to present a specific elaboration of the notion of mediation in relation to the use of artefacts to enhance mathematics teaching and learning: the elaboration offered by the Theory of Semiotic Mediation. In particular, it provides an explicit model--consistent with the activity-actions-operations framework--of the actions…

  9. Risk prediction and aversion by anterior cingulate cortex.

    PubMed

    Brown, Joshua W; Braver, Todd S

    2007-12-01

    The recently proposed error-likelihood hypothesis suggests that anterior cingulate cortex (ACC) and surrounding areas will become active in proportion to the perceived likelihood of an error. The hypothesis was originally derived from a computational model prediction. The same computational model now makes a further prediction that ACC will be sensitive not only to predicted error likelihood, but also to the predicted magnitude of the consequences, should an error occur. The product of error likelihood and predicted error consequence magnitude collectively defines the general "expected risk" of a given behavior in a manner analogous but orthogonal to subjective expected utility theory. New fMRI results from an incentive change signal task now replicate the error-likelihood effect, validate the further predictions of the computational model, and suggest why some segments of the population may fail to show an error-likelihood effect. In particular, error-likelihood effects and expected risk effects in general indicate greater sensitivity to earlier predictors of errors and are seen in risk-averse but not risk-tolerant individuals. Taken together, the results are consistent with an expected risk model of ACC and suggest that ACC may generally contribute to cognitive control by recruiting brain activity to avoid risk.

  10. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity, and hence asymptotically unbiased, and the resulting parameter estimates have the smallest variance among the compared statistical methods as the sample size grows. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
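
    A minimal sketch of how a two-component mixture is typically fitted by maximum likelihood, using the EM algorithm on simulated Gaussian data; the rubber-price and exchange-rate series analysed in the paper are not reproduced, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Simulated data from two Gaussian components (parameters are illustrative).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

# EM for a two-component Gaussian mixture, maximising the likelihood iteratively.
pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each observation.
    p1 = pi * norm.pdf(x, mu[0], sd[0])
    p2 = (1 - pi) * norm.pdf(x, mu[1], sd[1])
    r = p1 / (p1 + p2)
    # M-step: weighted updates of the mixing proportion, means and standard deviations.
    pi = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd = np.array([np.sqrt(np.average((x - mu[0])**2, weights=r)),
                   np.sqrt(np.average((x - mu[1])**2, weights=1 - r))])

loglik = np.sum(np.log(pi * norm.pdf(x, mu[0], sd[0])
                       + (1 - pi) * norm.pdf(x, mu[1], sd[1])))
print(pi, mu, sd, loglik)
```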

  11. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

    Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to explore them easily even with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to simultaneously investigate the adequacy of these models in absolute terms, against the data, rather than relative to the performance of other models, but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
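
    For readers unfamiliar with the likelihood-free setting, the following is a minimal rejection-ABC sketch on a toy model whose likelihood is in fact tractable (a Poisson rate estimated from a sample-mean summary); the error-augmented ABC-under-model-uncertainty approach of the paper goes well beyond this, and the prior, tolerance and summary choice here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observed data from a model treated as "intractable" (here just Poisson, for illustration).
y_obs = rng.poisson(4.0, size=100)
s_obs = y_obs.mean()                     # summary statistic

def simulate(lam, n=100):
    """Forward-simulate a data set and reduce it to the same summary statistic."""
    return rng.poisson(lam, size=n).mean()

# Rejection ABC: draw from the prior, simulate, and keep draws whose summaries
# fall within a tolerance eps of the observed summary.
n_draws, eps = 50_000, 0.1
lam_prior = rng.uniform(0.0, 10.0, size=n_draws)
accepted = np.array([lam for lam in lam_prior
                     if abs(simulate(lam) - s_obs) < eps])
print(len(accepted), accepted.mean(), np.percentile(accepted, [2.5, 97.5]))
```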

  12. The Augmented Cognitive Mediation Model: Examining Antecedents of Factual and Structural Breast Cancer Knowledge Among Singaporean Women.

    PubMed

    Lee, Edmund W J; Shin, Mincheol; Kawaja, Ariffin; Ho, Shirley S

    2016-05-01

    As knowledge acquisition is an important component of health communication research, this study examines factors associated with Singaporean women's breast cancer knowledge using an augmented cognitive mediation model. We conducted a nationally representative study that surveyed 802 women between the ages of 30 and 70 using random-digit dialing. The results supported the augmented cognitive mediation model, which proposes the inclusion of risk perception as a motivator of health information seeking and structural knowledge as an additional knowledge dimension. There was adequate support for the hypothesized paths in the model. Risk perception was positively associated with attention to newspaper, television, Internet, and interpersonal communication. Attention to the three media channels was associated with interpersonal communication, but only newspaper and television attention were associated with elaboration. Interpersonal communication was positively associated with structural knowledge, whereas elaboration was associated with both factual and structural knowledge. Differential indirect effects between media attention and knowledge dimensions via interpersonal communication and elaboration were found. Theoretical and practical implications are discussed.

  13. Examining an Elaborated Sociocultural Model of Disordered Eating Among College Women: The Roles of Social Comparison and Body Surveillance

    PubMed Central

    Fitzsimmons-Craft, Ellen E.; Bardone-Cone, Anna M.; Bulik, Cynthia M.; Wonderlich, Stephen A.; Crosby, Ross D.; Engel, Scott G.

    2014-01-01

    Social comparison (i.e., body, eating, exercise) and body surveillance were tested as mediators of the thin-ideal internalization-body dissatisfaction relationship in the context of an elaborated sociocultural model of disordered eating. Participants were 219 college women who completed two questionnaire sessions 3 months apart. The cross-sectional elaborated sociocultural model (i.e., including social comparison and body surveillance as mediators of the thin-ideal internalization-body dissatisfaction relation) provided a good fit to the data, and the total indirect effect from thin-ideal internalization to body dissatisfaction through the mediators was significant. Social comparison emerged as a significant specific mediator while body surveillance did not. The mediation model did not hold prospectively; however, social comparison accounted for unique variance in body dissatisfaction and disordered eating 3 months later. Results suggest that thin-ideal internalization may not be “automatically” associated with body dissatisfaction and that it may be especially important to target comparison in prevention and intervention efforts. PMID:25160010

  14. A general methodology for maximum likelihood inference from band-recovery data

    USGS Publications Warehouse

    Conroy, M.J.; Williams, B.K.

    1984-01-01

    A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.

  15. In “Step” with HIV Vaccines? A Content Analysis of Local Recruitment Campaigns for an International HIV Vaccine Study

    PubMed Central

    Frew, Paula M.; Macias, Wendy; Chan, Kayshin; Harding, Ashley C.

    2009-01-01

    During the past two decades of the HIV/AIDS pandemic, several recruitment campaigns were designed to generate community involvement in preventive HIV vaccine clinical trials. These efforts utilized a blend of advertising and marketing strategies mixed with public relations and community education approaches to attract potential study participants to clinical trials (integrated marketing communications). Although more than 30,000 persons worldwide have participated in preventive HIV vaccine studies, no systematic analysis of recruitment campaigns exists. This content analysis study was conducted to examine several United States and Canadian recruitment campaigns for one of the largest-scale HIV vaccine trials to date (the “Step Study”). This study examined persuasive features consistent with the Elaboration Likelihood Model (ELM) including message content, personal relevance of HIV/AIDS and vaccine research, intended audiences, information sources, and other contextual features. The results indicated variation in messages and communication approaches with gay men more exclusively targeted in these regions. Racial/ethnic representations also differed by campaign. Most of the materials promote affective evaluation of the information through heuristic cueing. Implications for subsequent campaigns and research directions are discussed. PMID:19609373

  16. A Spiritually-based approach to breast cancer awareness: Cognitive response analysis of communication effectiveness

    PubMed Central

    Holt, Cheryl L.; Lee, Crystal; Wright, Katrina

    2017-01-01

    The purpose of this study was to compare the communication effectiveness of a spiritually-based approach to breast cancer early detection education with a secular approach, among African American women, by conducting a cognitive response analysis. A total of 108 women from six Alabama churches were randomly assigned by church to receive a spiritually-based or secular educational booklet discussing breast cancer early detection. Based on the Elaboration Likelihood Model (Petty & Cacioppo, 1981), after reading the booklets participants were asked to complete a thought-listing task, writing down any thoughts they experienced and rating them as positive, negative, or neutral. Two independent coders then used five dimensions to code participants' thoughts. Compared with the secular booklet, the spiritually-based booklet resulted in significantly more thoughts involving personal connection, self-assessment, and spiritually-based responses. These results suggest that a spiritually-based approach to breast cancer awareness may be more effective than the secular approach because it caused women to more actively process the message, stimulating central route processing. The incorporation of spiritually-based content into church-based breast cancer education could be a promising health communication approach for African American women. PMID:18443989

  17. Application of Human Augmentics: A Persuasive Asthma Inhaler.

    PubMed

    Grossman, Brent; Conner, Steve; Mosnaim, Giselle; Albers, Joshua; Leigh, Jason; Jones, Steve; Kenyon, Robert

    2017-03-01

    This article describes a tailored health intervention delivered on a mobile phone platform, integrating low-literacy design strategies and basic principles of behavior change, to promote increased adherence and asthma control among underserved minority adolescents. We based the intervention and design principles on theories of Human Augmentics and the Elaboration Likelihood Model. We tested the efficacy of using electronic monitoring devices that incorporate informative and persuasive elements to improve adherence to a prescribed daily medication regimen intended to reduce use of asthma rescue medications. We describe the theoretical framework, hardware and software systems, and results of user testing for design purposes and a clinical pilot study incorporating use of the device and software by the targeted population. The results of the clinical pilot study showed an 83% completion rate for the treatment as well as improved adherence. Of note, 8% and 58% of participants achieved clinically significant adherence targets at baseline and last week of the study, respectively. Rescue asthma medication use decreased from a median of 3 puffs per week at baseline to 0 puffs per week during the last week of the study. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. In "Step" with HIV Vaccines? A Content Analysis of Local Recruitment Campaigns for an International HIV Vaccine Study.

    PubMed

    Frew, Paula M; Macias, Wendy; Chan, Kayshin; Harding, Ashley C

    2009-01-01

    During the past two decades of the HIV/AIDS pandemic, several recruitment campaigns were designed to generate community involvement in preventive HIV vaccine clinical trials. These efforts utilized a blend of advertising and marketing strategies mixed with public relations and community education approaches to attract potential study participants to clinical trials (integrated marketing communications). Although more than 30,000 persons worldwide have participated in preventive HIV vaccine studies, no systematic analysis of recruitment campaigns exists. This content analysis study was conducted to examine several United States and Canadian recruitment campaigns for one of the largest-scale HIV vaccine trials to date (the "Step Study"). This study examined persuasive features consistent with the Elaboration Likelihood Model (ELM) including message content, personal relevance of HIV/AIDS and vaccine research, intended audiences, information sources, and other contextual features. The results indicated variation in messages and communication approaches with gay men more exclusively targeted in these regions. Racial/ethnic representations also differed by campaign. Most of the materials promote affective evaluation of the information through heuristic cueing. Implications for subsequent campaigns and research directions are discussed.

  19. Exposure to digital marketing enhances young adults’ interest in energy drinks: An exploratory investigation

    PubMed Central

    Buchanan, Limin; Kelly, Bridget; Yeatman, Heather

    2017-01-01

    Young adults experience faster weight gain and consume more unhealthy food than any other age group. The impact of online food marketing on “digital native” young adults is unclear. This study examined the effects of online marketing on young adults’ consumption behaviours, using energy drinks as a case example. The elaboration likelihood model of persuasion was used as the theoretical basis. A pre-test post-test experimental research design was adopted using mixed-methods. Participants (aged 18–24) were randomly assigned to control or experimental groups (N = 30 each). Experimental group participants’ attitudes towards and intended purchase and consumption of energy drinks were examined via surveys and semi-structured interviews after their exposure to two popular energy drink brands’ websites and social media sites (exposure time 8 minutes). Exposure to digital marketing contents of energy drinks improved the experimental group participants’ attitudes towards and purchase and consumption intention of energy drinks. This study indicates the influential power of unhealthy online marketing on cognitively mature young adults. This study draws public health attention to young adults, who to date have been less of a focus of researchers but are influenced by online food advertising. PMID:28152016

  20. Exposure to digital marketing enhances young adults' interest in energy drinks: An exploratory investigation.

    PubMed

    Buchanan, Limin; Kelly, Bridget; Yeatman, Heather

    2017-01-01

    Young adults experience faster weight gain and consume more unhealthy food than any other age group. The impact of online food marketing on "digital native" young adults is unclear. This study examined the effects of online marketing on young adults' consumption behaviours, using energy drinks as a case example. The elaboration likelihood model of persuasion was used as the theoretical basis. A pre-test post-test experimental research design was adopted using mixed-methods. Participants (aged 18-24) were randomly assigned to control or experimental groups (N = 30 each). Experimental group participants' attitudes towards and intended purchase and consumption of energy drinks were examined via surveys and semi-structured interviews after their exposure to two popular energy drink brands' websites and social media sites (exposure time 8 minutes). Exposure to digital marketing contents of energy drinks improved the experimental group participants' attitudes towards and purchase and consumption intention of energy drinks. This study indicates the influential power of unhealthy online marketing on cognitively mature young adults. This study draws public health attention to young adults, who to date have been less of a focus of researchers but are influenced by online food advertising.

  1. Airflow and Particle Transport Through Human Airways: A Systematic Review

    NASA Astrophysics Data System (ADS)

    Kharat, S. B.; Deoghare, A. B.; Pandey, K. M.

    2017-08-01

    This paper reviews the relevant literature on two-phase analysis of air and particle flow through human airways. Emphasis is placed on elaborating the steps involved in two-phase analysis, namely geometric modelling methods and mathematical models. The first two parts describe the various approaches followed for constructing an airway model upon which analyses are conducted: the two broad categories of geometric modelling, simplified modelling and accurate modelling using medical scans, are discussed briefly, covering the ease and limitations of simplified models and examples of CT-based models. The later part of the review briefly describes the different mathematical models implemented by researchers for the analysis; the mathematical models used for the air and particle phases are elaborated separately.

  2. Stratification, Elaboration and Formalisation of Design Documents: Effects on the Production of Instructional Materials

    ERIC Educational Resources Information Center

    Boot, Eddy W.; Nelson, Jon; van Merrienboer, Jeroen J. G.; Gibbons, Andrew S.

    2007-01-01

    Designers and producers of instructional materials lack a common design language. As a result, producers have difficulties translating design documents into technical specifications. The 3D-model is introduced to improve the stratification, elaboration and formalisation of design documents. It is hypothesised that producers working with improved…

  3. Development and Validation of Two Scales to Measure Elaboration and Behaviors Associated with Stewardship in Children

    ERIC Educational Resources Information Center

    Vezeau, Susan Lynn; Powell, Robert B.; Stern, Marc J.; Moore, D. DeWayne; Wright, Brett A.

    2017-01-01

    This investigation examines the development of two scales that measure elaboration and behaviors associated with stewardship in children. The scales were developed using confirmatory factor analysis to investigate their construct validity, reliability, and psychometric properties. Results suggest that a second-order factor model structure provides…

  4. The Role of Elaboration in the Comprehension and Retention of Prose: A Critical Review.

    ERIC Educational Resources Information Center

    Reder, Lynne M.

    1980-01-01

    Recent research in the area of prose comprehension is reviewed, including factors that affect amount of recall, representations of text structures, and use of world knowledge to aid comprehension. The need for more information processing models of comprehension is emphasized. Elaboration is considered important for comprehension and retention.…

  5. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
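
    A minimal sketch of the two estimators compared above, on a conjugate normal toy model where the exact log marginal likelihood is available for checking: thermodynamic integration over a grid of power posteriors versus the harmonic mean of posterior likelihoods. The model, temperature schedule and sample sizes are illustrative assumptions, not the structured population models evaluated in MIGRATE.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

rng = np.random.default_rng(4)
n = 50
x = rng.normal(1.0, 1.0, n)          # data: x_i ~ N(theta, 1), with prior theta ~ N(0, 1)

def loglik(theta):
    """Vectorised log-likelihood of the data for an array of theta values."""
    theta = np.atleast_1d(theta)
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1))

def sample_power_posterior(beta, size):
    """The power posterior prior(theta) * L(theta)^beta is conjugate normal here."""
    prec = 1.0 + n * beta
    return rng.normal(beta * x.sum() / prec, 1.0 / np.sqrt(prec), size)

# Thermodynamic integration: log Z = integral over beta in [0, 1] of E_beta[log L].
betas = np.linspace(0.0, 1.0, 31)
e_loglik = np.array([loglik(sample_power_posterior(b, 5000)).mean() for b in betas])
logZ_ti = np.sum(np.diff(betas) * (e_loglik[:-1] + e_loglik[1:]) / 2.0)

# Harmonic mean estimator, computed from posterior (beta = 1) samples.
ll_post = loglik(sample_power_posterior(1.0, 5000))
logZ_hm = np.log(ll_post.size) - logsumexp(-ll_post)

# Exact log marginal likelihood for this conjugate model, for reference.
logZ_exact = multivariate_normal(np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(x)
print(logZ_exact, logZ_ti, logZ_hm)
```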

  6. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.

  7. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157

  8. Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders

    2007-01-01

    Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…

  9. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normal distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
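
    A generic sketch of the profile-likelihood idea used above, applied to a simple nonlinear regression (exponential decay) rather than the PDE-constrained diffusion model of the paper: the nuisance parameters are re-optimised at each fixed value of the parameter of interest, and an approximate 95% interval is read off the profile. The model, data and thresholds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.8 * t) + rng.normal(0, 0.1, t.size)   # simulated decay data

def negloglik(params):
    """Gaussian negative log-likelihood for y = a * exp(-k t) + noise(sigma)."""
    a, k, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - a * np.exp(-k * t)
    return 0.5 * np.sum(resid**2 / sigma**2) + t.size * log_sigma

full_fit = minimize(negloglik, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")

def profile_negloglik(k_fixed):
    """Profile out the nuisance parameters (a, sigma) for a fixed decay rate k."""
    obj = lambda p: negloglik([p[0], k_fixed, p[1]])
    return minimize(obj, x0=[full_fit.x[0], full_fit.x[2]], method="Nelder-Mead").fun

k_grid = np.linspace(0.5, 1.1, 25)
profile = np.array([profile_negloglik(k) for k in k_grid])
# Approximate 95% profile-likelihood interval: grid points within
# chi2_{1, 0.95} / 2 = 1.92 of the minimum of the profile.
inside = k_grid[profile - profile.min() <= 1.92]
print(full_fit.x[1], inside.min(), inside.max())
```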

  10. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2 × 10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
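
    A minimal sketch of the forecasting-and-testing loop described above, with fixed-bandwidth Gaussian smoothing of a learning catalogue and a cell-wise Poisson joint log-likelihood of a test catalogue; the adaptive smoothing and formal L-test procedure of the paper are not reproduced, and the catalogues, grid and bandwidths below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical epicentres (x, y) for a "learning" and a "test" catalogue.
learn = rng.normal([0.0, 0.0], [1.0, 1.0], size=(300, 2))
test = rng.normal([0.0, 0.0], [1.0, 1.0], size=(40, 2))

# Grid over the study region.
edges = np.linspace(-4, 4, 41)
xc = 0.5 * (edges[:-1] + edges[1:])
gx, gy = np.meshgrid(xc, xc, indexing="ij")
cell_area = (edges[1] - edges[0]) ** 2

def smoothed_rate(events, sigma, n_forecast):
    """Fixed-bandwidth Gaussian kernel rate on the grid, scaled to n_forecast events."""
    d2 = (gx[..., None] - events[:, 0]) ** 2 + (gy[..., None] - events[:, 1]) ** 2
    dens = np.exp(-0.5 * d2 / sigma**2).sum(axis=-1)
    dens = dens / (dens.sum() * cell_area)       # normalise to a spatial density
    return n_forecast * dens * cell_area          # expected counts per cell

def poisson_loglik(rate, events):
    """Joint Poisson log-likelihood of observed cell counts given forecast rates."""
    ix = np.clip(np.digitize(events[:, 0], edges) - 1, 0, len(xc) - 1)
    iy = np.clip(np.digitize(events[:, 1], edges) - 1, 0, len(xc) - 1)
    counts = np.zeros_like(rate)
    np.add.at(counts, (ix, iy), 1)
    return np.sum(counts * np.log(rate) - rate)   # up to the log(n!) constant

for sigma in (0.2, 0.5, 1.0):                      # compare smoothing distances
    rate = smoothed_rate(learn, sigma, n_forecast=len(test))
    print(sigma, poisson_loglik(rate, test))
```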

  11. A smartphone app to communicate child passenger safety: an application of theory to practice.

    PubMed

    Gielen, A C; McDonald, E M; Omaki, E; Shields, W; Case, J; Aitken, M

    2015-10-01

    Child passenger safety remains an important public health problem because motor vehicle crashes are the leading cause of death for children, and the majority of children ride improperly restrained. Using a mobile app to communicate with parents about injury prevention offers promise but little information is available on how to create such a tool. The purpose of this article is to illustrate a theory-based approach to developing a tailored, smartphone app for communicating child passenger safety information to parents. The theoretical basis for the tailoring is the elaboration likelihood model, and we utilized the precaution adoption process model (PAPM) to reflect the stage-based nature of behavior change. We created assessment items (written at ≤6th grade reading level) to determine the child's proper type of car seat, the parent's PAPM stage and beliefs on selected constructs designed to facilitate stage movement according to the theory. A message library and template were created to provide a uniform structure for the tailored feedback. We demonstrate how messages derived in this way can be delivered through new m-health technology and conclude with recommendations for the utility of the methods used here for other m-health, patient education interventions. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  12. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.

  13. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  14. Tests for detecting overdispersion in models with measurement error in covariates.

    PubMed

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
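
    For context, the sketch below implements a classical score statistic for extra-Poisson variation in the intercept-only case with error-free covariates; the paper's modified score test and its approximate- and quasi-likelihood-based tests, which correct for covariate measurement error, build on this type of statistic but are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def overdispersion_score_test(y):
    """Score statistic for extra-Poisson variation (intercept-only Poisson model)."""
    mu = y.mean()                                  # Poisson MLE of the mean
    t = np.sum((y - mu) ** 2 - y) / np.sqrt(2 * y.size * mu ** 2)
    return t, norm.sf(t)                           # one-sided p-value

# Equidispersed Poisson data versus overdispersed negative-binomial data.
y_pois = rng.poisson(5.0, 400)
y_nb = rng.negative_binomial(2, 2 / 7, 400)       # mean 5, variance 17.5
print(overdispersion_score_test(y_pois))
print(overdispersion_score_test(y_nb))
```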

  15. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  16. Teaching and learning grade 7 science concepts by elaborate analogies: Mainstream and East and South Asian ESL students' experiences

    NASA Astrophysics Data System (ADS)

    Kim, Judy Joo-Hyun

    This study explored the effectiveness of an instructional tool, elaborate analogy, in teaching the particle theory to both Grade 7 mainstream and East or South Asian ESL students. Ten Grade 7 science classes from five different schools in a large school district in the Greater Toronto area participated. Each of the ten classes was designated as either Group X or Y. Using a quasi-experimental counterbalanced design, Group X students were taught one science unit using the elaborate analogies, while Group Y students were taught by their teachers' usual methods of teaching. The instructional methods used for Group X and Y were interchanged for the subsequent science unit. Quantitative data were collected from 95 students (50 mainstream and 45 ESL) by means of a posttest and a follow-up test for each of the units. When the differences between mainstream and East or South Asian ESL students were analyzed, the results indicate that both groups scored higher on the posttests when they were instructed with elaborate analogies, and that the difference between the two groups was not significant. That is, the ESL students, as well as the mainstream students, benefited academically when they were instructed with the elaborate analogies. The students obtained higher inferential scores on the posttest when their teacher connected the features of less familiar and more abstract scientific concepts to the features of the familiar and easy-to-visualize concept of school dances. However, after two months, the students were unable to recall inferential content knowledge. This is perhaps due to the lack of opportunity for the students to represent and test their initial mental models. Rather than merely employing elaborate analogies, perhaps, science teachers can supplement the use of elaborate analogies with explicit guidance in helping students to represent and test the coherence of their mental models.

  17. Multiple robustness in factorized likelihood models.

    PubMed

    Molina, J; Rotnitzky, A; Sued, M; Robins, J M

    2017-09-01

    We consider inference under a nonparametric or semiparametric model with likelihood that factorizes as the product of two or more variation-independent factors. We are interested in a finite-dimensional parameter that depends on only one of the likelihood factors and whose estimation requires the auxiliary estimation of one or several nuisance functions. We investigate general structures conducive to the construction of so-called multiply robust estimating functions, whose computation requires postulating several dimension-reducing models but which have mean zero at the true parameter value provided one of these models is correct.

  18. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  19. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped into the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG-based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood and by 34% compared to random classification. We found that the original DRG, coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
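
    A minimal sketch of penalised (posterior-mode) logistic regression under independent weakly informative normal priors, which approximates the Bayesian point estimates described above; a full Bayesian fit would sample the posterior (e.g. by MCMC), and the simulated features below stand in for the episode-level predictors (original DRG, coder, day of coding) rather than reproducing them.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)
# Simulated stand-in for episode features; the audit data are not reproduced here.
n, p = 1000, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-2.0, 1.0, 0.5, 0.0])
y = rng.binomial(1, expit(X @ beta_true))        # 1 = DRG requires revision

prior_sd = 2.5                                    # weakly informative Normal(0, 2.5^2) prior

def neg_log_posterior(beta):
    """Bernoulli log-likelihood plus Normal log-prior (up to additive constants)."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    logprior = -0.5 * np.sum(beta ** 2) / prior_sd ** 2
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, np.zeros(p), method="BFGS")
print("posterior mode:", fit.x)
print("predicted revision probabilities:", expit(X[:5] @ fit.x))
```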

  20. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury, and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
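
    The key computational device described above, writing the high-dimensional integral as a multivariate normal cumulative distribution function, can be illustrated directly: the sketch below evaluates a correlated-normal orthant probability with scipy's MVN CDF and checks it by brute-force Monte Carlo. The dimension and correlation structure are illustrative, not those of the psoriatic arthritis model.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)

# An integral of the form P(Z_1 < a_1, ..., Z_d < a_d) for correlated normals:
# rather than integrating numerically over d dimensions, evaluate it as an MVN CDF.
d, rho = 6, 0.4
cov = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)   # exchangeable correlation
a = np.linspace(-0.5, 1.0, d)

mvn = multivariate_normal(mean=np.zeros(d), cov=cov)
p_cdf = mvn.cdf(a)

# Brute-force Monte Carlo check of the same orthant probability.
z = rng.multivariate_normal(np.zeros(d), cov, size=200_000)
p_mc = np.mean(np.all(z < a, axis=1))
print(p_cdf, p_mc)
```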

  2. Physiological control of elaborate male courtship: Female choice for neuromuscular systems

    PubMed Central

    Fusani, Leonida; Barske, Julia; Day, Lainy D.; Fuxjager, Matthew J.; Schlinger, Barney A.

    2015-01-01

    Males of many animal species perform specialized courtship behaviours to gain copulations with females. Identifying physiological and anatomical specializations underlying performance of these behaviours helps clarify mechanisms through which sexual selection promotes the evolution of elaborate courtship. Our knowledge about neuromuscular specializations that support elaborate displays is limited to a few model species. In this review, we focus on the physiological control of the courtship of a tropical bird, the golden-collared manakin, which has been the focus of our research for nearly 20 years. Male manakins perform physically elaborate courtship displays that are quick, accurate and powerful. Females seem to choose males based on their motor skills suggesting that neuromuscular specializations possessed by these males are driven by female choice. Male courtship is activated by androgens and androgen receptors are expressed in qualitatively and quantitatively unconventional ways in manakin brain, spinal cord and skeletal muscles. We propose that in some species, females select males based on their neuromuscular capabilities and acquired skills and that elaborate steroid-dependent courtship displays evolve to signal these traits. PMID:25086380

  3. Genealogical Working Distributions for Bayesian Model Testing with Phylogenetic Uncertainty

    PubMed Central

    Baele, Guy; Lemey, Philippe; Suchard, Marc A.

    2016-01-01

    Marginal likelihood estimates to compare models using Bayes factors frequently accompany Bayesian phylogenetic inference. Approaches to estimate marginal likelihoods have garnered increased attention over the past decade. In particular, the introduction of path sampling (PS) and stepping-stone sampling (SS) into Bayesian phylogenetics has tremendously improved the accuracy of model selection. These sampling techniques are now used to evaluate complex evolutionary and population genetic models on empirical data sets, but considerable computational demands hamper their widespread adoption. Further, when very diffuse, but proper priors are specified for model parameters, numerical issues complicate the exploration of the priors, a necessary step in marginal likelihood estimation using PS or SS. To avoid such instabilities, generalized SS (GSS) has recently been proposed, introducing the concept of “working distributions” to facilitate—or shorten—the integration process that underlies marginal likelihood estimation. However, the need to fix the tree topology currently limits GSS in a coalescent-based framework. Here, we extend GSS by relaxing the fixed underlying tree topology assumption. To this purpose, we introduce a “working” distribution on the space of genealogies, which enables estimating marginal likelihoods while accommodating phylogenetic uncertainty. We propose two different “working” distributions that help GSS to outperform PS and SS in terms of accuracy when comparing demographic and evolutionary models applied to synthetic data and real-world examples. Further, we show that the use of very diffuse priors can lead to a considerable overestimation in marginal likelihood when using PS and SS, while still retrieving the correct marginal likelihood using both GSS approaches. The methods used in this article are available in BEAST, a powerful user-friendly software package to perform Bayesian evolutionary analyses. PMID:26526428

  4. The Elaborated Environmental Stress Hypothesis as a Framework for Understanding the Association Between Motor Skills and Internalizing Problems: A Mini-Review

    PubMed Central

    Mancini, Vincent O.; Rigoli, Daniela; Cairney, John; Roberts, Lynne D.; Piek, Jan P.

    2016-01-01

    Poor motor skills have been shown to be associated with a range of psychosocial issues, including internalizing problems (anxiety and depression). While well documented empirically, our understanding of why this relationship occurs remains theoretically underdeveloped. The Elaborated Environmental Stress Hypothesis by Cairney et al. (2013) provides a promising framework that seeks to explain the association between motor skills and internalizing problems, specifically in children with developmental coordination disorder (DCD). The framework posits that poor motor skills predispose individuals to the development of internalizing problems via interactions with intermediary environmental stressors. At the time the model was proposed, limited direct evidence was available to support or refute the framework. Several studies and developments related to the framework have since been published. This mini-review seeks to provide an up-to-date overview of recent developments related to the Elaborated Environmental Stress Hypothesis. We briefly discuss the past research that led to its development, before moving to studies that have investigated the framework since it was proposed. While originally developed within the context of DCD in childhood, recent developments have found support for the model in community samples. Through the reviewed literature, this article provides support for the Elaborated Environmental Stress Hypothesis as a promising theoretical framework that explains the psychosocial correlates of motor ability across the broader spectrum of motor skill. However, given its recent conceptualization, ongoing evaluation of the Elaborated Environmental Stress Hypothesis is recommended. PMID:26941690

  5. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, together with a statistical extension of the methods and their application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependence between test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
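    The sketch below illustrates the pretest-to-posttest update that such adjusted likelihood ratios feed into, working on the odds scale. All numerical values (pretest probability, individual LRs, and the adjusted joint LR) are hypothetical and are not taken from the study above; the point is only the contrast between naively multiplying unadjusted LRs and using a dependence-adjusted joint LR.
```python
# A minimal sketch of the pretest-to-posttest update on the odds scale.
# The numbers below are made up for illustration; they are not from the study above.
import math

def prob_to_odds(p): return p / (1.0 - p)
def odds_to_prob(o): return o / (1.0 + o)

pretest_prob = 0.20                    # hypothetical pretest probability of disease
lrs = [3.5, 1.8]                       # hypothetical likelihood ratios for two positive findings

# Naive (independence Bayes): multiply the unadjusted LRs together.
naive_odds = prob_to_odds(pretest_prob) * math.prod(lrs)

# With dependent tests, the product overstates the evidence; logistic-regression-based
# methods (e.g. an offset term as in Albert's approach) effectively shrink the joint LR.
adjusted_joint_lr = 4.7                # hypothetical adjusted LR for the combination of findings
adjusted_odds = prob_to_odds(pretest_prob) * adjusted_joint_lr

print(f"posttest probability, naive product of LRs: {odds_to_prob(naive_odds):.2f}")
print(f"posttest probability, adjusted joint LR:    {odds_to_prob(adjusted_odds):.2f}")
```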

  6. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    USGS Publications Warehouse

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. Models containing sex-specificity in both the intercept and the distance coefficient of the SCR encounter probability model, and including a behavioral response, are strongly favored by log-likelihood. The estimated population sex ratio is strongly influenced by sex structure in the model parameters, illustrating the importance of rigorous modeling of sex differences in capture-recapture models.

  7. Extrusomes in ciliates: diversification, distribution, and phylogenetic implications.

    PubMed

    Rosati, Giovanna; Modeo, Letizia

    2003-01-01

    Exocytosis is, in all likelihood, an important communication method among microbes. Ciliates are highly differentiated and specialized micro-organisms for which versatile and/or sophisticated exocytotic organelles may represent important adaptive tools. Thus, in ciliates, we find a broad range of different extrusomes, i.e., ejectable membrane-bound organelles. Structurally simple extrusomes, like mucocysts and cortical granules, are widespread in different taxa within the phylum. In each case, they play the roles required by the ecological needs of the organisms. We then find a number of more elaborate extrusomes, whose distribution within the phylum is more limited and in some way related to phylogenetic affinities. Herein we provide a survey of the literature and of our own data on selected extrusomes in ciliates. Their morphology, distribution, and possible function are discussed. The possible phylogenetic implications of their diversity are considered.

  8. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio approach, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
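    A simplified sketch of this kind of likelihood-ratio computation is given below: a Gaussian mixture is fitted to background source means to represent between-source variation, and the LR for a recovered trace compares a within-source density (same source as the control item) against the between-source mixture convolved with the within-source covariance. The synthetic data, the assumed within-source covariance W, and the single-measurement simplification are all illustrative assumptions, not the authors' full multivariate model.
```python
# A simplified, synthetic sketch of a likelihood ratio with a GMM between-source model.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Background population: mean feature vectors of many reference sources (2-D features).
source_means = np.vstack([rng.normal([0, 0], 1.0, size=(150, 2)),
                          rng.normal([4, 3], 1.0, size=(150, 2))])
W = 0.3 * np.eye(2)                      # assumed within-source covariance

# Between-source model: GMM fitted to the reference source means.
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(source_means)

control_mean = np.array([0.2, -0.1])     # mean of measurements on the control item
recovered = np.array([0.5, 0.3])         # single measurement on the recovered trace

# Numerator: the recovered trace comes from the same source as the control item.
num = multivariate_normal(control_mean, W).pdf(recovered)

# Denominator: the recovered trace comes from some other source in the population,
# i.e. the between-source GMM convolved with the within-source covariance W.
den = sum(w * multivariate_normal(mu, cov + W).pdf(recovered)
          for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_))

print(f"LR = {num / den:.2f}   log10 LR = {np.log10(num / den):.2f}")
```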

  9. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  10. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present paper introduces an alternative method to compute the NLSE using principles of multivariate calculus, and is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure for obtaining a linear pseudo-model for a nonlinear regression model. In this article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus; the linear pseudo-model of Edmond Malinvaud [4] is explained in a different way here. Pollard et al. used empirical process techniques in 2006 to study the asymptotics of the least-squares estimator for fitting a nonlinear regression function, and Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his "Tutorial on maximum likelihood estimation".
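    As a generic illustration of the estimators discussed (not the matrix-calculus derivations of the paper), the snippet below fits a simple nonlinear regression model by least squares with scipy; under i.i.d. Gaussian errors the nonlinear least-squares estimate of the regression parameters coincides with their maximum likelihood estimate, and the MLE of the error variance follows from the residuals.
```python
# A generic illustration: fit y = a * exp(b * x) + error by nonlinear least squares.
# Under i.i.d. Gaussian errors the NLSE of (a, b) coincides with the MLE.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)
x = np.linspace(0, 2, 50)
y = 2.0 * np.exp(0.8 * x) + rng.normal(0, 0.2, size=x.size)   # synthetic data

def residuals(params):
    a, b = params
    return y - a * np.exp(b * x)

fit = least_squares(residuals, x0=[1.0, 0.5])
a_hat, b_hat = fit.x
sigma2_hat = np.mean(fit.fun ** 2)        # MLE of the error variance at the optimum
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, sigma^2 = {sigma2_hat:.3f}")
```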

  11. Developing a non-point source P loss indicator in R and its parameter uncertainty assessment using GLUE: a case study in northern China.

    PubMed

    Su, Jingjun; Du, Xinzhong; Li, Xuyong

    2018-05-16

    Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated in this respect. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language, and also examined the influence of the subjective choices of likelihood formulation and acceptability threshold in GLUE on model outputs. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better at accentuating high-likelihood simulations than the exponential function (L2). (3) A combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, reducing the uncertainty band widths while preserving the goodness of fit of the overall model outputs. (4) A value of 0.55 appeared to be a sensible threshold to balance high modeling efficiency against high bracketing efficiency. The results of this study provide (1) an option to conduct NPS modeling on a single computing platform, (2) reference parameter settings for NPS model development in similar regions, (3) suggestions for applying the GLUE method in studies with different emphases, and (4) insights into watershed P management in similar regions.
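    The following minimal sketch shows the GLUE workflow the abstract relies on, using a toy exponential-decay model in place of the P-loss indicator: parameter sets are sampled from uniform priors, each run is scored with an informal likelihood (here the Nash-Sutcliffe efficiency), runs above an acceptability threshold are retained as behavioural, and likelihood-weighted predictions are derived from them. The toy model, priors and data are assumptions for illustration only.
```python
# A minimal GLUE-style sketch on a toy exponential-decay model (not the NPS P indicator).
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(30)
obs = 5.0 * np.exp(-0.15 * t) + rng.normal(0, 0.2, size=t.size)  # synthetic "observations"

def model(a, k):
    return a * np.exp(-k * t)

def nash_sutcliffe(sim, observed):
    return 1.0 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)

n_runs, threshold = 5000, 0.55                 # acceptability threshold as in the abstract
a_samples = rng.uniform(1.0, 10.0, n_runs)     # uniform priors over assumed parameter ranges
k_samples = rng.uniform(0.01, 0.5, n_runs)

scores = np.array([nash_sutcliffe(model(a, k), obs)
                   for a, k in zip(a_samples, k_samples)])
behavioural = scores > threshold
weights = scores[behavioural] / scores[behavioural].sum()

sims = np.array([model(a, k) for a, k in zip(a_samples[behavioural], k_samples[behavioural])])
weighted_mean = weights @ sims                 # likelihood-weighted mean prediction
lower, upper = np.percentile(sims, [5, 95], axis=0)
print("behavioural runs:", int(behavioural.sum()))
print("first 3 time steps, weighted mean:", weighted_mean[:3].round(2))
print("first 3 time steps, 5-95% band:", lower[:3].round(2), upper[:3].round(2))
```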

  12. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  13. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling, aided by the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions in order to examine parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.

  14. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stability of the estimates was checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
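    The sketch below illustrates the empirical likelihood-ratio idea on synthetic raster cells: for each binned predictor, the per-class likelihood ratio is estimated from class frequencies among landslide and stable cells, and the per-cell ratios are multiplied under the conditional independence assumption to give a relative hazard. The data are simulated and purely illustrative; the logistic discriminant alternative is not shown.
```python
# A small synthetic sketch of empirical likelihood ratios for relative hazard mapping.
import numpy as np

rng = np.random.default_rng(3)
n_cells = 10_000
slope_class = rng.integers(0, 4, n_cells)          # e.g. binned slope angle, 4 classes
litho_class = rng.integers(0, 3, n_cells)          # coded bedrock lithology, 3 classes
# Synthetic "landslide" labels that depend on slope and lithology
p = 0.02 + 0.03 * slope_class + 0.02 * (litho_class == 2)
landslide = rng.random(n_cells) < p

def empirical_lr(classes, labels):
    """Per-class likelihood ratio P(class | landslide) / P(class | stable)."""
    lrs = {}
    for c in np.unique(classes):
        p_given_slide = np.mean(classes[labels] == c)
        p_given_stable = np.mean(classes[~labels] == c)
        lrs[c] = p_given_slide / p_given_stable
    return lrs

slope_lr = empirical_lr(slope_class, landslide)
litho_lr = empirical_lr(litho_class, landslide)

# Relative hazard per cell, multiplying ratios under conditional independence
hazard = np.array([slope_lr[s] * litho_lr[l] for s, l in zip(slope_class, litho_class)])
print("mean relative hazard in landslide cells:", hazard[landslide].mean().round(2))
print("mean relative hazard in stable cells:   ", hazard[~landslide].mean().round(2))
```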

  15. Perspectives of young Chinese Singaporean women on seeking and processing information to decide about vaccinating against human papillomavirus.

    PubMed

    Basnyat, Iccha; Lim, Cheryl

    2017-07-06

    Human papillomavirus (HPV) vaccination uptake in Singapore is low among young women. Low uptake has been found to be linked to low awareness. Thus, this study aimed to understand active and passive vaccine information-seeking behavior. Furthermore, guided by the Elaboration Likelihood Model (ELM), this study examined how young women (aged 21-26 years) processed the information they acquired in their decision to get vaccinated. ELM postulates that information processing can occur through the central (i.e., logic-based) or peripheral (i.e., heuristic-based) route. Twenty-six in-depth interviews were conducted from January to March 2016. Data were analyzed using thematic analysis. Two meta-themes, information acquisition and vaccination decision, revealed that heuristic-based information processing was employed. These young women acquired information passively within their social network and actively in healthcare settings. However, they used heuristic cues, such as closeness and trust, to process the information. Similarly, in their decisions about vaccination, women relied on heuristic cues such as a sense of belonging and validation among peers, and source credibility and likability in medical settings. The findings of this study highlight that intervention efforts should focus on strengthening social support among personal networks to increase the uptake of the vaccine.

  16. Effects of argument quality, source credibility and self-reported diabetes knowledge on message attitudes: an experiment using diabetes related messages.

    PubMed

    Lin, Tung-Cheng; Hwang, Lih-Lian; Lai, Yung-Jye

    2017-05-17

    Previous studies have reported that credibility and content (argument quality) are the most critical factors affecting the quality of health information and its acceptance and use; however, this causal relationship merits further investigation in the context of health education. Moreover, message recipients' prior knowledge may moderate these relationships. This study used the elaboration likelihood model to determine the main effects of argument quality and source credibility, and the moderating effect of self-reported diabetes knowledge, on message attitudes. A between-subjects experimental design using an educational message concerning diabetes for manipulation was applied to validate the effects empirically. A total of 181 participants without diabetes were recruited from the Department of Health, Taipei City Government. Four groups of messages were manipulated in terms of argument quality (high and low) × source credibility (high and low). Argument quality and source credibility of health information significantly influenced the attitude of message recipients. Participants with high self-reported knowledge exhibited significant disapproval of messages with low argument quality. Effective health information should provide objective descriptions and cite reliable sources; in addition, it should provide accurate, customised messages for recipients who have a high level of background knowledge and the ability to discern message quality. © 2017 Health Libraries Group Health Information & Libraries Journal.

  17. Vegetation mapping from high-resolution satellite images in the heterogeneous arid environments of Socotra Island (Yemen)

    NASA Astrophysics Data System (ADS)

    Malatesta, Luca; Attorre, Fabio; Altobelli, Alfredo; Adeeb, Ahmed; De Sanctis, Michele; Taleb, Nadim M.; Scholte, Paul T.; Vitale, Marcello

    2013-01-01

    Socotra Island (Yemen), a global biodiversity hotspot, is characterized by high geomorphological and biological diversity. In this study, we present a high-resolution vegetation map of the island based on combining vegetation analysis and classification with remote sensing. Two different image classification approaches were tested to assess the most accurate one in mapping the vegetation mosaic of Socotra. Spectral signatures of the vegetation classes were obtained through a Gaussian mixture distribution model, and a sequential maximum a posteriori (SMAP) classification was applied to account for the heterogeneity and the complex spatial pattern of the arid vegetation. This approach was compared to the traditional maximum likelihood (ML) classification. Satellite data were represented by a RapidEye image with 5 m pixel resolution and five spectral bands. Classified vegetation relevés were used to obtain the training and evaluation sets for the main plant communities. Postclassification sorting was performed to adjust the classification through various rule-based operations. Twenty-eight classes were mapped, and SMAP, with an accuracy of 87%, proved to be more effective than ML (accuracy: 66%). The resulting map will represent an important instrument for the elaboration of conservation strategies and the sustainable use of natural resources in the island.

  18. Effects of a risk-based online mammography intervention on accuracy of perceived risk and mammography intentions.

    PubMed

    Seitz, Holli H; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M; Armstrong, Katrina; Cappella, Joseph N

    2016-10-01

    This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. 2918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (<1.5%; ≥1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) x 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk ≤1.5%. For women with a risk≤1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Development of radio dramas for health communication pilot intervention in Canadian Inuit communities

    PubMed Central

    Racicot-Matta, Cassandra; Wilcke, Markus; Egeland, Grace M.

    2016-01-01

    A mixed-methods approach was used to develop a culturally appropriate health intervention over radio within the Inuit community of Pangnirtung, Nunavut (NU), Canada. The radio dramas were developed, recorded and tested pre-intervention through the use of Participatory Process and informed by the extended elaboration likelihood model (EELM) for education–communication. The radio messages were tested in two focus groups (n = 4 and n = 5) to determine the fidelity of the radio dramas to EELM theory. Focus group feedback identified that revisions needed to be made to two characteristics required of educational programmes by the EELM: first, the quality of the production was improved by adding Inuit youth recorded music and, second, the homophily (relatability of characters) of the radio dramas was improved by re-recording the dramas with the voices of local youth who had been trained in media communication studies. These adjustments would not have been implemented had pre-intervention testing of the radio dramas not taken place, and their absence could have reduced the effectiveness of the overall intervention. Therefore, it is highly recommended that media tools for health communication/education be tested with the intended target audience before commencement of programmes. Participatory Process was identified to be a powerful tool in the development and sustainability of culturally appropriate community health programming. PMID:24957329

  20. Effects of a Risk-based Online Mammography Intervention on Accuracy of Perceived Risk and Mammography Intentions

    PubMed Central

    Seitz, Holli H.; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M.; Armstrong, Katrina; Cappella, Joseph N.

    2016-01-01

    Objective This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. Methods 2,918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (< 1.5%; ≥ 1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) × 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Results Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk <1.5%. For women with a risk < 1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. Conclusion A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Practice Implications Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. PMID:27178707

  1. Catching fire and spreading it: A glimpse into displayed entrepreneurial passion in crowdfunding campaigns.

    PubMed

    Li, Junchao Jason; Chen, Xiao-Ping; Kotha, Suresh; Fisher, Greg

    2017-07-01

    Crowdfunding is an emerging phenomenon that enables entrepreneurs to solicit financial contributions for new projects from mass audiences. Drawing on the elaboration likelihood model of persuasion and emotional contagion theory, the authors examined the importance of displayed entrepreneurial passion when seeking resources in a crowdfunding context. They proposed that entrepreneurs' displayed passion in the introductory video for a crowdfunding project increases viewers' experienced enthusiasm about the project (i.e., passion contagion), which then prompts them to contribute financially and to share campaign information via social-media channels. Such sharing further facilitates campaign success. In addition, the authors proposed that perceived project innovativeness strengthens the positive effect of displayed passion on social-media exposure and the funding amount a project garners. They first tested their hypotheses in 2 studies using a combination of survey and archival data from the world's 2 most popular crowdfunding platforms: Indiegogo (Study 1) and Kickstarter (Study 2). They then conducted an experiment (Study 3) to validate the proposed passion contagion process, and the effect of displayed entrepreneurial passion at the individual level. Findings from these 3 studies significantly supported their hypotheses. The authors discuss the theoretical and practical implications of their findings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. The "id" knows more than the "ego" admits: neuropsychoanalytic and primal consciousness perspectives on the interface between affective and cognitive neuroscience.

    PubMed

    Solms, Mark; Panksepp, Jaak

    2012-04-17

    It is commonly believed that consciousness is a higher brain function. Here we consider the likelihood, based on abundant neuroevolutionary data that lower brain affective phenomenal experiences provide the "energy" for the developmental construction of higher forms of cognitive consciousness. This view is concordant with many of the theoretical formulations of Sigmund Freud. In this reconceptualization, all of consciousness may be dependent on the original evolution of affective phenomenal experiences that coded survival values. These subcortical energies provided a foundation that could be used for the epigenetic construction of perceptual and other higher forms of consciousness. From this perspective, perceptual experiences were initially affective at the primary-process brainstem level, but capable of being elaborated by secondary learning and memory processes into tertiary-cognitive forms of consciousness. Within this view, although all individual neural activities are unconscious, perhaps along with secondary-process learning and memory mechanisms, the primal sub-neocortical networks of emotions and other primal affects may have served as the sentient scaffolding for the construction of resolved perceptual and higher mental activities within the neocortex. The data supporting this neuro-psycho-evolutionary vision of the emergence of mind is discussed in relation to classical psychoanalytical models.

  3. The “Id” Knows More than the “Ego” Admits: Neuropsychoanalytic and Primal Consciousness Perspectives on the Interface Between Affective and Cognitive Neuroscience

    PubMed Central

    Solms, Mark; Panksepp, Jaak

    2012-01-01

    It is commonly believed that consciousness is a higher brain function. Here we consider the likelihood, based on abundant neuroevolutionary data that lower brain affective phenomenal experiences provide the “energy” for the developmental construction of higher forms of cognitive consciousness. This view is concordant with many of the theoretical formulations of Sigmund Freud. In this reconceptualization, all of consciousness may be dependent on the original evolution of affective phenomenal experiences that coded survival values. These subcortical energies provided a foundation that could be used for the epigenetic construction of perceptual and other higher forms of consciousness. From this perspective, perceptual experiences were initially affective at the primary-process brainstem level, but capable of being elaborated by secondary learning and memory processes into tertiary-cognitive forms of consciousness. Within this view, although all individual neural activities are unconscious, perhaps along with secondary-process learning and memory mechanisms, the primal sub-neocortical networks of emotions and other primal affects may have served as the sentient scaffolding for the construction of resolved perceptual and higher mental activities within the neocortex. The data supporting this neuro-psycho-evolutionary vision of the emergence of mind is discussed in relation to classical psychoanalytical models. PMID:24962770

  4. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)

    PubMed Central

    Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
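    A toy sketch of the distributed-likelihood idea is given below: each simulated participant device holds its own data and returns only the scalar log-likelihood of a proposed parameter vector, and the central optimizer aggregates these contributions. The simple normal model, the class name, and the optimizer choice are assumptions for illustration, not the MIDDLE software itself.
```python
# A toy sketch of distributed likelihood estimation: devices return only log-likelihoods.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(11)

class ParticipantDevice:
    """Holds private data locally; only returns a log-likelihood value."""
    def __init__(self, data):
        self._data = data                      # never leaves the device
    def local_loglik(self, params):
        mu, log_sigma = params
        return norm.logpdf(self._data, loc=mu, scale=np.exp(log_sigma)).sum()

# Simulated participants, each with a private data vector of varying length
devices = [ParticipantDevice(rng.normal(2.0, 1.5, size=rng.integers(5, 30)))
           for _ in range(50)]

def negative_total_loglik(params):
    # The central optimizer only ever sees these scalar contributions.
    return -sum(d.local_loglik(params) for d in devices)

fit = minimize(negative_total_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"aggregated ML estimates: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")
```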

  6. The Effects of Model Misspecification and Sample Size on LISREL Maximum Likelihood Estimates.

    ERIC Educational Resources Information Center

    Baldwin, Beatrice

    The robustness of LISREL computer program maximum likelihood estimates under specific conditions of model misspecification and sample size was examined. The population model used in this study contains one exogenous variable; three endogenous variables; and eight indicator variables, two for each latent variable. Conditions of model…

  7. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  8. Robust analysis of semiparametric renewal process models

    PubMed Central

    Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.

    2013-01-01

    Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568

  9. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but the margins are estimated by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
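    The second, two-stage procedure can be illustrated with a small sketch: margins are first fitted by maximum likelihood, the data are transformed to approximate uniforms through the fitted marginal CDFs, and the copula dependence parameter is then estimated with the margins held fixed. The exponential margins and Clayton copula below are assumptions chosen for illustration; censoring, which matters for failure-time data, is ignored here.
```python
# A minimal sketch of two-stage maximum likelihood for a bivariate copula model
# (exponential margins, Clayton copula; chosen only for illustration, no censoring).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon

rng = np.random.default_rng(5)
n, theta_true = 2000, 2.0

# Simulate from a Clayton copula (conditional method), then give exponential margins.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** (-theta_true) * (w ** (-theta_true / (1 + theta_true)) - 1) + 1) ** (-1 / theta_true)
x = expon(scale=2.0).ppf(u)          # margin 1
y = expon(scale=0.5).ppf(v)          # margin 2

# Stage 1: fit each margin by ML (for the exponential, the MLE of the scale is the mean).
u_hat = expon(scale=x.mean()).cdf(x)
v_hat = expon(scale=y.mean()).cdf(y)

# Stage 2: maximize the Clayton copula log-likelihood with the margins held fixed.
def neg_copula_loglik(theta):
    log_c = (np.log1p(theta)
             - (1 + theta) * (np.log(u_hat) + np.log(v_hat))
             - (2 + 1 / theta) * np.log(u_hat ** -theta + v_hat ** -theta - 1))
    return -log_c.sum()

res = minimize_scalar(neg_copula_loglik, bounds=(0.01, 20.0), method="bounded")
print(f"two-stage estimate of theta: {res.x:.2f} (true value {theta_true})")
```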

  10. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  11. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    ERIC Educational Resources Information Center

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  12. Expertise and age differences in pilot decision making.

    PubMed

    Morrow, Daniel G; Miller, Lisa M Soederberg; Ridolfo, Heather E; Magnor, Clifford; Fischer, Ute M; Kokayeff, Nina K; Stine-Morrow, Elizabeth A L

    2009-01-01

    We examined the influence of age and expertise on pilot decision making. Older and younger expert and novice pilots read at their own pace scenarios describing simpler or more complex flight situations. Then in a standard interview they discussed the scenario problem and how they would respond. Protocols were coded for identification of problem and solutions to this problem, and frequency of elaborations on problem and solution. Scenario comprehension was measured as differential reading time allocation to problem-critical information and scenario memory by the accuracy of answering questions about the scenarios after the interview. All groups accurately identified the problems, but experts elaborated problem descriptions more than novices did. Experts also spent more time reading critical information in the complex scenarios, which may reflect time needed to develop elaborate situation models of the problems. Expertise comprehension benefits were similar for older and younger pilots. Older experts were especially likely to elaborate the problem compared to younger experts, while older novices were less likely to elaborate the problem and to identify appropriate solutions compared to their younger counterparts. The findings suggest age invariance in knowledge-based comprehension relevant to pilot decision making.

  13. The Long-Term Impact of High School Civics Curricula on Political Knowledge, Democratic Attitudes and Civic Behaviors: A Multi-Level Model of Direct and Mediated Effects through Communication. CIRCLE Working Paper #65

    ERIC Educational Resources Information Center

    Hutchens, Myiah J.; Eveland, William P., Jr.

    2009-01-01

    This report examines the effects of exposure to various elements of a civics curriculum on civic participation, two forms of political knowledge, internal political efficacy, political cynicism, news elaboration, discussion elaboration and various forms of interpersonal and mediated political communication behaviors. The data are based on a…

  14. Predicting the likelihood of altered streamflows at ungauged rivers across the conterminous United States

    USGS Publications Warehouse

    Eng, Kenny; Carlisle, Daren M.; Wolock, David M.; Falcone, James A.

    2013-01-01

    An approach is presented in this study to aid water-resource managers in characterizing streamflow alteration at ungauged rivers. Such approaches can be used to take advantage of the substantial amounts of biological data collected at ungauged rivers to evaluate the potential ecological consequences of altered streamflows. National-scale random forest statistical models are developed to predict the likelihood that ungauged rivers have altered streamflows (relative to expected natural condition) for five hydrologic metrics (HMs) representing different aspects of the streamflow regime. The models use human disturbance variables, such as number of dams and road density, to predict the likelihood of streamflow alteration. For each HM, separate models are derived to predict the likelihood that the observed metric is greater than (‘inflated’) or less than (‘diminished’) natural conditions. The utility of these models is demonstrated by applying them to all river segments in the South Platte River in Colorado, USA, and for all 10-digit hydrologic units in the conterminous United States. In general, the models successfully predicted the likelihood of alteration to the five HMs at the national scale as well as in the South Platte River basin. However, the models predicting the likelihood of diminished HMs consistently outperformed models predicting inflated HMs, possibly because of fewer sites across the conterminous United States where HMs are inflated. The results of these analyses suggest that the primary predictors of altered streamflow regimes across the Nation are (i) the residence time of annual runoff held in storage in reservoirs, (ii) the degree of urbanization measured by road density and (iii) the extent of agricultural land cover in the river basin.

  15. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because computational complications arise when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.

  16. Theoretical Foundations of Appeals Used in Alcohol-Abuse and Drunk-Driving Public Service Announcements in the United States, 1995-2010.

    PubMed

    Niederdeppe, Jeff; Avery, Rosemary J; Miller, Emily Elizabeth Namaste

    2018-05-01

    The study identifies the extent to which theoretical constructs drawn from well-established message effect communication theories are reflected in the content of alcohol-related public service announcements (PSAs) airing in the United States over a 16-year period. Content analysis of 18,530,141 alcohol-abuse (AA) and drunk-driving (DD) PSAs appearing on national network and local cable television stations in the 210 largest designated marketing areas (DMAs) from January 1995 through December 2010. The authors developed a detailed content-analytic codebook and trained undergraduate coders to reliably identify the extent to which theoretical constructs and other creative ad elements are reflected in the PSAs. We show these patterns using basic descriptive statistics. Although both classes of alcohol-related PSAs used strategies that are consistent with major message effect theories, their specific theoretical orientations differed dramatically. The AA PSAs were generally consistent with constructs emphasized by the Extended Parallel Process Model (EPPM), whereas DD PSAs were more likely to use normative strategies emphasized by the Focus Theory of Normative Conduct (FTNC) or source credibility appeals central to the Elaboration Likelihood Model. Having identified message content, future research should use deductive approaches to determine if the volume and content of alcohol-control PSAs have an impact on measures of alcohol consumption and/or measures of drunk driving, such as fatalities or driving-while-intoxicated/driving-under-the-influence arrests.

  17. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  18. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637

  19. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose, falling into two broad categories: Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992), and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
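    For the criterion-based side, the small numerical sketch below shows how information-criterion model averaging weights are computed (here with AIC and BIC; KIC is handled analogously), and how different criteria can assign quite different weights to the same candidate models. The log-likelihoods, parameter counts and sample size are made-up values for illustration.
```python
# A small numerical sketch of criterion-based model averaging weights (made-up values).
import numpy as np

log_liks = np.array([-120.4, -118.9, -117.2])    # maximized log-likelihoods of 3 models
n_params = np.array([3, 5, 9])                    # number of parameters in each model
n_obs = 40

aic = -2 * log_liks + 2 * n_params
bic = -2 * log_liks + n_params * np.log(n_obs)

def ic_weights(ic):
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

print("AIC weights:", ic_weights(aic).round(3))
print("BIC weights:", ic_weights(bic).round(3))
```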

  20. SEMModComp: An R Package for Calculating Likelihood Ratio Tests for Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Levy, Roy

    2010-01-01

    SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling is described. The package is written in R and freely available for download or on request.

  1. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the past decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate the same (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
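    The contrast between the two estimation approaches can be sketched for a plain Gaussian location-scale model without covariates, using the closed-form CRPS of a normal forecast: minimizing the mean CRPS and maximizing the Gaussian log-likelihood should land on nearly the same parameters when the distributional assumption is correct. The synthetic data below are an assumption for illustration; the full non-homogeneous regression (with predictors for mean and variance) is not reproduced here.
```python
# Minimum CRPS vs maximum likelihood for a Gaussian location-scale model (no predictors).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2024)
obs = rng.normal(loc=3.0, scale=1.5, size=500)    # synthetic "observations"

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def mean_crps(params):
    mu, log_sigma = params
    return crps_normal(mu, np.exp(log_sigma), obs).mean()

def neg_loglik(params):
    mu, log_sigma = params
    return -norm.logpdf(obs, loc=mu, scale=np.exp(log_sigma)).mean()

fit_crps = minimize(mean_crps, x0=[0.0, 0.0], method="Nelder-Mead")
fit_ml = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
for name, fit in [("minimum CRPS", fit_crps), ("maximum likelihood", fit_ml)]:
    mu, sigma = fit.x[0], np.exp(fit.x[1])
    print(f"{name}: mu = {mu:.3f}, sigma = {sigma:.3f}")
```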

  2. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
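
    The parametric likelihood approximation described here is in the spirit of a synthetic likelihood: summaries of repeated stochastic simulations are fitted with a Gaussian, whose density at the observed summaries is used inside a conventional Metropolis-Hastings sampler. The sketch below illustrates the idea on a toy stochastic growth model; FORMIND itself and the paper's summary statistics are not reproduced.

```python
# Minimal sketch (toy simulator, not FORMIND): a synthetic-likelihood-style parametric
# approximation plugged into a standard Metropolis-Hastings sampler with a flat prior
# on a box. Parameter names and settings are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def simulate_summaries(theta, n_rep=50, n_steps=60):
    """Toy stochastic growth model; returns per-replicate mean and std of log abundance."""
    growth, noise_sd = theta
    logx = np.full(n_rep, np.log(10.0))
    traj = np.empty((n_steps, n_rep))
    for t in range(n_steps):
        logx = logx + growth + rng.normal(0, noise_sd, size=n_rep)
        traj[t] = logx
    return np.column_stack([traj.mean(axis=0), traj.std(axis=0)])

def approx_log_lik(theta, obs_summary):
    """Gaussian ('synthetic') likelihood fitted to summaries of repeated simulations."""
    sims = simulate_summaries(theta)
    mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False) + 1e-6 * np.eye(2)
    return multivariate_normal.logpdf(obs_summary, mean=mu, cov=cov)

# "Observed" summaries: virtual data generated with known parameters.
obs = simulate_summaries((0.05, 0.2), n_rep=1).ravel()

# Conventional Metropolis-Hastings over (growth, noise_sd).
theta, ll = np.array([0.0, 0.3]), -np.inf
chain = []
for _ in range(2000):
    prop = theta + rng.normal(0, [0.01, 0.02])
    if -0.5 < prop[0] < 0.5 and 0.0 < prop[1] < 1.0:
        ll_prop = approx_log_lik(prop, obs)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta.copy())
print("posterior mean (growth, noise_sd):", np.mean(chain[500:], axis=0))
```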

  3. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  4. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    ERIC Educational Resources Information Center

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  5. Persuading people to eat less junk food: a cognitive resource match between attitudinal ambivalence and health message framing.

    PubMed

    Yan, Changmin

    2015-01-01

    This study investigated the interactive effects of attitudinal ambivalence and health message framing on persuading people to eat less junk food. Within the heuristic-systematic model of information processing, an attitudinal ambivalence (ambivalent or univalent toward eating junk food) by health message framing (advantage- or disadvantage-framed appeals) between-subjects experiment was conducted to explore a cognitive resource-matching effect and the underlying mediation processes. Ambivalent individuals reported a higher level of cognitive elaboration than univalent individuals did. The disadvantage frame engendered more extensive cognitive elaboration than the advantage frame did. Ambivalent individuals were more persuaded by the disadvantage frame and, for them, cognitive elaboration mediated the persuasion process via the systematic route. Univalent individuals were equally persuaded by the advantage frame and the disadvantage frame and, for them, neither the perceived frame valence nor cognitive elaboration mediated persuasion. Discussion of the null results among the univalent group leads to a response-reinforcement explanation. Theoretical and practical implications are discussed.

  6. Distinctiveness and encoding effects in online sentence comprehension

    PubMed Central

    Hofmeister, Philip; Vasishth, Shravan

    2014-01-01

    In explicit memory recall and recognition tasks, elaboration and contextual isolation both facilitate memory performance. Here, we investigate these effects in the context of sentence processing: targets for retrieval during online sentence processing of English object relative clause constructions differ in the amount of elaboration associated with the target noun phrase, or the homogeneity of superficial features (text color). Experiment 1 shows that greater elaboration for targets during the encoding phase reduces reading times at retrieval sites, but elaboration of non-targets has considerably weaker effects. Experiment 2 illustrates that processing isolated superficial features of target noun phrases—here, a green word in a sentence with words colored white—does not lead to enhanced memory performance, despite triggering longer encoding times. These results are interpreted in the light of the memory models of Nairne, 1990, 2001, 2006, which state that encoding remnants contribute to the set of retrieval cues that provide the basis for similarity-based interference effects. PMID:25566105

  7. Level of Construal, Mind Wandering, and Repetitive Thought

    PubMed Central

    Watkins, Edward R.

    2010-01-01

    In this reply to the comment of McVay and Kane (2010), I consider how Watkins's (2008) elaborated control theory informs their perspective on the role of executive control in mind wandering. I argue that although in a number of places the elaborated control theory is consistent with the perspective of McVay and Kane that mind wandering represents a failure of executive control, their account makes a number of claims that are not articulated in the elaborated control theory—most notably, the hypothesis that level of construal moderates entry of thoughts into awareness. Moreover, the relevant literature suggests that the relationship between level of construal and executive control may be more complex, and may be determined by multiple factors beyond those proposed in this executive-control failure account of mind wandering. Finally, the implications of this model of mind wandering for understanding repetitive thought in general are considered, and it is proposed that examining level of executive control as a further moderating variable within elaborated control theory may be of value.

  8. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models]

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  9. Thinking versus feeling: differentiating between cognitive and affective components of perceived cancer risk.

    PubMed

    Janssen, Eva; van Osch, Liesbeth; Lechner, Lilian; Candel, Math; de Vries, Hein

    2012-01-01

    Despite the increased recognition of affect in guiding probability estimates, perceived risk has been mainly operationalised in a cognitive way and the differentiation between rational and intuitive judgements is largely unexplored. This study investigated the validity of a measurement instrument differentiating cognitive and affective probability beliefs and examined whether behavioural decision making is mainly guided by cognition or affect. Data were obtained from four surveys focusing on smoking (N=268), fruit consumption (N=989), sunbed use (N=251) and sun protection (N=858). Correlational analyses showed that affective likelihood was more strongly correlated with worry compared to cognitive likelihood and confirmatory factor analysis provided support for a two-factor model of perceived likelihood instead of a one-factor model (i.e. cognition and affect combined). Furthermore, affective likelihood was significantly associated with the various outcome variables, whereas the association for cognitive likelihood was absent in three studies. The findings provide support for the construct validity of the measures used to assess cognitive and affective likelihood. Since affective likelihood might be a better predictor of health behaviour than the commonly used cognitive operationalisation, both dimensions should be considered in future research.

  10. Reconstructing the origin and elaboration of insect-trapping inflorescences in the Araceae

    PubMed Central

    Bröderbauer, David; Diaz, Anita; Weber, Anton

    2016-01-01

    Premise of the study: Floral traps are among the most sophisticated devices that have evolved in angiosperms in the context of pollination, but the evolution of trap pollination has not yet been studied in a phylogenetic context. We aim to determine the evolutionary history of morphological traits that facilitate trap pollination and to elucidate the impact of pollinators on the evolution of inflorescence traps in the family Araceae. Methods: Inflorescence morphology was investigated to determine the presence of trapping devices and to classify functional types of traps. We inferred phylogenetic relationships in the family using maximum likelihood and Bayesian methods. Character evolution of trapping devices, trap types, and pollinator types was then assessed with maximum parsimony and Bayesian methods. We also tested for an association of trap pollination with specific pollinator types. Key results: Inflorescence traps have evolved independently at least 10 times within the Araceae. Trapping devices were found in 27 genera. On the basis of different combinations of trapping devices, six functional types of traps were identified. Trap pollination in Araceae is correlated with pollination by flies. Conclusions: Trap pollination in the Araceae is more common than was previously thought. Preadaptations such as papillate cells or elongated sterile flowers facilitated the evolution of inflorescence traps. In some clades, imperfect traps served as a precursor for the evolution of more elaborate traps. Traps that evolved in association with fly pollination were most probably derived from mutualistic ancestors, offering a brood-site to their pollinators. PMID:22965851

  11. Stimulation of inorganic pyrophosphate elaboration by cultured cartilage and chondrocytes.

    PubMed

    Ryan, L M; Kurup, I; Rosenthal, A K; McCarty, D J

    1989-08-01

    Inorganic pyrophosphate elaboration by articular cartilage may favor calcium pyrophosphate dihydrate crystal deposition. Crystal deposits frequently form in persons affected by metabolic diseases. The cartilage organ culture system was used to model these metabolic conditions while measuring the influence on extracellular pyrophosphate elaboration. Alterations of ambient pH, thyroid stimulating hormone levels, and parathyroid hormone levels did not change pyrophosphate accumulation in the media. However, subphysiologic ambient calcium concentrations (25, 100, 500 microM) increased pyrophosphate accumulation around chondrocytes 3- to 10-fold. Low calcium also induced release of [14C]adenine-labeled nucleotides from chondrocytes, potential substrates for generation of extracellular pyrophosphate by ectoenzymes. Exposing cartilage to 10% fetal bovine serum also enhanced the egress of inorganic pyrophosphate from the tissue by 50%.

  12. An exploratory investigation of real-world reasoning in paranoia.

    PubMed

    Huddy, V; Brown, G P; Boyd, T; Wykes, T

    2014-03-01

    Paranoid thinking has been linked to greater availability in memory of past threats to the self. However, remembered experiences may not always closely resemble events that trigger paranoia, so novel explanations must be elaborated for the likelihood of threat to be determined. We investigated the ability of paranoid individuals to construct explanations for everyday situations and whether these modulate their emotional impact. Twenty-one participants experiencing paranoia and 21 healthy controls completed a mental simulation task that yields a measure of the coherence of reasoning in everyday situations. When responses featured positive content, clinical participants produced less coherent narratives in response to paranoid themed scenarios than healthy controls. There was no significant difference between the groups when responses featured negative content. The current study suggests that difficulty in scenario construction may exacerbate paranoia by reducing access to non-threatening explanations for everyday events, and this consequently increases distress. © 2012 The British Psychological Society.

  13. Effects of Sequences of Cognitions on Group Performance Over Time

    PubMed Central

    Molenaar, Inge; Chiu, Ming Ming

    2017-01-01

    Extending past research showing that sequences of low cognitions (low-level processing of information) and high cognitions (high-level processing of information through questions and elaborations) influence the likelihoods of subsequent high and low cognitions, this study examines whether sequences of cognitions are related to group performance over time; 54 primary school students (18 triads) discussed and wrote an essay about living in another country (32,375 turns of talk). Content analysis and statistical discourse analysis showed that within each lesson, groups with more low cognitions or more sequences of low cognition followed by high cognition added more essay words. Groups with more high cognitions, sequences of low cognition followed by low cognition, or sequences of high cognition followed by an action followed by low cognition, showed different words and sequences, suggestive of new ideas. The links between cognition sequences and group performance over time can inform facilitation and assessment of student discussions. PMID:28490854

  14. Effects of Sequences of Cognitions on Group Performance Over Time.

    PubMed

    Molenaar, Inge; Chiu, Ming Ming

    2017-04-01

    Extending past research showing that sequences of low cognitions (low-level processing of information) and high cognitions (high-level processing of information through questions and elaborations) influence the likelihoods of subsequent high and low cognitions, this study examines whether sequences of cognitions are related to group performance over time; 54 primary school students (18 triads) discussed and wrote an essay about living in another country (32,375 turns of talk). Content analysis and statistical discourse analysis showed that within each lesson, groups with more low cognitions or more sequences of low cognition followed by high cognition added more essay words. Groups with more high cognitions, sequences of low cognition followed by low cognition, or sequences of high cognition followed by an action followed by low cognition, showed different words and sequences, suggestive of new ideas. The links between cognition sequences and group performance over time can inform facilitation and assessment of student discussions.

  15. Addressing group dynamics in a brief motivational intervention for college student drinkers.

    PubMed

    Faris, Alexander S; Brown, Janice M

    2003-01-01

    Previous research indicates that brief motivational interventions for college student drinkers may be less effective in group settings than individual settings. Social psychological theories about counterproductive group dynamics may partially explain this finding. The present study examined potential problems with group motivational interventions by comparing outcomes from a standard group motivational intervention (SGMI; n = 25), an enhanced group motivational intervention (EGMI; n = 27) designed to suppress counterproductive processes, and a no intervention control (n = 23). SGMI and EGMI participants reported disruptive group dynamics as evidenced by low elaboration likelihood, production blocking, and social loafing, though the level of disturbance was significantly lower for EGMI individuals (p = .001). Despite counteracting group dynamics in the EGMI condition, participants in the two interventions were statistically similar in post-intervention problem recognition and future drinking intentions. The results raise concerns over implementing individually-based interventions in group settings without making necessary adjustments.

  16. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
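
    A minimal illustration of the likelihood idea, under a strong simplification: for a perfect (non-leaky) integrate-and-fire neuron driven by noisy input, inter-spike intervals are inverse-Gaussian distributed, so their parameters can be recovered by numerically minimizing a negative log-likelihood. The history-dependent threshold model of the paper is considerably richer; the sketch only shows the estimation pattern.

```python
# Minimal sketch (simplified model, not the history-dependent LIF of the paper):
# maximum-likelihood fit of inverse-Gaussian inter-spike intervals, the first-passage
# distribution of a perfect integrate-and-fire neuron with Gaussian input noise.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# Simulated ISIs from an inverse Gaussian with mean mu and shape lam (hypothetical values, s):
mu_true, lam_true = 0.05, 0.12
isis = stats.invgauss.rvs(mu_true / lam_true, scale=lam_true, size=2000, random_state=rng)

def neg_log_lik(params, data):
    mu, lam = params
    if mu <= 0 or lam <= 0:
        return np.inf
    # scipy's invgauss(m, scale=s) has mean m*s and shape s; so set m = mu/lam, s = lam.
    return -np.sum(stats.invgauss.logpdf(data, mu / lam, scale=lam))

fit = optimize.minimize(neg_log_lik, x0=[0.1, 0.1], args=(isis,), method="Nelder-Mead")
print("estimated (mu, lambda):", fit.x)
```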

  17. Likelihoods for fixed rank nomination networks

    PubMed Central

    HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE

    2014-01-01

    Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586

  18. Involvement of prelimbic medial prefrontal cortex in panic-like elaborated defensive behaviour and innate fear-induced antinociception elicited by GABAA receptor blockade in the dorsomedial and ventromedial hypothalamic nuclei: role of the endocannabinoid CB1 receptor.

    PubMed

    Freitas, Renato Leonardo de; Salgado-Rohner, Carlos José; Hallak, Jaime Eduardo Cecílio; Crippa, José Alexandre de Souza; Coimbra, Norberto Cysne

    2013-09-01

    It has been shown that GABAA receptor blockade in the dorsomedial and ventromedial hypothalamic nuclei (DMH and VMH, respectively) induces elaborated defensive behavioural responses accompanied by antinociception, which has been utilized as an experimental model of panic attack. Furthermore, the prelimbic (PL) division of the medial prefrontal cortex (MPFC) has been related to emotional reactions and the processing of nociceptive information. The aim of the present study was to investigate the possible involvement of the PL cortex and the participation of local cannabinoid CB1 receptors in the elaboration of panic-like reactions and in innate fear-induced antinociception. Elaborated fear-induced responses were analysed during a 10-min period in an open-field test arena. Microinjection of the GABAA receptor antagonist bicuculline into the DMH/VMH evoked panic-like behaviour and fear-induced antinociception, which was decreased by microinjection of the non-selective synaptic contact blocker cobalt chloride in the PL cortex. Moreover, microinjection of AM251 (25, 100 or 400 pmol), an endocannabinoid CB1 receptor antagonist, into the PL cortex also attenuated the defensive behavioural responses and the antinociception that follows innate fear behaviour elaborated by DMH/VMH. These data suggest that the PL cortex plays an important role in the organization of elaborated forward escape behaviour and that this cortical area is also involved in the elaboration of innate fear-induced antinociception. Additionally, CB1 receptors in the PL cortex modulate both panic-like behaviours and fear-induced antinociception elicited by disinhibition of the DMH/VMH through microinjection of bicuculline.

  19. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
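
    A minimal sketch of the propagator likelihood on a toy birth-death chain (not the ASEP or kinetic Ising applications of the paper): the empirical distribution of steady-state snapshots is propagated one fictitious step with the candidate transition matrix, and the samples are scored under the propagated distribution. For this toy chain only the ratio of the hopping probabilities is identifiable from the steady state.

```python
# Minimal sketch (toy model, illustrative parameter values): maximizing the propagator
# likelihood of independent steady-state snapshots of a discrete-time birth-death chain.
import numpy as np
from scipy import optimize

K = 10                                      # states 0..K-1
rng = np.random.default_rng(3)

def transition_matrix(p, q):
    """Birth-death chain with up-probability p, down-probability q, reflecting ends."""
    P = np.zeros((K, K))
    for i in range(K):
        if i + 1 < K:
            P[i, i + 1] = p
        if i - 1 >= 0:
            P[i, i - 1] = q
        P[i, i] = 1.0 - P[i].sum()
    return P

# Independent steady-state snapshots generated under the "true" parameters.
p_true, q_true = 0.30, 0.15
pi = np.linalg.matrix_power(transition_matrix(p_true, q_true), 5000)[0]
pi = pi / pi.sum()
samples = rng.choice(K, size=400, p=pi)
f = np.bincount(samples, minlength=K) / samples.size      # empirical distribution

def neg_propagator_loglik(theta):
    """Propagate the empirical distribution one fictitious step and score the samples."""
    p, q = theta
    if not (0 < p < 0.5 and 0 < q < 0.5):
        return np.inf
    propagated = f @ transition_matrix(p, q)
    return -np.sum(np.log(propagated[samples] + 1e-300))

# Only the ratio p/q is constrained by stationarity in this simple chain.
fit = optimize.minimize(neg_propagator_loglik, x0=[0.2, 0.2], method="Nelder-Mead")
print("estimated p/q:", fit.x[0] / fit.x[1], "  true p/q:", p_true / q_true)
```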

  20. A methodology proposal for collaborative business process elaboration using a model-driven approach

    NASA Astrophysics Data System (ADS)

    Mu, Wenxin; Bénaben, Frédérick; Pingaud, Hervé

    2015-05-01

    Business process management (BPM) principles are commonly used to improve processes within an organisation. But they can equally be applied to supporting the design of an Information System (IS). In a collaborative situation involving several partners, this type of BPM approach may be useful to support the design of a Mediation Information System (MIS), which would ensure interoperability between the partners' ISs (which are assumed to be service oriented). To achieve this objective, the first main task is to build a collaborative business process cartography. The aim of this article is to present a method for bringing together collaborative information and elaborating collaborative business processes from the information gathered (by using a collaborative situation framework, an organisational model, an informational model, a functional model and a metamodel and by using model transformation rules).

  1. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
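
    A minimal sketch of how such a copula-based likelihood might be assembled (an assumed setup, not the authors' code): marginal error distributions are estimated semiparametrically from "past" errors with kernel densities, the dependence across time steps is captured by the correlation of normal scores, and the resulting joint density serves as the likelihood of a new error vector.

```python
# Minimal sketch (synthetic "past" errors, illustrative settings): a semiparametric
# Gaussian copula density for a vector of model errors, combining kernel-density
# marginals with a Gaussian copula whose correlation is estimated from normal scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Pretend these are errors from past forecasts at T consecutive time steps:
T, N = 5, 400
ar1_corr = 0.6 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
latent = rng.multivariate_normal(np.zeros(T), ar1_corr, size=N)
past_errors = np.sinh(latent)               # skewed, heavy-tailed marginals

# 1. Semiparametric marginals: kernel density estimates per time step.
kdes = [stats.gaussian_kde(past_errors[:, t]) for t in range(T)]
grid = np.linspace(-40, 40, 8001)

def marginal_cdf(kde, x):
    dens = kde(grid)
    cdf = np.cumsum(dens) * (grid[1] - grid[0])
    return np.interp(x, grid, cdf / cdf[-1])

# 2. Normal scores and their correlation matrix define the Gaussian copula.
u = np.column_stack([marginal_cdf(kdes[t], past_errors[:, t]) for t in range(T)])
z = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
R = np.corrcoef(z.T)

def copula_log_likelihood(err):
    """Log density of a new error vector under the semiparametric copula model."""
    u_new = np.clip([marginal_cdf(kdes[t], err[t]) for t in range(T)], 1e-6, 1 - 1e-6)
    z_new = stats.norm.ppf(u_new)
    log_copula = (stats.multivariate_normal.logpdf(z_new, mean=np.zeros(T), cov=R)
                  - np.sum(stats.norm.logpdf(z_new)))
    log_marginals = np.sum([np.log(kdes[t](err[t])[0] + 1e-300) for t in range(T)])
    return log_copula + log_marginals

print(copula_log_likelihood(past_errors[0]))
```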

  2. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  4. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  5. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  6. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  7. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  8. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  9. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    PubMed

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
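
    A textbook-scale sketch of the construction (a normal mean with unknown variance, not the item response theory models of the article): the nuisance parameter is profiled out, and the interval endpoints are found where the likelihood ratio statistic crosses the chi-square(1) cutoff; a Wald-type interval is printed for comparison.

```python
# Minimal sketch (simple example, not an IRT implementation): profile-likelihood 95% CI
# for the mean of a normal sample, profiling out the unknown standard deviation.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
x = rng.normal(loc=2.0, scale=1.5, size=60)
n = x.size

def profile_loglik(mu):
    """Log-likelihood at mu, maximized over sigma (closed form for the normal model)."""
    sigma2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1)

mu_hat = x.mean()
cutoff = profile_loglik(mu_hat) - 0.5 * stats.chi2.ppf(0.95, df=1)

# Endpoints where the profile log-likelihood crosses the cutoff:
lower = optimize.brentq(lambda m: profile_loglik(m) - cutoff, x.min() - 10, mu_hat)
upper = optimize.brentq(lambda m: profile_loglik(m) - cutoff, mu_hat, x.max() + 10)
print(f"95% profile-likelihood CI: ({lower:.3f}, {upper:.3f})")

# Wald-type CI for comparison (as discussed in the abstract):
se = x.std(ddof=1) / np.sqrt(n)
print(f"95% Wald-type CI:          ({mu_hat - 1.96*se:.3f}, {mu_hat + 1.96*se:.3f})")
```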

  10. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings ... accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a ...

  11. Comparison of two weighted integration models for the cueing task: linear and likelihood

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2003-01-01

    In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, three observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.

  12. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
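
    A minimal sketch of thermodynamic integration on a conjugate toy problem (not the groundwater application), where the exact evidence is available for comparison: MCMC chains are run at a ladder of power coefficients beta, and the expected log-likelihood under each power posterior is integrated over beta with the trapezoidal rule.

```python
# Minimal sketch (conjugate toy model, illustrative settings): log marginal likelihood by
# thermodynamic integration over power posteriors p(theta | y, beta) ∝ p(y|theta)^beta p(theta).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
y = rng.normal(1.0, 1.0, size=20)           # data: y_i ~ N(theta, 1), prior theta ~ N(0, 1)

def log_lik(theta):   return np.sum(stats.norm.logpdf(y, loc=theta, scale=1.0))
def log_prior(theta): return stats.norm.logpdf(theta, loc=0.0, scale=1.0)

def power_posterior_mean_loglik(beta, n_iter=4000, step=0.5):
    """Random-walk MH targeting p(y|theta)^beta * p(theta); returns E[log p(y|theta)]."""
    theta, lp = 0.0, beta * log_lik(0.0) + log_prior(0.0)
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)
        lp_prop = beta * log_lik(prop) + log_prior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(log_lik(theta))
    return np.mean(draws[n_iter // 2:])      # discard burn-in

betas = np.linspace(0, 1, 21) ** 3           # ladder concentrated near beta = 0
expectations = [power_posterior_mean_loglik(b) for b in betas]
log_evidence_ti = np.trapz(expectations, betas)

# Exact evidence for this conjugate model: marginally y ~ N(0, I + 11^T).
exact = stats.multivariate_normal.logpdf(y, mean=np.zeros(y.size),
                                         cov=np.eye(y.size) + np.ones((y.size, y.size)))
print(f"thermodynamic integration: {log_evidence_ti:.3f}   exact: {exact:.3f}")
```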

  13. [Prevention of occupational accidents with biological material as per Green and Kreuter Model].

    PubMed

    Manetti, Marcela Luisa; da Costa, João Carlos Souza; Marziale, Maria Helena Palucci; Trovó, Marli Elisa

    2006-03-01

    This study aimed at diagnosing the occurrence of occupational accidents deriving from exposure to biological substances among workers of a hospital in São Paulo, Brazil, analyzing the adopted safety measures and elaborating a flowchart of preventive actions according to the Health Promotion Model by Green and Kreuter. It is an exploratory study with data collected electronically from the website REPAT - Electronic Network for the Prevention of Occupational Accidents with biological substances. The strategy used by the hospital did not reduce the injuries. Results were used to elaborate a flowchart of preventive actions in order to improve the workers' quality of life.

  14. Elaboration of technology organizational models of constructing high-rise buildings in plans of construction organization

    NASA Astrophysics Data System (ADS)

    Osipenkova, Irina; Simankina, Tatyana; Syrygina, Taisiia; Lukinov, Vitaliy

    2018-03-01

    This article presents features of the elaboration of technology organizational models of high-rise building construction in technology organizational documentation, using the plan of construction organization as an example. Examples of enhancing the effectiveness of high-rise building construction by developing several options of the organizational and technological plan are examined. High-quality technology organizational documentation makes it possible to increase the competitiveness of construction companies and to reduce the prime cost of construction and assembly works. Emphasis is placed on the necessity to comply with the principle of comprehensiveness of engineering, scientific and research works, development activities and scientific and technical support.

  15. Laser marking as a result of applying reverse engineering

    NASA Astrophysics Data System (ADS)

    Mihalache, Andrei; Nagîţ, Gheorghe; Rîpanu, Marius Ionuţ; Slǎtineanu, Laurenţiu; Dodun, Oana; Coteaţǎ, Margareta

    2018-05-01

    The elaboration of a modern manufacturing technology needs a certain quantum of information concerning the part to be obtained. When it is necessary to elaborate the technology for an existing object, such information could be ensured by using the principles specific to reverse engineering. Essentially, in the case of this method, the analysis of the surfaces and of other characteristics of the part must offer enough information for the elaboration of the part manufacturing technology. On the other hand, it is known that laser marking is a processing method able to ensure the transfer of various inscriptions or drawings onto a part. Sometimes, laser marking could be based on the analysis of an existing object, whose image could be used to generate the same object or an improved object. There are many groups of factors able to affect the results of applying the laser marking process. A theoretical analysis was proposed to show that the heights of triangles obtained by means of CNC marking equipment depend on the width of the line generated by the laser spot on the workpiece surface. An experimental programme was designed and carried out to highlight the influence exerted by the line width and the angle of line intersections on the accuracy of the marking process. By mathematical processing of the experimental results, empirical mathematical models were determined. The power-type model and the graphical representation elaborated on the basis of this model offered an image of the influences exerted by the considered input factors on the marking process accuracy.

  16. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  17. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  18. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
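
    A simplified sketch in the spirit of GMIS, with two deliberate departures from the paper: the posterior samples come from a known conjugate posterior rather than from DREAM, and plain importance sampling is used in place of bridge sampling. It still shows the core idea of fitting a Gaussian mixture to posterior samples and using it as the proposal density for the evidence (here via scikit-learn's GaussianMixture).

```python
# Minimal sketch (conjugate toy problem, simplified estimator): model evidence by
# importance sampling with a Gaussian-mixture proposal fitted to posterior samples.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
y = rng.normal(0.5, 1.0, size=30)            # data: y_i ~ N(theta, 1), prior theta ~ N(0, 2)
prior_sd = 2.0

def log_prior(theta): return stats.norm.logpdf(theta, 0.0, prior_sd)
def log_lik(theta):   return np.sum(stats.norm.logpdf(y[None, :], theta[:, None], 1.0), axis=1)

# Posterior samples (closed form here; in practice they would come from DREAM or MCMC):
post_var = 1.0 / (y.size + 1.0 / prior_sd**2)
post_samples = rng.normal(post_var * y.sum(), np.sqrt(post_var), size=5000)

# Fit a Gaussian mixture to the posterior samples and use it as the importance density:
gm = GaussianMixture(n_components=3, random_state=0).fit(post_samples[:, None])
theta_q = gm.sample(20000)[0].ravel()
log_q = gm.score_samples(theta_q[:, None])
log_w = log_lik(theta_q) + log_prior(theta_q) - log_q
log_evidence = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Exact evidence for the conjugate model, for comparison:
exact = stats.multivariate_normal.logpdf(
    y, mean=np.zeros(y.size), cov=np.eye(y.size) + prior_sd**2 * np.ones((y.size, y.size)))
print(f"importance-sampling estimate: {log_evidence:.3f}   exact: {exact:.3f}")
```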

  19. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
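
    The ABC rejection step that the design utility is built on can be sketched in a few lines; the toy susceptible-infected simulator below is an assumed stand-in, not one of the paper's epidemic or macroparasite models.

```python
# Minimal sketch (toy SI-type outbreak model, illustrative settings): ABC rejection with
# pre-computed simulations, as used to form the approximate posterior without likelihoods.
import numpy as np

rng = np.random.default_rng(8)

def simulate_outbreak(beta, n_days=12, i0=5, pop=1000):
    """Stochastic susceptible-infected toy model; returns the final number infected."""
    s, i = pop - i0, i0
    for _ in range(n_days):
        new = rng.binomial(s, 1 - np.exp(-beta * i / pop))
        s, i = s - new, i + new
    return i

observed = simulate_outbreak(0.4)            # pretend this is the observed summary

# ABC rejection: draw from the prior, simulate, keep draws close to the observation.
prior_draws = rng.uniform(0.05, 1.0, size=20000)
sims = np.array([simulate_outbreak(b) for b in prior_draws])
tol = 20
accepted = prior_draws[np.abs(sims - observed) <= tol]
print(f"accepted {accepted.size} draws; ABC posterior mean of beta = {accepted.mean():.3f}")
```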

  20. Poisson point process modeling for polyphonic music transcription.

    PubMed

    Peeling, Paul; Li, Chung-fai; Godsill, Simon

    2007-04-01

    Peaks detected in the frequency domain spectrum of a musical chord are modeled as realizations of a nonhomogeneous Poisson point process. When several notes are superimposed to make a chord, the processes for individual notes combine to give another Poisson process, whose likelihood is easily computable. This avoids a data association step linking individual harmonics explicitly with detected peaks in the spectrum. The likelihood function is ideal for Bayesian inference about the unknown note frequencies in a chord. Here, maximum likelihood estimation of fundamental frequencies shows very promising performance on real polyphonic piano music recordings.
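
    A minimal sketch of the likelihood described here, under an assumed intensity parameterization: the nonhomogeneous Poisson log-likelihood of detected peak frequencies is the sum of log-intensities at the peaks minus the integrated intensity over the band, and maximizing it over a grid of candidate fundamentals recovers the note.

```python
# Minimal sketch (assumed intensity model, not the authors' parameterization): maximum
# likelihood over candidate fundamentals for spectral peaks modeled as a nonhomogeneous
# Poisson point process with Gaussian bumps at the harmonics.
import numpy as np

rng = np.random.default_rng(9)
f_max, n_harm = 4000.0, 8                    # analysis band (Hz) and harmonics per note

def intensity(freqs, f0, peak_rate=1.0, width=8.0, background=1e-3):
    """Poisson intensity over frequency: background plus Gaussian bumps at harmonics."""
    lam = np.full_like(np.asarray(freqs, dtype=float), background)
    for h in range(1, n_harm + 1):
        lam += peak_rate * np.exp(-0.5 * ((freqs - h * f0) / width) ** 2) / (width * np.sqrt(2 * np.pi))
    return lam

def log_likelihood(peaks, f0, grid=np.linspace(1.0, f_max, 8000)):
    """log L = sum_i log lambda(x_i) - integral of lambda over the analysis band."""
    return np.sum(np.log(intensity(peaks, f0))) - np.trapz(intensity(grid, f0), grid)

# Simulated peaks for a note at 220 Hz (one jittered peak per harmonic):
true_f0 = 220.0
peaks = np.array([h * true_f0 + rng.normal(0, 3.0) for h in range(1, n_harm + 1)])

candidates = np.arange(80.0, 1000.0, 1.0)
scores = [log_likelihood(peaks, f0) for f0 in candidates]
print("ML fundamental estimate:", candidates[int(np.argmax(scores))], "Hz")
```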

  1. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  2. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  3. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  4. Modeling abundance effects in distance sampling

    USGS Publications Warehouse

    Royle, J. Andrew; Dawson, D.K.; Bates, S.

    2004-01-01

    Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (positive effect of understory cover, negative effect on basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
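
    A minimal sketch of an integrated likelihood of this kind, using a simplified line-transect setup with a half-normal detection function rather than the point-transect protocol of the Ovenbird example: local abundance is Poisson with a log-linear covariate effect, abundance is marginalized out analytically (Poisson thinning), and the remaining terms are a Poisson count part and a detected-distance part.

```python
# Minimal sketch (simplified line-transect setup, hypothetical covariate and parameters):
# integrated distance-sampling likelihood with a Poisson abundance regression and a
# half-normal detection function.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(10)
n_sites, w = 150, 100.0                      # sites and truncation distance (m)
cover = rng.uniform(0, 1, n_sites)           # hypothetical understory-cover covariate

# Simulate data from known parameters:
beta0_true, beta1_true, sigma_true = 1.0, 1.2, 40.0
N = rng.poisson(np.exp(beta0_true + beta1_true * cover))        # true local abundance
dists, site_of = [], []
for i in range(n_sites):
    d = rng.uniform(0, w, N[i])
    detected = rng.uniform(size=N[i]) < np.exp(-d**2 / (2 * sigma_true**2))
    dists.append(d[detected]); site_of.append(np.full(detected.sum(), i))
dists = np.concatenate(dists)
counts = np.bincount(np.concatenate(site_of), minlength=n_sites)

def neg_log_lik(theta):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Average detection probability over distances uniform on (0, w):
    p = np.sqrt(np.pi / 2) * sigma / w * (2 * stats.norm.cdf(w / sigma) - 1)
    lam = np.exp(beta0 + beta1 * cover)
    count_part = -np.sum(stats.poisson.logpmf(counts, lam * p))       # abundance integrated out
    dist_part = -np.sum(-dists**2 / (2 * sigma**2) - np.log(p * w))   # detected-distance density
    return count_part + dist_part

fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0, np.log(30.0)], method="Nelder-Mead")
print("beta0, beta1, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```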

  5. Modeling regional variation in riverine fish biodiversity in the Arkansas-White-Red River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schweizer, Peter E; Jager, Yetta

    The patterns of biodiversity in freshwater systems are shaped by biogeography, environmental gradients, and human-induced factors. In this study, we developed empirical models to explain fish species richness in subbasins of the Arkansas White Red River basin as a function of discharge, elevation, climate, land cover, water quality, dams, and longitudinal position. We used information-theoretic criteria to compare generalized linear mixed models and identified well-supported models. Subbasin attributes that were retained as predictors included discharge, elevation, number of downstream dams, percent forest, percent shrubland, nitrate, total phosphorus, and sediment. The random component of our models, which assumed a negative binomial distribution, included spatial correlation within larger river basins and overdispersed residual variance. This study differs from previous biodiversity modeling efforts in several ways. First, obtaining likelihoods for negative binomial mixed models, and thereby avoiding reliance on quasi-likelihoods, has only recently become practical. We found the ranking of models based on these likelihood estimates to be more believable than that produced using quasi-likelihoods. Second, because we had access to a regional-scale watershed model for this river basin, we were able to include model-estimated water quality attributes as predictors. Thus, the resulting models have potential value as tools with which to evaluate the benefits of water quality improvements to fish.

  6. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and account for features typical of such data such as a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single equation and a 2-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One and 2-part Beta regression models provide flexible approaches to regress the outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcomes distribution. This work will provide applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
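
    A minimal Python sketch of the single-equation Beta regression idea discussed above, using the mean-precision parameterization with a logit link and maximum likelihood; the simulated data, variable names, and starting values are illustrative and do not reproduce the article's estimators or the EVALUATE trial analysis:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit, gammaln

      rng = np.random.default_rng(1)
      n = 500
      X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
      true_beta, true_phi = np.array([0.5, 1.0]), 10.0
      mu = expit(X @ true_beta)
      y = rng.beta(mu * true_phi, (1.0 - mu) * true_phi)      # outcomes strictly inside (0, 1)

      def neg_loglik(params):
          beta, log_phi = params[:-1], params[-1]
          phi = np.exp(log_phi)                               # keep the precision positive
          m = expit(X @ beta)
          a, b = m * phi, (1.0 - m) * phi
          # Beta(a, b) log-density summed over observations
          ll = (gammaln(a + b) - gammaln(a) - gammaln(b)
                + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))
          return -ll.sum()

      fit = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
      beta_hat, phi_hat = fit.x[:-1], np.exp(fit.x[-1])
      print("beta_hat:", beta_hat, "phi_hat:", phi_hat)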

  7. Measurement of CIB power spectra with CAM-SPEC from Planck HFI maps

    NASA Astrophysics Data System (ADS)

    Mak, Suet Ying; Challinor, Anthony; Efstathiou, George; Lagache, Guilaine

    2015-08-01

    We present new measurements of the cosmic infrared background (CIB) anisotropies and its first likelihood using Planck HFI data at 353, 545, and 857 GHz. The measurements are based on cross-frequency power spectra and likelihood analysis using the CAM-SPEC package, rather than map-based template removal of foregrounds as done in previous Planck CIB analysis. We construct the likelihood of the CIB temperature fluctuations, an extension of the CAM-SPEC likelihood used in CMB analysis to higher frequencies, and use it to derive the best estimate of the CIB power spectrum over three decades in multipole moment, l, covering 50 ≤ l ≤ 2500. We adopt parametric models of the CIB and foreground contaminants (Galactic cirrus, infrared point sources, and cosmic microwave background anisotropies), and calibrate the dataset uniformly across frequencies with known Planck beam and noise properties in the likelihood construction. We validate our likelihood through simulations and an extensive suite of consistency tests, and assess the impact of instrumental and data selection effects on the final CIB power spectrum constraints. Two approaches are developed for interpreting the CIB power spectrum. The first approach is based on a simple parametric model which models the cross-frequency power using amplitudes, correlation coefficients, and known multipole dependence. The second approach is based on physical models for galaxy clustering and the evolution of infrared emission of galaxies. The new approaches fit all auto- and cross-power spectra very well, with a best-fit χ²ν = 1.04 (parametric model). Using the best foreground solution, we find that the cleaned CIB power spectra are in good agreement with previous Planck and Herschel measurements.

  8. Multimodal Likelihoods in Educational Assessment: Will the Real Maximum Likelihood Score Please Stand up?

    ERIC Educational Resources Information Center

    Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike

    2011-01-01

    It has been known for some time that item response theory (IRT) models may exhibit a likelihood function of a respondent's ability which may have multiple modes, flat modes, or both. These conditions, often associated with guessing of multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
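
    The multimodality at issue can be reproduced with a short Python sketch: under a three-parameter logistic (3PL) model with nonzero guessing parameters, the ability log-likelihood for an aberrant response pattern (a relatively easy item missed while several hard items are answered correctly) can have more than one interior mode. The item parameters and responses below are invented for illustration only:

      import numpy as np

      def p_correct(theta, a, b, c):
          # 3PL item response function: guessing floor c plus a scaled logistic term.
          return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

      # Invented item parameters: discrimination a, difficulty b, guessing c.
      a = np.array([2.0, 0.8, 2.5, 2.5, 2.5])
      b = np.array([-1.5, -0.5, 1.5, 2.0, 2.5])
      c = np.full(5, 0.25)
      # Aberrant pattern: the second (easy) item is missed, the hard items are answered correctly.
      u = np.array([1, 0, 1, 1, 1])

      theta_grid = np.linspace(-4.0, 4.0, 801)
      p = p_correct(theta_grid[:, None], a, b, c)                 # grid points x items
      loglik = (u * np.log(p) + (1 - u) * np.log(1.0 - p)).sum(axis=1)

      # Count strict interior local maxima of the ability log-likelihood on the grid.
      interior_max = (loglik[1:-1] > loglik[:-2]) & (loglik[1:-1] > loglik[2:])
      print("interior local maxima:", int(interior_max.sum()))    # two modes for this pattern
      print("grid MLE at theta =", round(float(theta_grid[np.argmax(loglik)]), 2))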

  9. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
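
    For reference, the zero-or-one-inflated beta (zoib) density the review is concerned with can be written as a mixture of boundary point masses and a continuous Beta component in the mean-precision parameterization; the regression structure places link functions on p0, p1, μ, and φ, which is omitted here:

      f(y) \;=\; p_0\,\mathbf{1}\{y = 0\} \;+\; p_1\,\mathbf{1}\{y = 1\} \;+\; (1 - p_0 - p_1)\, \frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma\{(1-\mu)\phi\}}\, y^{\mu\phi - 1} (1 - y)^{(1-\mu)\phi - 1}\, \mathbf{1}\{0 < y < 1\}.

    Replacing boundary observations with values close to zero or one forces the Beta component to absorb them, which is the ad hoc practice the simulation studies show can bias estimates.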

  10. Indeterminate lung nodules in cancer patients: pretest probability of malignancy and the role of 18F-FDG PET/CT.

    PubMed

    Evangelista, Laura; Panunzio, Annalori; Polverosi, Roberta; Pomerri, Fabio; Rubello, Domenico

    2014-03-01

    The purpose of this study was to determine the likelihood of malignancy for indeterminate lung nodules identified on CT, comparing two standardized models with (18)F-FDG PET/CT. Fifty-nine cancer patients with indeterminate lung nodules (solid tumors; diameter, ≥5 mm) on CT had FDG PET/CT for lesion characterization. Mayo Clinic and Veterans Affairs Cooperative Study models of likelihood of malignancy were applied to solitary pulmonary nodules. High probability of malignancy was assigned a priori for multiple nodules. Low (<5%), intermediate (5-60%), and high (>60%) pretest malignancy probabilities were analyzed separately. Patients were reclassified with PET/CT. Histopathology or 2-year imaging follow-up established diagnosis. Outcome-based reclassification differences were defined as net reclassification improvement. A null hypothesis of asymptotic test was applied. Thirty-one patients had histology-proven malignancy. PET/CT was true-positive in 24 and true-negative in 25 cases. Negative predictive value was 78% and positive predictive value was 89%. On the basis of the Mayo Clinic model (n=31), 18 patients had low, 12 had intermediate, and one had high pretest likelihood; on the basis of the Veterans Affairs model (n=26), 5 patients had low, 20 had intermediate, and one had high pretest likelihood. Because of multiple lung nodules, 28 patients were classified as having high malignancy risk. PET/CT showed 32 negative and 27 positive scans. Net reclassification improvements were 0.95 and 1.6 for the Mayo Clinic and Veterans Affairs models, respectively (both p<0.0001). Fourteen of 31 (45.2%) and 12 of 26 (46.2%) patients with low and intermediate pretest likelihood, respectively, had positive findings on PET/CT for the Mayo Clinic and Veterans Affairs models, respectively. Of 15 patients with high pretest likelihood and negative findings on PET/CT, 13 (86.7%) did not have lung malignancy. PET/CT improves stratification of cancer patients with indeterminate pulmonary nodules. A substantial number of patients considered at low and intermediate pretest likelihood of malignancy with histology-proven lung malignancy showed abnormal PET/CT findings.

  11. What Matters in Scientific Explanations: Effects of Elaboration and Content

    PubMed Central

    Rottman, Benjamin M.; Keil, Frank C.

    2011-01-01

    Given the breadth and depth of available information, determining which components of an explanation are most important is a crucial process for simplifying learning. Three experiments tested whether people believe that components of an explanation with more elaboration are more important. In Experiment 1, participants read separate and unstructured components that comprised explanations of real-world scientific phenomena, rated the components on their importance for understanding the explanations, and drew graphs depicting which components elaborated on which other components. Participants gave higher importance scores for components that they judged to be elaborated upon by other components. Experiment 2 demonstrated that experimentally increasing the amount of elaboration of a component increased the perceived importance of the elaborated component. Furthermore, Experiment 3 demonstrated that elaboration increases the importance of the elaborated information by providing insight into understanding the elaborated information; information that was too technical to provide insight into the elaborated component did not increase the importance of the elaborated component. While learning an explanation, people piece together the structure of elaboration relationships between components and use the insight provided by elaboration to identify important components. PMID:21924709

  12. Cross-validation to select Bayesian hierarchical models in phylogenetics.

    PubMed

    Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C

    2016-05-26

    Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.

  13. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
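
    The Pólya-Gamma augmentation mentioned above rests on the integral identity of Polson, Scott, and Windle; stated generically (how a, b, and ψ map onto the Glauber-dynamics likelihood is specific to the paper and not reproduced here):

      \frac{\bigl(e^{\psi}\bigr)^{a}}{\bigl(1 + e^{\psi}\bigr)^{b}} \;=\; 2^{-b}\, e^{\kappa \psi} \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\; p_{\mathrm{PG}}(\omega \mid b, 0)\, \mathrm{d}\omega, \qquad \kappa = a - \tfrac{b}{2},

    so that, conditional on the latent ω, the exponent is quadratic in ψ — and hence in the couplings whenever ψ is linear in them, which is what makes the EM and variational updates analytical.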

  14. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
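
    In generic notation (not necessarily the authors' exact formulation), the double penalized criterion combines Firth's Jeffreys-prior penalty, involving the Fisher information I(β), with a ridge penalty governed by a tuning parameter λ:

      \ell^{*}(\beta) \;=\; \ell(\beta) \;+\; \tfrac{1}{2}\,\log\bigl|I(\beta)\bigr| \;-\; \tfrac{\lambda}{2}\,\beta^{\top}\beta .

    Maximizing ℓ*(β) counteracts separation (through the Firth term, which keeps estimates finite) and multicollinearity (through the ridge term, which stabilizes them).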

  15. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models and it was shown to reduce bias and the non-existence problems. The ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  16. The development of an automatic recognition system for earmark and earprint comparisons.

    PubMed

    Junod, Stéphane; Pasquier, Julien; Champod, Christophe

    2012-10-10

    The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners to form their conclusions. Typically, there is a paucity of research providing guidance as to the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key-points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark has been left by the donor that gave the model and conversely in cases when it is established that the mark originates from a different source). To assess the system performance, a first dataset containing 1229 donors elaborated during the FearID research project was used. Based on these data, for mark-to-print comparisons, the system performed with an equal error rate (EER) of 2.3% and about 88% of marks are found in the first 3 positions of a hitlist. When performing print-to-print transactions, results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
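
    A minimal Python sketch of the proximity score described above, a normalized 2D correlation coefficient between a pre-aligned mark image and a model image; the synthetic arrays, the absence of the alignment step, and the omission of the likelihood-ratio calibration are all simplifications:

      import numpy as np

      def normalized_2d_correlation(mark: np.ndarray, model: np.ndarray) -> float:
          """Pearson-style correlation between two equally sized, pre-aligned images."""
          a = mark.astype(float) - mark.mean()
          b = model.astype(float) - model.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      rng = np.random.default_rng(42)
      model_img = rng.normal(size=(64, 64))
      mark_img = model_img + 0.5 * rng.normal(size=(64, 64))   # same-source mark with noise
      other_img = rng.normal(size=(64, 64))                    # different-source mark

      print("same-source score:     ", round(normalized_2d_correlation(mark_img, model_img), 3))
      print("different-source score:", round(normalized_2d_correlation(other_img, model_img), 3))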

  17. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
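
    A minimal Python sketch of the damped random walk (DRW) likelihood used to score source light curves: fluxes are treated as multivariate normal with covariance C(Δt) = σ² exp(−|Δt|/τ) plus measurement variance on the diagonal. The simulated light curve and parameter values are illustrative, and the microlensing analysis itself is not shown:

      import numpy as np
      from scipy.stats import multivariate_normal

      def drw_loglike(t, y, yerr, sigma, tau, mean):
          """Gaussian log-likelihood of a light curve under a damped random walk model."""
          dt = np.abs(t[:, None] - t[None, :])
          cov = sigma**2 * np.exp(-dt / tau) + np.diag(yerr**2)
          return multivariate_normal(mean=np.full_like(y, mean), cov=cov).logpdf(y)

      # Simulate an irregularly sampled light curve from the DRW covariance itself.
      rng = np.random.default_rng(3)
      t = np.sort(rng.uniform(0.0, 1000.0, 120))          # observation times (days)
      true_sigma, true_tau, true_mean = 0.2, 150.0, 18.0  # illustrative values
      dt = np.abs(t[:, None] - t[None, :])
      cov = true_sigma**2 * np.exp(-dt / true_tau)
      y = rng.multivariate_normal(np.full(t.size, true_mean), cov)
      yerr = np.full(t.size, 0.02)
      y += rng.normal(0.0, yerr)

      print("log L at true parameters:  ", drw_loglike(t, y, yerr, true_sigma, true_tau, true_mean))
      print("log L at a short timescale:", drw_loglike(t, y, yerr, true_sigma, 5.0, true_mean))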

  18. Analysis of hourly crash likelihood using unbalanced panel data mixed logit model and real-time driving environmental big data.

    PubMed

    Chen, Feng; Chen, Suren; Ma, Xiaoxiang

    2018-06-01

    Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. It is difficult for traditional crash frequency models developed at large temporal scales to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving environmental information in crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic condition, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that the traffic speed, traffic volume, curvature and chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that there are a number of factors related to crash likelihood on I-25. Specifically, weekend indicator, November indicator, low speed limit and long remaining service life of rutting indicator are found to increase crash likelihood, while 5-am indicator and number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by the real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.

  19. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  20. Model selection and parameter estimation in structural dynamics using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Ben Abdessalem, Anis; Dervilis, Nikolaos; Wagg, David; Worden, Keith

    2018-01-01

    This paper will introduce the use of the approximate Bayesian computation (ABC) algorithm for model selection and parameter estimation in structural dynamics. ABC is a likelihood-free method typically used when the likelihood function is either intractable or cannot be expressed in closed form. To circumvent the evaluation of the likelihood function, simulation from a forward model is at the core of the ABC algorithm. The algorithm offers the possibility to use different metrics and summary statistics representative of the data to carry out Bayesian inference. The efficacy of the algorithm in structural dynamics is demonstrated through three different illustrative examples of nonlinear system identification: cubic and cubic-quintic models, the Bouc-Wen model and the Duffing oscillator. The obtained results suggest that ABC is a promising alternative to deal with model selection and parameter estimation issues, specifically for systems with complex behaviours.
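
    A minimal Python sketch of the likelihood-free idea: ABC rejection sampling draws parameters from the prior, runs the forward model, and keeps draws whose summary statistics land within a tolerance of the observed summaries. The toy forward model below (a normal distribution with unknown mean and spread) merely stands in for the cubic, Bouc-Wen, and Duffing simulators of the paper:

      import numpy as np

      rng = np.random.default_rng(7)

      # "Observed" data generated by a forward model with unknown parameters.
      true_mu, true_sigma = 2.0, 0.5
      observed = rng.normal(true_mu, true_sigma, size=200)
      obs_summary = np.array([observed.mean(), observed.std()])

      def forward_model(mu, sigma, n=200):
          return rng.normal(mu, sigma, size=n)

      def distance(sim):
          # Euclidean distance between simulated and observed summary statistics.
          return np.linalg.norm(np.array([sim.mean(), sim.std()]) - obs_summary)

      # ABC rejection: sample from the priors, keep parameter draws that simulate "close" data.
      n_draws, tolerance = 20_000, 0.2
      mu_prior = rng.uniform(-5.0, 5.0, n_draws)
      sigma_prior = rng.uniform(0.01, 3.0, n_draws)
      accepted = np.array([(m, s) for m, s in zip(mu_prior, sigma_prior)
                           if distance(forward_model(m, s)) < tolerance])

      print("accepted draws:", len(accepted), "of", n_draws)
      if len(accepted):
          print("approximate posterior mean of (mu, sigma):", accepted.mean(axis=0))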

  1. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
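
    A minimal Python sketch of the "central problem" described above — drawing from the prior subject to the constraint L(θ) > L*, the current likelihood threshold. The simplest (and least efficient) scheme, plain rejection from the full prior, is used here with a placeholder Gaussian likelihood and a uniform box prior, not the lighthouse or mixture likelihoods of the paper:

      import numpy as np

      rng = np.random.default_rng(11)

      def log_likelihood(theta):
          # Placeholder: an isotropic Gaussian likelihood centred at the origin.
          return -0.5 * np.sum(theta**2)

      def sample_prior(n):
          # Uniform prior over the box [-5, 5]^2.
          return rng.uniform(-5.0, 5.0, size=(n, 2))

      def draw_above_threshold(loglike_threshold, max_tries=100_000):
          """One likelihood-restricted prior draw: reject until L(theta) > L*."""
          for _ in range(max_tries):
              theta = sample_prior(1)[0]
              if log_likelihood(theta) > loglike_threshold:
                  return theta
          raise RuntimeError("no point found above the likelihood threshold")

      # A skeletal nested-sampling loop: replace the worst live point at each iteration.
      live = sample_prior(100)
      live_logl = np.array([log_likelihood(t) for t in live])
      for iteration in range(300):
          worst = int(np.argmin(live_logl))
          threshold = live_logl[worst]
          live[worst] = draw_above_threshold(threshold)
          live_logl[worst] = log_likelihood(live[worst])

      print("highest log-likelihood among live points:", live_logl.max())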

  2. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  3. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  4. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  5. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    EPA Science Inventory

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  6. Daughter-Initiated Cancer Screening Appeals to Mothers.

    PubMed

    Mosavel, M; Genderson, M W

    2016-12-01

    Youth-initiated health interventions may provide a much needed avenue for intergenerational dissemination of health information among families who bear the greatest burden from unequal distribution of morbidity and mortality. The findings presented in this paper are from a pilot study of the feasibility and impact of female youth-initiated messages (mostly daughters) encouraging adult female relatives (mostly mothers) to obtain cancer screening within low-income African American families living in a Southern US state. Results are compared between an intervention and control group. Intervention group youth (n = 22) were exposed to a 60-min interactive workshop where they were assisted to prepare a factual and emotional appeal to their adult relative to obtain specific screening. The face-to-face workshops were guided by the Elaboration Likelihood Model (ELM) and the Theory of Planned Behavior (TPB). Control group girls (n = 18) were only provided with a pamphlet with information about cancer screening and specific steps about how to encourage their relative to obtain screening. Intervention youth (86 %) and adults (82 %) reported that the message was shared while 71 % in the control group reported sharing or receiving the message. Importantly, more women in the intervention group reported that they obtained a screen (e.g., mammogram, Pap smear) directly based on the youth's appeal. These findings can have major implications for youth-initiated health promotion efforts, especially among hard-to-reach populations.

  7. A GIS/Remote Sensing-based methodology for groundwater potentiality assessment in Tirnavos area, Greece

    NASA Astrophysics Data System (ADS)

    Oikonomidis, D.; Dimogianni, S.; Kazakis, N.; Voudouris, K.

    2015-06-01

    The aim of this paper is to assess groundwater potentiality by combining Geographic Information Systems and Remote Sensing with data obtained from the field, as an additional tool for hydrogeological research. The present study was elaborated in the broader area of Tirnavos, covering 419.4 km2. The study area is located in Thessaly (central Greece) and is crossed by two rivers, Pinios and Titarisios. Agriculture is one of the main elements of Thessaly's economy, resulting in intense agricultural activity and consequently increased exploitation of groundwater resources. Geographic Information Systems (GIS) and Remote Sensing (RS) were used to create a map that depicts the likelihood of groundwater occurrence in five classes of groundwater potentiality, ranging from very high to very low. The extraction of this map is based on the study of input data such as rainfall, potential recharge, lithology, lineament density, slope, drainage density and depth to groundwater. Weights were assigned to all these factors according to their relevance to groundwater potential, and eventually a map based on a weighted spatial modeling system was created. Furthermore, a groundwater quality suitability map was produced by overlaying the groundwater potentiality map with the map showing the potential zones for drinking groundwater in the study area. The results provide significant information, and the maps could be used by local authorities for groundwater exploitation and management.

  8. Development of radio dramas for health communication pilot intervention in Canadian Inuit communities.

    PubMed

    Racicot-Matta, Cassandra; Wilcke, Markus; Egeland, Grace M

    2016-03-01

    A mixed-methods approach was used to develop a culturally appropriate health intervention over radio within the Inuit community of Pangnirtung, Nunavut (NU), Canada. The radio dramas were developed, recorded and tested pre-intervention through the use of Participatory Process and informed by the extended elaboration likelihood model (EELM) for education-communication. The radio messages were tested in two focus groups (n = 4 and n = 5) to determine fidelity of the radio dramas to the EELM theory. Focus group feedback identified that revisions needed to be made to two characteristics required of educational programmes by the EELM: first, the quality of the production was improved by adding Inuit youth-recorded music and, second, the homophily (relatability of characters) of the radio dramas was improved by re-recording the dramas with voices of local youth who had been trained in media communication studies. These adjustments would not have been implemented had pre-intervention testing of the radio dramas not taken place and could have reduced the effectiveness of the overall intervention. Therefore, it is highly recommended that media tools for health communication/education be tested with the intended target audience before commencement of programmes. Participatory Process was identified to be a powerful tool in the development and sustainability of culturally appropriate community health programming. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Daughter-Initiated Cancer Screening Appeals to Mothers

    PubMed Central

    Mosavel, Maghboeba; Genderson, Maureen Wilson

    2015-01-01

    Youth-initiated health interventions may provide a much needed avenue for intergenerational dissemination of health information among families who bear the greatest burden from unequal distribution of morbidity and mortality. The findings presented in this paper are from a pilot study of the feasibility and impact of female youth-initiated messages (mostly daughters) encouraging adult female relatives (mostly mothers) to obtain cancer screening within low income African American families living in a Southern US state. Results are compared between an intervention and control group. Intervention group youth (n=22) were exposed to a 60-minute interactive workshop where they were assisted to prepare a factual and emotional appeal to their adult relative to obtain specific screening. The face-to-face workshops were guided by the Elaboration Likelihood Model (ELM) and the Theory of Planned Behavior (TPB). Control group girls (n=18) were only provided with a pamphlet with information about cancer screening and specific steps about how to encourage their relative to obtain screening. Intervention youth (86%) and adults (82%) reported that the message was shared while 71% in the control group reported sharing or receiving the message. Importantly, more women in the intervention group reported that they obtained a screen (e.g., mammogram, Pap smear) directly based on the youth's appeal. These findings can have major implications for youth-initiated health promotion efforts, especially among hard-to-reach populations. PMID:26590969

  10. Designing prenatal care messages for low-income Mexican women.

    PubMed Central

    Alcalay, R; Ghee, A; Scrimshaw, S

    1993-01-01

    Communication theories and research data were used to design cross-cultural health education messages. A University of California Los Angeles-Universidad Autonoma in Tijuana, Mexico, research team used the methods of ethnographic and survey research to study behaviors, attitudes, and knowledge concerning prenatal care of a sample of pregnant low-income women living in Tijuana. This audience provided information that served as a framework for a series of messages to increase awareness and change prenatal care behaviors. The message design process was guided by persuasion theories that included Petty and Cacioppo's elaboration likelihood model, McGuire's persuasion matrix, and Bandura's social learning theory. The results from the research showed that poor women in Tijuana tend to delay or not seek prenatal care. They were not aware of symptoms that could warn of pregnancy complications. Their responses also revealed pregnant women's culturally specific beliefs and behaviors regarding pregnancy. After examination of these and other results from the study, prenatal care messages about four topics were identified as the most relevant to communicate to this audience: health services use, the mother's weight gain, nutrition and anemia, and symptoms of high-risk complications during pregnancy. A poster, a calendar, a brochure, and two radio songs were produced and pretested in focus groups with low-income women in Tijuana. Each medium included one or more messages addressing informational, attitudinal, or behavioral needs, or all three, of the target population. PMID:8497574

  11. Direct-to-consumer advertising via the Internet: the role of Web site design.

    PubMed

    Sewak, Saurabh S; Wilkin, Noel E; Bentley, John P; Smith, Mickey C

    2005-06-01

    Recent attempts to propose criteria for judging the quality of pharmaceutical and healthcare Web sites do not distinguish between attributes of Web site design related to content and other attributes not related to the content. The Elaboration Likelihood Model from persuasion literature is used as a framework for investigating the effects of Web site design on consequents like attitude and knowledge acquisition. A between-subjects, 2 (high or low involvement)x2 (Web site designed with high or low aspects of visual appeal) factorial design was used in this research. College students were randomly assigned to these treatment groups yielding a balanced design with 29 observations per treatment cell. Analysis of variance results for the effects of involvement and Web site design on attitude and knowledge indicated that the interaction between the independent variables was not significant in both analyses. Examination of main effects revealed that participants who viewed the Web site with higher visual appeal actually had slightly lower knowledge scores (6.32) than those who viewed the Web site with lower visual appeal (7.03, F(1,112)=3.827, P=.053). Results of this research seem to indicate that aspects of Web site design (namely aspects of visual appeal and quality) may not play a role in attaining desired promotional objectives, which can include development of favorable attitudes toward the product and facilitating knowledge acquisition.

  12. Reading About the Flu Online: How Health-Protective Behavioral Intentions Are Influenced by Media Multitasking, Polychronicity, and Strength of Health-Related Arguments.

    PubMed

    Kononova, Anastasia; Yuan, Shupei; Joo, Eunsin

    2017-06-01

    As health organizations increasingly use the Internet to communicate medical information and advice (Shortliffe et al., 2000; World Health Organization, 2013), studying factors that affect health information processing and health-protective behaviors becomes extremely important. The present research applied the elaboration likelihood model of persuasion to explore the effects of media multitasking, polychronicity (preference for multitasking), and strength of health-related arguments on health-protective behavioral intentions. Participants read an online article about influenza that included strong and weak suggestions to engage in flu-preventive behaviors. In one condition, participants read the article and checked Facebook; in another condition, they were exposed only to the article. Participants expressed greater health-protective behavioral intentions in the media multitasking condition than in the control condition. Strong arguments were found to elicit more positive behavioral intentions than weak arguments. Moderate and high polychronics showed greater behavioral intentions than low polychronics when they read the article in the multitasking condition. The difference in intentions to follow strong and weak arguments decreased for moderate and high polychronics. The results of the present study suggest that health communication practitioners should account for not only media use situations in which individuals typically read about health online but also individual differences in information processing, which puts more emphasis on the strength of health-protective suggestions when targeting light multitaskers.

  13. Data Model as an Architectural View

    DTIC Science & Technology

    2009-10-01

    ...store order-processing system. Logical: the logical data model is an evolution of the conceptual data model towards a data management technology (e.g., ...). ...online store order-processing system at different stages. Perhaps the first draft was elaborated by the architect during discussion of requirements...

  14. Additive hazards regression and partial likelihood estimation for ecological monitoring data across space.

    PubMed

    Lin, Feng-Chang; Zhu, Jun

    2012-01-01

    We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with a homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation in Wisconsin.
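
    In generic notation, the class of models referred to above is the additive hazards specification, in which possibly time-varying covariates Z_i(t) shift the hazard additively rather than multiplicatively:

      \lambda_i\{t \mid Z_i(t)\} \;=\; \lambda_0(t) \;+\; \beta^{\top} Z_i(t),

    with the baseline hazard λ0(t) left in a flexible, unspecified form and the regression coefficients β estimated through the partial-likelihood procedure developed in the paper.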

  15. Insecure Attachment Style and Dysfunctional Sexual Beliefs Predict Sexual Coercion Proclivity in University Men

    PubMed Central

    Dang, Silvain S; Gorzalka, Boris B

    2015-01-01

    Introduction: Past studies have shown an association between low sexual functioning and engaging in sexually coercive behaviors among men. The mechanism of this relationship is not well understood. Moreover, most studies in this area have been done in incarcerated sex offenders. Aims: The aim of the current study was to investigate the role of potential distal predictors of sexual coercion, including insecure attachment style and dysfunctional sexual beliefs, in mediating the relationship between sexual functioning and sexual coercion. The study also seeks to extend past findings to a novel non-forensic population. Methods: Male university students (N = 367) anonymously completed online questionnaires. Main Outcome Measures: Participants completed the Sexual Experiences Survey, Improved Illinois Rape Myth Acceptance Scale, Hostility Towards Women Scale, Likelihood of Rape Item, Experiences in Close Relationships Scale, Dysfunctional Sexual Beliefs Scale, and Brief Sexual Functioning Questionnaire. Results: Sexual functioning was not significantly associated with sexually coercive behaviors in our sample (r = 0.08, P = 0.247), though a significant correlation between sexual functioning and rape myth acceptance was found (r = 0.18, P = 0.007). Path analysis of all variables showed that the likelihood of rape item was the strongest correlate of sexually coercive behaviors (β = 0.34, P < 0.001), while dysfunctional sexual beliefs appeared to mediate the association between anxious attachment and likelihood of rape item score. Anxious (r = −0.27, P = 0.001) and avoidant (r = −0.19, P = 0.004) attachment also correlated significantly with lower sexual functioning. Conclusions: These findings suggest the relationship between sexual functioning and sexual coercion may be less robust than previously reported, and may be due to a shared association with other factors. The results elaborate on the interrelation between attachment style and dysfunctional sexual beliefs as predictors of sexual coercion proclivity, suggesting avenues for further research. PMID:26185675

  16. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102

  17. Come On! Using intervention mapping to help healthy pregnant women achieve healthy weight gain.

    PubMed

    Merkx, Astrid; Ausems, Marlein; de Vries, Raymond; Nieuwenhuijze, Marianne J

    2017-06-01

    Gaining too much or too little weight in pregnancy (according to Institute of Medicine (IOM) guidelines) negatively affects both mother and child, but many women find it difficult to manage their gestational weight gain (GWG). Here we describe the use of the intervention mapping protocol to design 'Come On!', an intervention to promote adequate GWG among healthy pregnant women. We used the six steps of intervention mapping: (i) needs assessment; (ii) formulation of change objectives; (iii) selection of theory-based methods and practical strategies; (iv) development of the intervention programme; (v) development of an adoption and implementation plan; and (vi) development of an evaluation plan. A consortium of users and related professionals guided the process of development. As a result of the needs assessment, two goals for the intervention were formulated: (i) helping healthy pregnant women to stay within the IOM guidelines for GWG; and (ii) getting midwives to adequately support the efforts of healthy pregnant women to gain weight within the IOM guidelines. To reach these goals, change objectives and determinants influencing the change objectives were formulated. Theories used were the Transtheoretical Model, Social Cognitive Theory and the Elaboration Likelihood Model. Practical strategies to use the theories were the foundation for the development of 'Come On!', a comprehensive programme that included a tailored Internet programme for pregnant women, training for midwives, an information card for midwives, and a scheduled discussion between the midwife and the pregnant woman during pregnancy. The programme was pre-tested and evaluated in an effect study.

  18. Evolution of complex fruiting-body morphologies in homobasidiomycetes.

    PubMed Central

    Hibbett, David S; Binder, Manfred

    2002-01-01

    The fruiting bodies of homobasidiomycetes include some of the most complex forms that have evolved in the fungi, such as gilled mushrooms, bracket fungi and puffballs ('pileate-erect') forms. Homobasidiomycetes also include relatively simple crust-like 'resupinate' forms, however, which account for ca. 13-15% of the described species in the group. Resupinate homobasidiomycetes have been interpreted either as a paraphyletic grade of plesiomorphic forms or a polyphyletic assemblage of reduced forms. The former view suggests that morphological evolution in homobasidiomycetes has been marked by independent elaboration in many clades, whereas the latter view suggests that parallel simplification has been a common mode of evolution. To infer patterns of morphological evolution in homobasidiomycetes, we constructed phylogenetic trees from a dataset of 481 species and performed ancestral state reconstruction (ASR) using parsimony and maximum likelihood (ML) methods. ASR with both parsimony and ML implies that the ancestor of the homobasidiomycetes was resupinate, and that there have been multiple gains and losses of complex forms in the homobasidiomycetes. We also used ML to address whether there is an asymmetry in the rate of transformations between simple and complex forms. Models of morphological evolution inferred with ML indicate that the rate of transformations from simple to complex forms is about three to six times greater than the rate of transformations in the reverse direction. A null model of morphological evolution, in which there is no asymmetry in transformation rates, was rejected. These results suggest that there is a 'driven' trend towards the evolution of complex forms in homobasidiomycetes. PMID:12396494

  19. Changes across age groups in self-choice elaboration and incidental memory.

    PubMed

    Toyota, Hiroshi; Tatsumi, Tomoko

    2003-04-01

    This study investigated differences in the effects of a self-choice elaboration and an experimenter-provided elaboration on the incidental memory of 7- to 12-yr.-olds. In a self-choice elaboration condition, 34 second and 25 sixth graders were asked to choose one of the two sentence frames into which each target could fit more congruously, whereas in an experimenter-provided elaboration they were asked to judge the congruity of each target to each frame. In free recall, sixth graders recalled targets in bizarre sentence frames better than second graders in the self-choice elaboration condition. An age difference was not found for the experimenter-provided elaboration. In cued recall, self-choice elaboration led to better performance of sixth graders for recalling targets than an experimenter-provided elaboration in both bizarre and common sentence frames. However, the different types of elaboration did not alter the recall of second graders. These results were interpreted as showing that the effectiveness of a self-choice elaboration depends on the subjects' age and the type of sentence.

  20. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  1. Changes across age groups in self-choice elaboration effects on incidental memory.

    PubMed

    Toyota, Hiroshi; Konishi, Tomoko

    2004-08-01

    The present study investigated age differences in the effects of a self-choice elaboration and an experimenter-provided elaboration on incidental memory. Adults, sixth graders, and second graders chose which of two sentence frames each target fit better in a self-choice elaboration condition. They then judged whether each target made sense in its sentence frame in the experimenter-provided elaboration condition, and then completed free recall tests. Only adults recalled targets with an image sentence better with self-choice elaboration than with experimenter-provided elaboration. However, self-choice elaboration was far superior for the recall of targets with nonimage sentences only for second graders. Thus, the effects of self-choice elaboration were determined both by age and by type of sentence frame.

  2. Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities

    ERIC Educational Resources Information Center

    Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian

    2018-01-01

    In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…

  3. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we define new equations for the prediction of debris-flow likelihood using logistic regression methods. We show that the new logistic regression model outperforms previous models used to predict debris-flow likelihood.
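
    A minimal Python sketch of the kind of logistic-regression likelihood model described above: a few predictors stand in for basin morphology, burn severity, soils, and rainfall intensity, and the fitted model returns a debris-flow likelihood for a design storm. The feature names, coefficients, and data are hypothetical, not the published USGS equations:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      n = 1000

      # Hypothetical basin/storm predictors (stand-ins for the geospatial variables).
      burned_steep_fraction = rng.uniform(0, 1, n)   # proportion of basin burned at high severity on steep slopes
      soil_erodibility = rng.uniform(0.1, 0.6, n)    # soil-erodibility-like property
      rain_intensity = rng.uniform(5, 60, n)         # peak short-duration rainfall intensity (mm/h)

      logit = -6.0 + 4.0 * burned_steep_fraction + 5.0 * soil_erodibility + 0.08 * rain_intensity
      debris_flow = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = np.column_stack([burned_steep_fraction, soil_erodibility, rain_intensity])
      model = LogisticRegression().fit(X, debris_flow)

      # Predicted debris-flow likelihood for one basin under a 40 mm/h design storm.
      new_basin = np.array([[0.7, 0.4, 40.0]])
      print("estimated debris-flow likelihood:", model.predict_proba(new_basin)[0, 1])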

  4. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  5. Explaining the effect of event valence on unrealistic optimism.

    PubMed

    Gold, Ron S; Brown, Mark G

    2009-05-01

    People typically exhibit 'unrealistic optimism' (UO): they believe they have a lower chance of experiencing negative events and a higher chance of experiencing positive events than does the average person. UO has been found to be greater for negative than positive events. This 'valence effect' has been explained in terms of motivational processes. An alternative explanation is provided by the 'numerosity model', which views the valence effect simply as a by-product of a tendency for likelihood estimates pertaining to the average member of a group to increase with the size of the group. Predictions made by the numerosity model were tested in two studies. In each, UO for a single event was assessed. In Study 1 (n = 115 students), valence was manipulated by framing the event either negatively or positively, and participants estimated their own likelihood and that of the average student at their university. In Study 2 (n = 139 students), valence was again manipulated and participants again estimated their own likelihood; additionally, group size was manipulated by having participants estimate the likelihood of the average student in a small, medium-sized, or large group. In each study, the valence effect was found, but was due to an effect on estimates of own likelihood, not the average person's likelihood. In Study 2, valence did not interact with group size. The findings contradict the numerosity model, but are in accord with the motivational explanation. Implications for health education are discussed.

  6. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  7. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
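
    For readers new to the statistic under evaluation, the likelihood ratio difference (G² diff) test refers twice the difference in maximized log-likelihoods between the nested unidimensional and multidimensional fits to a chi-square distribution with degrees of freedom equal to the difference in parameter counts; a minimal Python sketch with placeholder values:

      from scipy.stats import chi2

      # Maximized log-likelihoods of the nested models (placeholder values).
      loglik_unidimensional = -10450.3    # restricted (unidimensional) model
      loglik_multidimensional = -10441.8  # general (multidimensional) model
      df_difference = 2                   # extra parameters in the multidimensional model

      g2_diff = 2.0 * (loglik_multidimensional - loglik_unidimensional)
      p_value = chi2.sf(g2_diff, df_difference)
      print(f"G2 diff = {g2_diff:.2f}, df = {df_difference}, p = {p_value:.4f}")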

  8. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
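
    For readers unfamiliar with how a likelihood arises from the dynamic linear model, the following minimal sketch evaluates the prediction-error (Kalman filter) log-likelihood of a local-level model; the variances and simulated data are illustrative only, and the adaptive and smoothing refinements developed in the report are not implemented.

    ```python
    import numpy as np

    def kalman_loglik(y, q, r, m0=0.0, p0=1e6):
        """Prediction-error decomposition of the log-likelihood for a local-level
        (random walk plus noise) dynamic linear model:
            state:  x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
            obs:    y_t = x_t + v_t,      v_t ~ N(0, r)
        """
        m, p, loglik = m0, p0, 0.0
        for yt in y:
            p_pred = p + q                     # predict
            v = yt - m                         # innovation
            s = p_pred + r                     # innovation variance
            loglik += -0.5 * (np.log(2 * np.pi * s) + v**2 / s)
            k = p_pred / s                     # Kalman gain
            m = m + k * v                      # update state mean
            p = (1 - k) * p_pred               # update state variance
        return loglik

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(0, 0.5, 200))     # latent random walk
    y = x + rng.normal(0, 1.0, 200)            # noisy observations
    print(kalman_loglik(y, q=0.25, r=1.0))
    ```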

  9. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  10. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
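
    A minimal sketch of latent trait estimation under the GPCM is given below; note that it maximizes the plain log-likelihood numerically rather than applying the weighted (WML) correction studied in the article, and the item parameters and responses are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def gpcm_probs(theta, a, b):
        """Category probabilities for one GPCM item with discrimination a and
        step parameters b (length = number of categories - 1)."""
        steps = np.concatenate(([0.0], a * (theta - np.asarray(b))))
        num = np.exp(np.cumsum(steps))
        return num / num.sum()

    def ml_theta(responses, a_params, b_params):
        """Plain maximum likelihood ability estimate (not the weighted correction)."""
        def neg_loglik(theta):
            return -sum(np.log(gpcm_probs(theta, a, b)[x])
                        for x, a, b in zip(responses, a_params, b_params))
        return minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x

    # Hypothetical item parameters for three polytomous items and one response pattern
    a_params = [1.2, 0.8, 1.0]
    b_params = [[-1.0, 0.5], [-0.5, 0.0, 1.0], [0.0, 1.2]]
    print(ml_theta([2, 1, 1], a_params, b_params))
    ```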

  11. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
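
    The stepping-stone idea can be illustrated on a conjugate toy problem in which the power posteriors and the exact marginal likelihood are available in closed form; this is only a sketch of the estimator's logic, not of the phylogenetic implementation discussed above, and all settings are illustrative.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rng = np.random.default_rng(1)
    n = 20
    y = rng.normal(0.7, 1.0, n)                 # data; sampling sd fixed at 1

    # Conjugate toy model: y_i ~ N(mu, 1), prior mu ~ N(0, 1).
    # The power posterior at temperature beta is normal, so it can be sampled directly.
    def power_posterior_draws(beta, size):
        var = 1.0 / (1.0 + beta * n)
        mean = beta * y.sum() * var
        return rng.normal(mean, np.sqrt(var), size)

    def loglik(mu):
        return norm.logpdf(y[:, None], loc=mu, scale=1.0).sum(axis=0)

    # Stepping-stone estimator: log Z = sum_k log E_{beta_k}[ L^(beta_{k+1}-beta_k) ]
    betas = np.linspace(0.0, 1.0, 33) ** 3      # temperatures concentrated near zero
    log_z = 0.0
    for b_lo, b_hi in zip(betas[:-1], betas[1:]):
        ll = loglik(power_posterior_draws(b_lo, 5000))
        shift = ll.max()
        log_z += np.log(np.mean(np.exp((b_hi - b_lo) * (ll - shift)))) + (b_hi - b_lo) * shift

    exact = multivariate_normal(mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n))).logpdf(y)
    print(f"stepping-stone: {log_z:.3f}   exact: {exact:.3f}")
    ```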

  12. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case that considers four alternative models postulated from different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric-mean method. The thermodynamic method is general and can be used for a wide range of environmental problems in model uncertainty quantification.
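
    The heating-coefficient idea can be sketched on a conjugate toy problem (normal data, normal prior on the mean) in which the expectation of the log-likelihood under each heated posterior is available analytically, so the log marginal likelihood is recovered by integrating over the heating coefficient; the groundwater application itself is not reproduced, and the toy data are made up.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # Conjugate toy model: y_i ~ N(mu, 1), prior mu ~ N(0, 1). At heating
    # coefficient beta the power posterior of mu is N(m_beta, v_beta), so
    # E_beta[log likelihood] has a closed form and the log marginal likelihood
    # is its integral over beta in [0, 1] (thermodynamic integration).
    rng = np.random.default_rng(4)
    n = 15
    y = rng.normal(0.5, 1.0, n)

    def expected_loglik(beta):
        v = 1.0 / (1.0 + beta * n)              # power-posterior variance of mu
        m = beta * y.sum() * v                  # power-posterior mean of mu
        return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - m) ** 2 + v)

    betas = np.linspace(0.0, 1.0, 201)
    log_z = np.trapz([expected_loglik(b) for b in betas], betas)

    exact = multivariate_normal(mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n))).logpdf(y)
    print(f"thermodynamic integration: {log_z:.3f}   exact: {exact:.3f}")
    ```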

  13. Implementing a Technology-Supported Model for Cross-Organisational Learning and Knowledge Building for Teachers

    ERIC Educational Resources Information Center

    Tammets, Kairit; Pata, Kai; Laanpere, Mart

    2012-01-01

    This study proposed using the elaborated learning and knowledge building model (LKB model) derived from Nonaka and Takeuchi's knowledge management model for supporting cross-organisational teacher development in the temporarily extended organisations composed of universities and schools. It investigated the main LKB model components in the context…

  14. Exclusion probabilities and likelihood ratios with applications to mixtures.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  15. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    DTIC Science & Technology

    2015-08-01

    Park, Kyong H.; Lagan, Steven J. Research and Technology Directorate, report ECBC-TN-068, August 2015; approved for public release.

  16. A COMPARATIVE ANALYSIS OF THE SUPERNOVA LEGACY SURVEY SAMPLE WITH ΛCDM AND THE R{sub h}=ct UNIVERSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Jun-Jie; Wu, Xue-Feng; Melia, Fulvio

    The use of Type Ia supernovae (SNe Ia) has thus far produced the most reliable measurement of the expansion history of the universe, suggesting that ΛCDM offers the best explanation for the redshift–luminosity distribution observed in these events. However, analysis of other kinds of sources, such as cosmic chronometers, gamma-ray bursts, and high-z quasars, conflicts with this conclusion, indicating instead that the constant expansion rate implied by the R{sub h} = ct universe is a better fit to the data. The central difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing supernova (SN) luminosities simultaneously with the parameters of an expansion model. Hence, in comparing competing models, one must reduce the data independently for each. We carry out such a comparison of ΛCDM and the R{sub h} = ct universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its individually reduced data very well. However, since R{sub h} = ct has only one free parameter (the Hubble constant), it follows from a standard model selection technique that it is to be preferred over ΛCDM, the minimalist version of which has three (the Hubble constant, the scaled matter density, and either the spatial curvature constant or the dark energy equation-of-state parameter). We estimate using the Bayes Information Criterion that in a pairwise comparison, the likelihood of R{sub h} = ct is ∼90%, compared with only ∼10% for a minimalist form of ΛCDM, in which dark energy is simply a cosmological constant. Compared to R{sub h} = ct, versions of the standard model with more elaborate parametrizations of dark energy are judged to be even less likely.
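
    The model selection step described above can be illustrated with a short Bayes Information Criterion calculation; the maximized log-likelihood values below are hypothetical stand-ins, chosen only to show how a one-parameter model can be preferred over a three-parameter model with a similar fit.

    ```python
    import numpy as np

    def bic(max_loglik, k_params, n_data):
        """Bayes Information Criterion: BIC = k ln n - 2 ln L_max."""
        return k_params * np.log(n_data) - 2.0 * max_loglik

    def bic_model_probabilities(bics):
        """Relative likelihoods exp(-BIC_i / 2), normalised over the compared models."""
        b = np.asarray(bics)
        w = np.exp(-0.5 * (b - b.min()))
        return w / w.sum()

    # Hypothetical fits to 252 supernova events: a 1-parameter model and a
    # 3-parameter model with nearly the same maximum log-likelihood.
    n = 252
    bic_one_param = bic(max_loglik=-120.0, k_params=1, n_data=n)
    bic_three_param = bic(max_loglik=-119.0, k_params=3, n_data=n)
    print(bic_model_probabilities([bic_one_param, bic_three_param]))
    ```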

  17. Modeling Goal-Directed User Exploration in Human-Computer Interaction

    DTIC Science & Technology

    2011-02-01

    In addition to information scent, other factors including the layout position and grouping of options in the user interface also affect user exploration and the likelihood of success. This dissertation contributes a new model of goal-directed user exploration to better inform UI design.

  18. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  19. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    PubMed

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy in point estimation. Published by Elsevier Ltd.
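
    The computational bottleneck mentioned above stems from the recursive form of the mutant-count distribution; a minimal sketch of that conventional (Luria-Delbrück) likelihood and its maximization is shown below, with hypothetical mutant counts. The birth-death estimator MLE-BD itself is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def luria_delbruck_pmf(m, n_max):
        """Mutant-count distribution under the classic Luria-Delbruck model,
        computed with the Ma-Sandri-Sarkar recursion (which is what makes the
        likelihood slow for very large mutant counts)."""
        p = np.zeros(n_max + 1)
        p[0] = np.exp(-m)
        for n in range(1, n_max + 1):
            p[n] = (m / n) * sum(p[i] / (n - i + 1) for i in range(n))
        return p

    def mle_mutations(counts):
        """Maximum likelihood estimate of the expected number of mutations m."""
        counts = np.asarray(counts)
        n_max = counts.max()
        def neg_loglik(m):
            pmf = luria_delbruck_pmf(m, n_max)
            return -np.sum(np.log(pmf[counts]))
        return minimize_scalar(neg_loglik, bounds=(1e-3, 50), method="bounded").x

    # Hypothetical mutant counts from parallel cultures in a fluctuation experiment
    print(mle_mutations([0, 3, 1, 0, 7, 2, 0, 1, 15, 4]))
    ```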

  20. Improvement and comparison of likelihood functions for model calibration and parameter uncertainty analysis within a Markov chain Monte Carlo scheme

    NASA Astrophysics Data System (ADS)

    Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim

    2014-11-01

    In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian, independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of heteroscedasticity in model residuals; and (3) three likelihood functions - NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED) - are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the Gaussian error assumption, under which large errors have low probability while small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach reproduces baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
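
    The first step, the equivalence between NSE and the Gaussian i.i.d. likelihood, can be illustrated numerically: with the residual variance set to its maximum likelihood value, the Gaussian log-likelihood is a monotone transform of the residual sum of squares, and hence of NSE. The data below are synthetic and purely illustrative.

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def gaussian_loglik(obs, sim):
        """Log-likelihood of i.i.d. Gaussian residuals with the variance set to its
        maximum likelihood value; a monotone transform of the residual sum of squares,
        and hence of NSE, which is the equivalence exploited above."""
        res = np.asarray(obs) - np.asarray(sim)
        n = res.size
        sigma2 = np.mean(res ** 2)
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

    rng = np.random.default_rng(2)
    obs = rng.gamma(2.0, 2.0, 200)                  # synthetic "observed" discharge
    for noise_sd in (0.2, 0.5, 1.5):                # increasingly poor simulations
        sim = obs + rng.normal(0, noise_sd, obs.size)
        print(f"NSE = {nse(obs, sim):.3f}   logL = {gaussian_loglik(obs, sim):.2f}")
    ```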

  1. A unifying framework for marginalized random intercept models of correlated binary outcomes

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples is given to illustrate concepts. PMID:25342871

  2. Method for somatic cell nuclear transfer in zebrafish.

    PubMed

    Siripattarapravat, Kannika; Cibelli, Jose B

    2011-01-01

    Somatic cell nuclear transfer (SCNT) has been a well-known technique for decades and is widely applied to generate identical animals, including ones with genetic alterations. The system has been demonstrated successfully in zebrafish. The elaborate requirements of SCNT, however, limit reproducibility of the established model to a few groups in the zebrafish research community. In this chapter, we meticulously outline each step of the published protocol as well as the preparation of the equipment and reagents used in zebrafish SCNT. All detailed tips are elaborated in the text and figures. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Der Aufbau mentaler Modelle durch bildliche Darstellungen: Eine experimentelle Studie über die Bedeutung der Merkmalsdimensionen Elaboriertheit und Strukturiertheit im Sachunterricht der Grundschule (The Development of Mental Processes through Graphic Representation with Diverging Degrees of Elaboration and Structurization: An Experimental Study Carried Out in Elementary Science Instruction in Primary School).

    ERIC Educational Resources Information Center

    Martschinke, Sabine

    1996-01-01

    Examines types of graphical representation as to their suitability for knowledge acquisition in primary grades. Uses the concept of mental models to clarify the relationship between external presentation and internal representation of knowledge. Finds that students who learned with highly elaborated and highly structured pictures displayed the…

  4. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.

  5. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  6. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  7. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  8. Multiple ionization of neon by soft x-rays at ultrahigh intensity

    NASA Astrophysics Data System (ADS)

    Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.

    2013-08-01

    At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.

  9. Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models

    PubMed Central

    Hillis, Stephen L.

    2015-01-01

    A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
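
    The "improperness" of the conventional binormal ROC curve can be seen directly from its defining equation, TPF = Phi(a + b * Phi^{-1}(FPF)); the sketch below uses hypothetical parameters with b far from 1 so that the hook (a region below the chance line) is visible, which is the motivation for the likelihood-ratio-based formulation discussed above.

    ```python
    import numpy as np
    from scipy.stats import norm

    def binormal_roc(a, b, fpf):
        """Binormal ROC curve: TPF = Phi(a + b * Phi^{-1}(FPF)). When b != 1 the
        curve is 'improper': it hooks and crosses the chance line."""
        return norm.cdf(a + b * norm.ppf(fpf))

    fpf = np.array([0.001, 0.005, 0.02, 0.05, 0.1, 0.2, 0.5, 0.9])
    tpf = binormal_roc(a=1.0, b=1.8, fpf=fpf)        # hypothetical parameters, b far from 1
    for f, t in zip(fpf, tpf):
        flag = "  <- below chance line" if t < f else ""
        print(f"FPF = {f:.3f}  TPF = {t:.3f}{flag}")
    ```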

  10. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  11. A Teacher Accountability Model for Overcoming Self-Exclusion of Pupils

    ERIC Educational Resources Information Center

    Jamal, Abu-Hussain; Tilchin, Oleg; Essawi, Mohammad

    2015-01-01

    Self-exclusion of pupils is one of the prominent challenges of education. In this paper we propose the TERA model, which shapes the process of creating formative accountability of teachers to overcome the self-exclusion of pupils. Development of the model includes elaboration and integration of interconnected model components. The TERA model…

  12. [The Development of Information Centralization and Management Integration System for Monitors Based on Wireless Sensor Network].

    PubMed

    Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin

    2015-07-01

    We developed an information centralization and management integration system for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication, based on the existing wireless network. With adaptive implementation and low cost, the system offers real-time operation, efficiency and elaboration: it collects status and data from the monitors, locates the monitors, and provides services through a web server, video server and locating server via the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Application of this system provides convenience and saves human resources for clinical departments, and promotes the efficiency, accuracy and elaboration of device management. This system provides a solution for the integrated and elaborated management of mobile devices, including ventilators and infusion pumps.

  13. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    NASA Astrophysics Data System (ADS)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first is based on powder metallurgy, using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain followed by annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable for generating a significant grain size contrast and for controlling this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied to a bimodal configuration to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  14. Delimitation of areas under the real pressure from agricultural activities due to nitrate water pollution in Poland

    NASA Astrophysics Data System (ADS)

    Wozniak, E.; Nasilowska, S.; Jarocinska, A.; Igras, J.; Stolarska, M.; Bernoussi, A. S.; Karaczun, Z.

    2012-04-01

    The aim of the performed research was to determine catchments under nitrogen pressure in Poland in the period 2007-2010. The National Water Management Authority in Poland uses the elaborated methodology to fulfil the requirements of the Nitrate Directive and the Water Framework Directive. A multicriteria GIS analysis was conducted on the basis of various types of environmental data, maps and remote sensing products. The final model of real agricultural pressure was built from two components: (i) the potential pressure connected with agriculture and (ii) the vulnerability of the area. The agricultural pressure was calculated using the amount of nitrogen in fertilizers and the amount of nitrogen produced by animal breeding. The animal pressure was based on information about the number of bred animals of each species for communes in Poland. The spatial distribution of vegetation pressure was calculated by kriging for the whole country, based on information about 5000 points with the nitrogen dose in fertilizers. The vulnerability model was elaborated only for arable land. It was based on the probability of precipitation penetrating to the groundwater and running off to surface waters. Catchment, hydrogeological, soil, relief and land cover maps made it possible to take constant environmental conditions into account. Additionally, information about precipitation for each day of the analysis and evapotranspiration for every 16-day period (calculated from satellite images) was used to represent the influence of meteorological conditions on the vulnerability of the terrain. The risk model is the sum of the vulnerability model and the agricultural pressure model. In order to check the accuracy of the elaborated model, the authors compared the results with eutrophication measurements. The model accuracy ranges from 85.3% to 91.3%.

  15. Improving Reasoning and Recall: The Differential Effects of Elaborative Interrogation and Mnemonic Elaboration.

    ERIC Educational Resources Information Center

    Scruggs, Thomas E.; And Others

    1993-01-01

    Fifty-three adolescents with learning disabilities or mild mental retardation were taught reasons for dinosaur extinction. Those taught in a mnemonic elaborative interrogation condition recalled more reasons than did students who received direct teaching. Students in elaborative interrogation and mnemonic elaborative interrogation groups recalled…

  16. False Memories for Suggestions: The Impact of Conceptual Elaboration

    PubMed Central

    Zaragoza, Maria S.; Mitchell, Karen J.; Payment, Kristie; Drivdahl, Sarah

    2010-01-01

    Relatively little attention has been paid to the potential role that reflecting on the meaning and implications of suggested events (i.e., conceptual elaboration) might play in promoting the creation of false memories. Two experiments assessed whether encouraging repeated conceptual elaboration would, like perceptual elaboration, increase false memory for suggested events. Results showed that conceptual elaboration of suggested events more often resulted in high confidence false memories (Experiment 1) and false memories that were accompanied by the phenomenal experience of remembering them (Experiment 2) than did surface-level processing. Moreover, conceptual elaboration consistently led to higher rates of false memory than did perceptual elaboration. The false memory effects that resulted from conceptual elaboration were highly dependent on the organization of the postevent interview questions, such that conceptual elaboration only increased false memory beyond surface level processing when participants evaluated both true and suggested information in relation to the same theme or dimension. PMID:21103451

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Benjamin R.; Harris, James M.; Burns, John F.

    This document contains 4 use case realizations generated from the model contained in Rational Software Architect. These use case realizations are the current versions of the realizations originally delivered in Elaboration Iteration 3.

  18. The PEARL Model of Sustainable Development

    ERIC Educational Resources Information Center

    Bilgin, Mert

    2012-01-01

    This paper addresses perception (P), environment (E), action (A), relationship (R), and locality (L) as the social indicators of sustainable development (SD), the capital letters of which label the PEARL model. The paper refers to PEARL with regard to three aspects to elaborate the promises and limits of the model. Theoretically, it discusses…

  19. Pedagogical View of Model Metabolic Cycles

    ERIC Educational Resources Information Center

    García-Herrero, Victor; Sillero, Antonio

    2015-01-01

    The main purpose of this study was to present a simplified view of model metabolic cycles. Although the models have been elaborated with the "Mathematica" program, using a system of differential equations, the main conclusions were presented in a rather intuitive way, easily understandable by students of general courses of…

  20. Solving Large Problems with a Small Working Memory

    ERIC Educational Resources Information Center

    Pizlo, Zygmunt; Stefanov, Emil

    2013-01-01

    We describe an important elaboration of our multiscale/multiresolution model for solving the Traveling Salesman Problem (TSP). Our previous model emulated the non-uniform distribution of receptors on the human retina and the shifts of visual attention. This model produced near-optimal solutions of TSP in linear time by performing hierarchical…

  1. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
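
    The core idea, replacing repeated exact likelihood evaluations by a second-order Taylor expansion on a transformed parameter scale, can be sketched with a toy Poisson likelihood and a log transform standing in for the square-root, logarithm, and arcsine transforms studied in the paper; the data and parameter are hypothetical.

    ```python
    import numpy as np

    # Toy illustration of approximating a log-likelihood by a second-order Taylor
    # expansion on a transformed scale (here t = log(rate)); Poisson data stand in
    # for the phylogenetic likelihood.
    counts = np.array([3, 5, 2, 4, 6, 3])

    def loglik(rate):
        return np.sum(counts * np.log(rate) - rate)      # Poisson log-likelihood up to a constant

    # Expand around the MLE on the transformed scale, using finite differences
    t_hat = np.log(counts.mean())
    eps = 1e-4
    g = (loglik(np.exp(t_hat + eps)) - loglik(np.exp(t_hat - eps))) / (2 * eps)   # ~0 at the MLE
    h = (loglik(np.exp(t_hat + eps)) - 2 * loglik(np.exp(t_hat))
         + loglik(np.exp(t_hat - eps))) / eps**2

    def loglik_taylor(rate):
        dt = np.log(rate) - t_hat
        return loglik(np.exp(t_hat)) + g * dt + 0.5 * h * dt**2

    for rate in (2.0, 3.0, 3.8, 5.0, 8.0):
        print(f"rate {rate:4.1f}  exact {loglik(rate):8.3f}  approx {loglik_taylor(rate):8.3f}")
    ```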

  2. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems.

  3. Evaluation of risk from acts of terrorism: the adversary/defender model using belief and fuzzy sets.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.

  4. Pseudomonas aeruginosa dose response and bathing water infection.

    PubMed

    Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A

    2014-03-01

    Pseudomonas aeruginosa is the opportunistic pathogen mostly implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
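
    The first of the model forms mentioned, the exponential 'single-hit' model of Furumoto and Mickey, is simple enough to state directly; the infectivity parameter and dose values below are purely hypothetical and not fitted values from the review.

    ```python
    import numpy as np

    def single_hit_infection_prob(dose, r):
        """Exponential 'single-hit' dose-response model: each organism independently
        initiates infection with probability r, so P(infection) = 1 - exp(-r * dose).
        The parameter r used here is hypothetical."""
        return 1.0 - np.exp(-r * np.asarray(dose, dtype=float))

    doses = [1e1, 1e2, 1e3, 1e4, 1e5]    # hypothetical organism densities per exposure
    for d, p in zip(doses, single_hit_infection_prob(doses, r=1e-4)):
        print(f"dose {d:8.0f}   P(infection) = {p:.4f}")
    ```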

  5. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models that demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
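
    A minimal global-best PSO of the kind the paper applies to the CMB likelihood is sketched below on a stand-in negative log-likelihood surface over two parameters; the swarm constants, bounds, and toy objective are illustrative and are not the paper's settings.

    ```python
    import numpy as np

    def pso_minimize(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """A minimal particle swarm optimizer (global-best variant); constants
        and bounds are illustrative."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        dim = lo.size
        x = rng.uniform(lo, hi, (n_particles, dim))       # positions
        v = np.zeros_like(x)                              # velocities
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest, pbest_f.min()

    # Stand-in for a negative log-likelihood surface over two "cosmological" parameters
    neg_loglik = lambda p: (p[0] - 0.3) ** 2 / 0.002 + (p[1] - 0.8) ** 2 / 0.005
    best, val = pso_minimize(neg_loglik, bounds=[(0.0, 1.0), (0.5, 1.2)])
    print(best, val)
    ```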

  6. Negotiating Multicollinearity with Spike-and-Slab Priors.

    PubMed

    Ročková, Veronika; George, Edward I

    2014-08-01

    In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.

  7. Does the portrayal of tanning in Australian women's magazines relate to real women's tanning beliefs and behavior?

    PubMed

    Dixon, Helen G; Warne, Charles D; Scully, Maree L; Wakefield, Melanie A; Dobbinson, Suzanne J

    2011-04-01

    Content analysis data on the tans of 4,422 female Caucasian models sampled from spring and summer magazine issues were combined with readership data to generate indices of potential exposure to social modeling of tanning via popular women's magazines over a 15-year period (1987 to 2002). Associations between these indices and cross-sectional telephone survey data from the same period on 5,675 female teenagers' and adults' tanning attitudes, beliefs, and behavior were examined using logistic regression models. Among young women, greater exposure to tanning in young women's magazines was associated with increased likelihood of endorsing pro-tan attitudes and beliefs. Among women of all ages, greater exposure to tanned models via the most popular women's magazines was associated with increased likelihood of attempting to get a tan but lower likelihood of endorsing pro-tan attitudes. Popular women's magazines may promote and reflect real women's tanning beliefs and behavior.

  8. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  9. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    The report develops iterative procedures for exact maximum likelihood estimation in the first-order Gaussian moving average model, together with some approaches to maximizing the approximate likelihood function in related time series models. Useful devices include the Cholesky decomposition of the covariance matrix and a special case of a general formula called Woodbury's formula by some authors, (Q + aa')^{-1} = Q^{-1} - Q^{-1}aa'Q^{-1} / (1 + a'Q^{-1}a).

  10. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    NASA Technical Reports Server (NTRS)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
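
    For photon counting data the likelihood ratio approach leads to the Poisson maximum-likelihood statistic, often written C = 2 * sum(m_i - d_i ln m_i), whose differences behave asymptotically like chi-square; a minimal sketch with hypothetical counts and a constant-rate source model follows.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import chi2

    def cash_statistic(counts, model_rates):
        """Poisson maximum-likelihood ('C') statistic, up to a term that does not
        depend on the model: C = 2 * sum(m_i - d_i * ln m_i)."""
        m = np.asarray(model_rates, dtype=float)
        d = np.asarray(counts, dtype=float)
        return 2.0 * np.sum(m - d * np.log(m))

    # Hypothetical photon counts in 10 detector bins, constant-rate source model
    counts = np.array([4, 7, 5, 3, 6, 8, 5, 4, 6, 5])
    fit = minimize_scalar(lambda r: cash_statistic(counts, np.full(counts.size, r)),
                          bounds=(0.1, 20.0), method="bounded")
    c_min, rate_hat = fit.fun, fit.x

    # Delta-C behaves asymptotically like chi-square with 1 degree of freedom, so a
    # 90% confidence interval collects rates with C - C_min <= 2.706
    delta = chi2.ppf(0.90, df=1)
    grid = np.linspace(0.1, 20.0, 2000)
    inside = [r for r in grid
              if cash_statistic(counts, np.full(counts.size, r)) - c_min <= delta]
    print(f"rate MLE = {rate_hat:.2f}, 90% interval ~ [{min(inside):.2f}, {max(inside):.2f}]")
    ```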

  11. On the Existence and Uniqueness of JML Estimates for the Partial Credit Model

    ERIC Educational Resources Information Center

    Bertoli-Barsotti, Lucio

    2005-01-01

    A necessary and sufficient condition is given in this paper for the existence and uniqueness of the maximum likelihood (the so-called joint maximum likelihood) estimate of the parameters of the Partial Credit Model. This condition is stated in terms of a structural property of the pattern of the data matrix that can be easily verified on the basis…

  12. Modeling forest bird species' likelihood of occurrence in Utah with Forest Inventory and Analysis and Landfire map products and ecologically based pseudo-absence points

    Treesearch

    Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen

    2007-01-01

    Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, available occurrence data for landscape-scale modeling is often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...

  13. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    ERIC Educational Resources Information Center

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  14. Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.

    PubMed

    Kavussanu, Maria; Ring, Christopher

    2017-08-01

    In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes, who feel that being a moral person is central to their self-concept, are less likely to use banned substances due to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.

  15. Method to enhance the performance of synthetic origin-destination (O-D) trip table estimation models.

    DOT National Transportation Integrated Search

    1998-01-01

    The conventional methods of determining origin-destination (O-D) trip tables involve elaborate surveys, e.g., home interviews, that require considerable time, staff, and funds. To overcome this drawback, a number of theoretical models that synthesize...

  16. A black box optimization approach to parameter estimation in a model for long/short term variations dynamics of commodity prices

    NASA Astrophysics Data System (ADS)

    De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano

    2012-11-01

    In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state space dynamical model, and the problem unknowns are the state variables and the system parameters. The data are represented by commodity spot prices; time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and the marginal likelihood function (with the state variables eliminated) are addressed.

  17. A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments

    PubMed Central

    Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J

    2014-01-01

    Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, recently, Huggins et al. (2010) present a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576

  18. Using latent class analysis to model prescription medications in the measurement of falling among a community elderly population

    PubMed Central

    2013-01-01

    Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective data-set, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639

  19. On the predictability of event boundaries in discourse: An ERP investigation.

    PubMed

    Delogu, Francesca; Drenhaus, Heiner; Crocker, Matthew W

    2018-02-01

    When reading a text describing an everyday activity, comprehenders build a model of the situation described that includes prior knowledge of the entities, locations, and sequences of actions that typically occur within the event. Previous work has demonstrated that such knowledge guides the processing of incoming information by making event boundaries more or less expected. In the present ERP study, we investigated whether comprehenders' expectations about event boundaries are influenced by how elaborately common events are described in the context. Participants read short stories in which a common activity (e.g., washing the dishes) was described either in brief or in an elaborate manner. The final sentence contained a target word referring to a more predictable action marking a fine event boundary (e.g., drying) or a less predictable action, marking a coarse event boundary (e.g., jogging). The results revealed a larger N400 effect for coarse event boundaries compared to fine event boundaries, but no interaction with description length. Between 600 and 1000 ms, however, elaborate contexts elicited a larger frontal positivity compared to brief contexts. This effect was largely driven by less predictable targets, marking coarse event boundaries. We interpret the P600 effect as indexing the updating of the situation model at event boundaries, consistent with Event Segmentation Theory (EST). The updating process is more demanding with coarse event boundaries, which presumably require the construction of a new situation model.

  20. On the occurrence of false positives in tests of migration under an isolation with migration model

    PubMed Central

    Hey, Jody; Chung, Yujin; Sethuraman, Arun

    2015-01-01

    The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found by a model with a low migration rate and recent splitting time and a model with a high migration rate and a deep splitting time. PMID:26456794

  1. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    PubMed

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
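    For readers unfamiliar with the empirical likelihood machinery the abstract builds on, the sketch below profiles an Owen-style empirical log-likelihood ratio for a simple mean (a single estimating equation), not the authors' combined estimating-equation system; the data and numerical settings are illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def el_ratio_stat(theta, x):
        """Empirical likelihood ratio statistic W(theta) for the estimating
        function g(x, theta) = x - theta, i.e. a candidate value of the mean.
        W(theta) = 2 * max_lambda sum(log(1 + lambda * g_i))."""
        g = x - theta
        gmax, gmin = g.max(), g.min()
        lo = -1.0 / gmax + 1e-6 if gmax > 0 else -10.0  # keep 1 + lambda*g_i > 0
        hi = -1.0 / gmin - 1e-6 if gmin < 0 else 10.0

        def neg_inner(lam):
            return -np.sum(np.log(1.0 + lam * g))

        res = minimize_scalar(neg_inner, bounds=(lo, hi), method="bounded")
        return -2.0 * res.fun

    rng = np.random.default_rng(0)
    x = rng.normal(loc=2.0, scale=1.0, size=100)
    # 95% empirical likelihood confidence set: theta values with W(theta) <= 3.84.
    grid = np.linspace(1.5, 2.5, 101)
    inside = [t for t in grid if el_ratio_stat(t, x) <= 3.84]
    print("EL 95%% CI roughly [%.2f, %.2f]" % (min(inside), max(inside)))
    ```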

  2. A new model to predict weak-lensing peak counts. II. Parameter constraint strategies

    NASA Astrophysics Data System (ADS)

    Lin, Chieh-An; Kilbinger, Martin

    2015-11-01

    Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
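    The ABC step is, at its core, accept-reject sampling against a forward simulator. The toy sketch below uses a one-parameter Gaussian simulator as a stand-in for the peak-count model, so it only shows the mechanics; it is not the authors' pipeline, and all names and settings are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def simulate(theta, n=200):
        """Forward model: a toy stand-in for the stochastic peak-count simulator."""
        return rng.normal(loc=theta, scale=1.0, size=n)

    observed = simulate(0.3)          # pretend this is the observed data
    obs_summary = observed.mean()     # summary statistic of the observation

    # Accept-reject ABC: draw from the prior, simulate, and keep draws whose
    # simulated summary falls within a tolerance of the observed summary.
    prior_draws = rng.uniform(-1.0, 1.0, size=20000)
    tolerance = 0.05
    accepted = np.array([t for t in prior_draws
                         if abs(simulate(t).mean() - obs_summary) < tolerance])
    print("posterior mean %.3f, sd %.3f, acceptance rate %.3f"
          % (accepted.mean(), accepted.std(), accepted.size / prior_draws.size))
    ```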

  3. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    NASA Astrophysics Data System (ADS)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H0 likelihoods. Evaluating the tails of the local H0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H0 = (72.72 ± 1.67) km s-1 Mpc-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s-1 Mpc-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.
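    The model-comparison statement above comes down to converting two (log) marginal likelihoods into posterior odds. A minimal sketch with invented evidence values follows; the numbers are not from the paper.

    ```python
    import math

    def posterior_odds(log_evidence_m1, log_evidence_m0, prior_odds=1.0):
        """Posterior odds of M1 over M0 = Bayes factor * prior odds."""
        return math.exp(log_evidence_m1 - log_evidence_m0) * prior_odds

    # Hypothetical log-evidences for LambdaCDM (M0) and a deviation model (M1).
    log_Z_lcdm, log_Z_alt = -1001.2, -1003.5
    odds_against_alt = 1.0 / posterior_odds(log_Z_alt, log_Z_lcdm)
    print("odds against the deviation model: %.0f : 1" % odds_against_alt)
    ```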

  4. Differential-associative processing or example elaboration: Which strategy is best for learning the definitions of related and unrelated concepts?

    PubMed

    Hannon, Brenda

    2012-10-01

    Definitions of related concepts (e.g., genotype-phenotype) are prevalent in introductory classes. Consequently, it is important that educators and students know which strategy(s) work best for learning them. This study showed that a new comparative elaboration strategy, called differential-associative processing, was better for learning definitions of related concepts than was an integrative elaborative strategy, called example elaboration. This outcome occurred even though example elaboration was administered in a naturalistic way (Experiment 1) and students in the example elaboration condition spent more time learning (Experiments 1, 2, and 3) and generating pieces of information about the concepts (Experiments 2 and 3). Further, with unrelated concepts (morpheme-fluid intelligence), performance was similar regardless of whether students used differential-associative processing or example elaboration (Experiment 3). Taken as a whole, these results suggest that differential-associative processing is better than example elaboration for learning definitions of related concepts and is as good as example elaboration for learning definitions of unrelated concepts.

  5. Differential-associative processing or example elaboration: Which strategy is best for learning the definitions of related and unrelated concepts?

    PubMed Central

    Hannon, Brenda

    2013-01-01

    Definitions of related concepts (e.g., genotype–phenotype) are prevalent in introductory classes. Consequently, it is important that educators and students know which strategy(s) work best for learning them. This study showed that a new comparative elaboration strategy, called differential-associative processing, was better for learning definitions of related concepts than was an integrative elaborative strategy, called example elaboration. This outcome occurred even though example elaboration was administered in a naturalistic way (Experiment 1) and students spent more time in the example elaboration condition learning (Experiments 1, 2, 3), and generating pieces of information about the concepts (Experiments 2 and 3). Further, with unrelated concepts (morpheme-fluid intelligence), performance was similar regardless if students used differential-associative processing or example elaboration (Experiment 3). Taken as a whole, these results suggest that differential-associative processing is better than example elaboration for learning definitions of related concepts and is as good as example elaboration for learning definitions of unrelated concepts. PMID:24347814

  6. Statistical methods for the beta-binomial model in teratology.

    PubMed Central

    Yamamoto, E; Yanagimoto, T

    1994-01-01

    The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood allows a likelihood inference to be carried out for each parameter separately. This reduces the biases of estimators and also improves the accuracy of the empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716
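    As background, a plain joint maximum-likelihood fit of the beta-binomial distribution (not the likelihood-separation approach advocated in the abstract) can be sketched as follows; the litter data are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import betabinom
    from scipy.optimize import minimize

    # Hypothetical litter data: litter sizes n_i and numbers of affected pups k_i.
    n = np.array([10, 12, 8, 11, 9, 13, 10, 12])
    k = np.array([2, 5, 1, 4, 0, 6, 3, 2])

    def neg_loglik(log_params):
        a, b = np.exp(log_params)              # keep alpha and beta positive
        return -np.sum(betabinom.logpmf(k, n, a, b))

    fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    alpha, beta = np.exp(fit.x)
    print("alpha=%.2f beta=%.2f mean response=%.2f" % (alpha, beta, alpha / (alpha + beta)))
    ```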

  7. Gyre and gimble: a maximum-likelihood replacement for Patterson correlation refinement.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J

    2018-04-01

    Descriptions are given of the maximum-likelihood gyre method implemented in Phaser for optimizing the orientation and relative position of rigid-body fragments of a model after the orientation of the model has been identified, but before the model has been positioned in the unit cell, and also the related gimble method for the refinement of rigid-body fragments of the model after positioning. Gyre refinement helps to lower the root-mean-square atomic displacements between model and target molecular-replacement solutions for the test case of antibody Fab(26-10) and improves structure solution with ARCIMBOLDO_SHREDDER.

  8. Estimation of Dynamic Discrete Choice Models by Maximum Likelihood and the Simulated Method of Moments

    PubMed Central

    Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano

    2015-01-01

    We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
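    A toy contrast between the two estimation strategies compared above, reduced to a one-parameter location model, might look like the following sketch; it illustrates the idea only and is not the authors' structural model of education choices.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    data = rng.normal(loc=1.5, scale=1.0, size=500)   # toy "observed" outcomes

    # Maximum likelihood for the location of a unit-variance normal.
    def neg_loglik(mu):
        return 0.5 * np.sum((data - mu) ** 2)

    mu_ml = minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x

    # Simulated method of moments: choose mu so the simulated mean matches the
    # observed mean (a single moment, so no weighting matrix is needed).
    sim_draws = rng.normal(size=(50, data.size))      # common random numbers
    def smm_objective(mu):
        return ((sim_draws + mu).mean() - data.mean()) ** 2

    mu_smm = minimize_scalar(smm_objective, bounds=(-5, 5), method="bounded").x
    print("ML estimate %.3f, SMM estimate %.3f" % (mu_ml, mu_smm))
    ```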

  9. Random Walks on a Simple Cubic Lattice, the Multinomial Theorem, and Configurational Properties of Polymers

    ERIC Educational Resources Information Center

    Hladky, Paul W.

    2007-01-01

    Random-walk models enable undergraduate chemistry students to visualize polymer molecules, quantify their configurational properties, and relate molecular structure to a variety of physical properties. The model could serve as an introduction to more elaborate models of polymer molecules and could help in learning topics such as lattice models of…

  10. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
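    The Newton-Raphson scheme referred to above repeatedly updates the parameters using the score and Hessian of the log-likelihood. A minimal sketch for a single-parameter Poisson rate (not the NDMMF itself) is shown below; the counts are hypothetical.

    ```python
    import numpy as np

    # Newton-Raphson for a maximum-likelihood estimate, illustrated with a
    # Poisson rate parameterized on the log scale.
    y = np.array([3, 1, 4, 2, 0, 5, 2, 3])   # hypothetical counts
    beta = 0.0                               # log(rate), starting value

    for iteration in range(25):
        score = y.sum() - y.size * np.exp(beta)     # d loglik / d beta
        hessian = -y.size * np.exp(beta)            # d2 loglik / d beta2
        step = score / hessian
        beta -= step                                # Newton-Raphson update
        if abs(step) < 1e-10:
            break

    print("iterations:", iteration + 1, "MLE rate:", np.exp(beta), "sample mean:", y.mean())
    ```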

  12. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990 (C. V. Tran). Compares the Cramér-Rao bound with the MUSIC and maximum-likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation, examining the effect of the temporal phase difference between the sources. [The remainder of the record is table-of-contents and figure-caption residue; the recoverable captions refer to MUSIC results for two equipowered signals impinging on a 5-element uniform linear array at |p| = 0.50 and |p| = 1.00 with SNR = 20 dB.]

  13. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
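    The Poisson comparison of small per-bin detection counts amounts to summing Poisson log-probabilities of the observed counts under model-predicted expectations. A sketch with invented bins follows; it is not the authors' population-synthesis code.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def poisson_loglik(observed_counts, expected_counts):
        """Log-likelihood of binned detections given model-predicted expectations,
        appropriate when the detected numbers per bin are small."""
        expected = np.clip(expected_counts, 1e-12, None)   # avoid log(0)
        return np.sum(poisson.logpmf(observed_counts, expected))

    # Hypothetical histogram of detected pulsars in period bins, versus two models.
    observed = np.array([0, 2, 5, 9, 6, 3, 1, 0])
    model_a = np.array([0.4, 1.8, 5.2, 8.1, 6.5, 2.9, 1.1, 0.3])
    model_b = np.array([1.5, 3.0, 4.5, 6.0, 6.0, 4.5, 3.0, 1.5])
    print("model A:", round(poisson_loglik(observed, model_a), 2),
          "model B:", round(poisson_loglik(observed, model_b), 2))
    ```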

  14. Bayesian multimodel inference of soil microbial respiration models: Theory, application and future prospective

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2015-12-01

    Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination or model discrimination are becoming more popular. To illustrate the theoretical and practical challenges of multimodel analysis, we use an example from microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon: soil microbial respiration pulses in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah that is characterized by pulsed precipitation. These five models evolve step-wise, such that the first model includes none of the three mechanisms while the fifth model includes all three. The basic component of Bayesian multimodel analysis is the estimation of the marginal likelihood to rank the candidate models based on their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between the five candidate models. The second part discusses some theoretical and practical challenges, which are mainly the effects of likelihood function selection and of the marginal likelihood estimation method on both model ranking and Bayesian model averaging. The study shows that making valid inferences from scientific data is not a trivial task, since we are not only uncertain about the candidate scientific models, but also about the statistical methods that are used to discriminate between these models.

  15. Efficient simulation and likelihood methods for non-neutral multi-allele models.

    PubMed

    Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge

    2012-06-01

    Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence data, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as an auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns of allele frequencies produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10^9 rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation methods a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.

  16. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pražnikar, Jure; University of Primorska; Turk, Dušan, E-mail: dusan.turk@ijs.si

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of Rfree or may leave it out completely.

  17. Conflict effects without conflict in anterior cingulate cortex: multiple response effects and context specific representations

    PubMed Central

    Brown, Joshua W.

    2009-01-01

    The error likelihood computational model of anterior cingulate cortex (ACC) (Brown & Braver, 2005) has successfully predicted error likelihood effects, risk prediction effects, and how individual differences in conflict and error likelihood effects vary with trait differences in risk aversion. The same computational model now makes a further prediction that apparent conflict effects in ACC may result in part from an increasing number of simultaneously active responses, regardless of whether or not the cued responses are mutually incompatible. In Experiment 1, the model prediction was tested with a modification of the Eriksen flanker task, in which some task conditions require two otherwise mutually incompatible responses to be generated simultaneously. In that case, the two response processes are no longer in conflict with each other. The results showed small but significant medial PFC effects in the incongruent vs. congruent contrast, despite the absence of response conflict, consistent with model predictions. This is the multiple response effect. Nonetheless, actual response conflict led to greater ACC activation, suggesting that conflict effects are specific to particular task contexts. In Experiment 2, results from a change signal task suggested that the context dependence of conflict signals does not depend on error likelihood effects. Instead, inputs to ACC may reflect complex and task specific representations of motor acts, such as bimanual responses. Overall, the results suggest the existence of a richer set of motor signals monitored by medial PFC and are consistent with distinct effects of multiple responses, conflict, and error likelihood in medial PFC. PMID:19375509

  18. Differentiation of subsequent memory effects between retrieval practice and elaborative study.

    PubMed

    Liu, Yi; Rosburg, Timm; Gao, Chuanji; Weber, Christine; Guo, Chunyan

    2017-07-01

    Retrieval practice enhances memory retention more than re-studying. The underlying mechanisms of this retrieval practice effect have remained widely unclear. According to the elaborative retrieval hypothesis, activation of elaborative information occurs to a larger extent during testing than re-studying. In contrast, the episodic context account has suggested that recollecting prior episodic information (especially the temporal context) contributes to memory retention. To adjudicate the distinction between these two accounts, the present study used the classical retrieval practice effect paradigm to compare retrieval practice and elaborative study. In an initial behavioral experiment, retrieval practice produced greater retention than elaboration and re-studying in a one-week delayed test. In a subsequent event-related potential (ERP) experiment, retrieval practice resulted in reliably superior accuracy in the delayed test compared to elaborative study. In the ERPs, a frontally distributed subsequent memory effect (SME), starting at 300ms, occurred in the elaborative study condition, but not in the retrieval practice condition. A parietal SME emerged in the retrieval practice condition from 500 to 700ms, but was absent in the elaborative study condition. After 700ms, a late SME was present in the retrieval practice condition, but not in the elaborative study condition. Moreover, SMEs lasted longer in retrieval practice than in elaboration. The frontal SME in the elaborative study condition might be related to semantic processing or working memory-based elaboration, whereas the parietal and widespread SME in the retrieval practice condition might be associated with episodic recollection processes. These findings contradict the elaborative retrieval theory, and suggest that contextual recollection rather than activation of semantic information contributes to the retrieval practice effect, supporting the episodic context account. Copyright © 2017. Published by Elsevier B.V.

  19. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects, are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters, obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
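    A "full likelihood analysis" of the kind recommended above typically means profiling the likelihood over nuisance parameters and reading confidence limits directly from the likelihood landscape. The sketch below does this for a hypothetical two-parameter logistic dose-response curve; the data, parameter names, and bounds are illustrative and are not the critical-volume model used in the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize, minimize_scalar
    from scipy.stats import chi2

    # Hypothetical dose-response data: dose, number irradiated, number with complication.
    dose = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
    n = np.array([20, 20, 20, 20, 20])
    resp = np.array([1, 3, 8, 14, 18])

    def neg_loglik(params):
        d50, k = params
        p = 1.0 / (1.0 + np.exp(-(dose - d50) / k))   # logistic NTCP-style curve
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(resp * np.log(p) + (n - resp) * np.log(1 - p))

    full_fit = minimize(neg_loglik, x0=[60.0, 8.0], method="Nelder-Mead")

    def profile(d50):
        """Profile likelihood: minimize over the nuisance parameter k with d50 fixed."""
        return minimize_scalar(lambda k: neg_loglik([d50, k]),
                               bounds=(0.5, 50.0), method="bounded").fun

    # 95% profile-likelihood interval: d50 values whose profiled negative
    # log-likelihood stays within chi^2_{1,0.95}/2 of the full-fit optimum.
    cutoff = full_fit.fun + chi2.ppf(0.95, df=1) / 2.0
    grid = np.linspace(50.0, 75.0, 251)
    inside = [d for d in grid if profile(d) <= cutoff]
    print("d50 = %.1f, 95%% CI roughly [%.1f, %.1f]"
          % (full_fit.x[0], min(inside), max(inside)))
    ```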

  20. A method to enhance the performance of synthetic origin-destination (O-D) trip table estimation models.

    DOT National Transportation Integrated Search

    1998-01-01

    The conventional methods of determining origin-destination (O-D) trip tables involve elaborate surveys, e.g., home interviews, that require considerable time, staff, and funds. To overcome this drawback, a number of theoretical models that synthesize...

  1. Predicting Likelihood of Surgery Prior to First Visit in Patients with Back and Lower Extremity Symptoms: A simple mathematical model based on over 8000 patients.

    PubMed

    Boden, Lauren M; Boden, Stephanie A; Premkumar, Ajay; Gottschalk, Michael B; Boden, Scott D

    2018-02-09

    Study Design: Retrospective analysis of prospectively collected data. Objective: To create a data-driven triage system stratifying patients by likelihood of undergoing spinal surgery within one year of presentation. Background: Low back pain (LBP) and radicular lower extremity (LE) symptoms are common musculoskeletal problems. There is currently no standard data-derived triage process based on information that can be obtained prior to the initial physician-patient encounter to direct patients to the optimal physician type. Methods: We analyzed patient-reported data from 8006 patients with a chief complaint of LBP and/or LE radicular symptoms who presented to surgeons at a large multidisciplinary spine center between September 1, 2005 and June 30, 2016. Univariate and multivariate analyses identified independent risk factors for undergoing spinal surgery within one year of the initial visit. A model incorporating these risk factors was created using a random sample of 80% of the total patients in our cohort, and validated on the remaining 20%. Results: The baseline one-year surgery rate within our cohort was 39% for all patients and 42% for patients with LE symptoms. Those identified as high likelihood by the center's existing triage process had a surgery rate of 45%. The new triage scoring system proposed in this study was able to identify a high-likelihood group in which 58% underwent surgery, which is a 46% higher surgery rate than in non-triaged patients and a 29% improvement over our institution's existing triage system. Conclusion: The data-driven triage model and scoring system derived and validated in this study (Spine Surgery Likelihood model [SSL-11]) significantly improved existing processes in predicting the likelihood of undergoing spinal surgery within one year of initial presentation. This triage system will allow centers to more selectively screen for surgical candidates and more effectively direct patients to surgeons or non-operative spine specialists. Level of Evidence: 4.

  2. Analysis of Multispectral Time Series for Supporting Forest Management Plans

    NASA Astrophysics Data System (ADS)

    Simoniello, T.; Carone, M. T.; Costantini, G.; Frattegiani, M.; Lanfredi, M.; Macchiato, M.

    2010-05-01

    Adequate forest management requires specific plans based on updated and detailed mapping. Multispectral satellite time series have been largely applied to forest monitoring and studies at different scales thanks to their capability of providing synoptic information on some basic parameters descriptive of vegetation distribution and status. As an inexpensive tool for supporting forest management plans in an operative context, we tested the use of Landsat-TM/ETM time series (1987-2006) in the high Agri Valley (Southern Italy) for planning field surveys as well as for the integration of existing cartography. As a preliminary activity, the no-change regression normalization was applied to the time series to make all scenes radiometrically consistent; then all the data concerning available forest maps, municipal boundaries, water basins, rivers, and roads were overlaid in a GIS environment. From the 2006 image we derived the NDVI map and analyzed its distribution for each land cover class. To separate the physiological variability and identify the anomalous areas, a threshold on the distributions was applied. To label the non-homogeneous areas, a multitemporal analysis was performed by separating heterogeneity due to cover changes from that linked to basilar unit mapping and classification labelling aggregations. Then a map of priority areas was produced to support the field survey plan. To analyze the territorial evolution, the historical land cover maps were produced by adopting a hybrid classification approach based on a preliminary segmentation, the identification of training areas, and a subsequent maximum likelihood categorization. Such an analysis was fundamental for the general assessment of the territorial dynamics and in particular for the evaluation of the efficacy of past intervention activities.

  3. Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry

    PubMed Central

    Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna

    2015-01-01

    Mixture modeling of mass spectra is an approach with many potential applications, including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite papers highlighting the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and presenting some preliminary results, the mixture modeling approach has so far not been developed to a stage that enables systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing peak-detection algorithms and demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
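    As a minimal stand-in for the per-fragment decomposition step, the sketch below fits Gaussian mixtures of increasing order to one simulated fragment and keeps the BIC-best model. It uses scikit-learn and synthetic ion events rather than the authors' algorithm or data.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)

    # Toy stand-in for one m/z fragment: ion "events" drawn from two overlapping peaks.
    mz = np.concatenate([rng.normal(1201.4, 0.15, 800),
                         rng.normal(1203.1, 0.20, 500)]).reshape(-1, 1)

    # Decompose the fragment into a Gaussian mixture, choosing the number of
    # components by BIC, then report the fitted components as peaks.
    fits = [GaussianMixture(n_components=k, random_state=0).fit(mz) for k in (1, 2, 3, 4)]
    best = min(fits, key=lambda m: m.bic(mz))
    for w, mu, var in zip(best.weights_, best.means_.ravel(), best.covariances_.ravel()):
        print("peak at m/z %.2f, sd %.3f, weight %.2f" % (mu, np.sqrt(var), w))
    ```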

  4. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  5. Testing students’ e-learning via Facebook through Bayesian structural equation modeling

    PubMed Central

    Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019

  6. The role of self-regulatory efficacy, moral disengagement and guilt on doping likelihood: A social cognitive theory perspective.

    PubMed

    Ring, Christopher; Kavussanu, Maria

    2018-03-01

    Given the concern over doping in sport, researchers have begun to explore the role played by self-regulatory processes in the decision whether to use banned performance-enhancing substances. Grounded on Bandura's (1991) theory of moral thought and action, this study examined the role of self-regulatory efficacy, moral disengagement and anticipated guilt on the likelihood to use a banned substance among college athletes. Doping self-regulatory efficacy was associated with doping likelihood both directly (b = -.16, P < .001) and indirectly (b = -.29, P < .001) through doping moral disengagement. Moral disengagement also contributed directly to higher doping likelihood and lower anticipated guilt about doping, which was associated with higher doping likelihood. Overall, the present findings provide evidence to support a model of doping based on Bandura's social cognitive theory of moral thought and action, in which self-regulatory efficacy influences the likelihood to use banned performance-enhancing substances both directly and indirectly via moral disengagement.

  7. Self-corrected elaboration and spacing effects in incidental memory.

    PubMed

    Toyota, Hiroshi

    2006-04-01

    The present study investigated the effect of self-corrected elaboration on incidental memory as a function of type of presentation (massed vs. spaced) and sentence frame (image vs. nonimage). In the self-corrected elaboration condition, subjects were presented with a target word and an incongruous sentence frame and asked to correct the target to make a common sentence, whereas in the experimenter-corrected elaboration condition they were asked to rate the appropriateness of a congruous word presented; both conditions were followed by a free recall test. The superiority of self-corrected elaboration over experimenter-corrected elaboration was observed only for some combinations of presentation type and sentence frame. These results are discussed in terms of the effectiveness of self-corrected elaboration.

  8. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  9. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  10. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  11. The Development of Ethnic/Racial Self-Labeling: Individual Differences in Context.

    PubMed

    Cheon, Yuen Mi; Bayless, Sara Douglass; Wang, Yijie; Yip, Tiffany

    2018-03-15

    Ethnic/racial self-labeling represents one's knowledge of and preference for ethnic/racial group membership, which is related to, but distinguishable from, ethnic/racial identity. This study examined the development of ethnic/racial self-labeling over time by including the concept of elaboration among a diverse sample of 297 adolescents (Time 1 mean age 14.75, 67% female, 37.4% Asian or Asian American, 10.4% Black, African American, or West Indian, 23.2% Hispanic or Latinx, 24.2% White, 4.4% other). Growth mixture modeling revealed two distinct patterns, low and high self-labeling elaboration, from freshman to sophomore year of high school. Based on logistic regression analyses, the level of self-labeling elaboration was generally low among the adolescents who were foreign-born, reported low levels of ethnic/racial identity exploration, or attended highly diverse schools. We also found a person-by-context interaction where the impact of school diversity varied for foreign-born and native-born adolescents (b = 12.81, SE = 6.30, p < 0.05) and by the level of ethnic/racial identity commitment (b = 14.32, SE = 6.65, p < 0.05). These findings suggest varying patterns in ethnic/racial self-labeling elaboration among adolescents from diverse backgrounds and their linkage to individual and contextual factors.

  12. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
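    The adaptive bandwidth rule described above (a smoothing distance set by the distance to the n-th nearest epicenter) can be sketched as follows; the coordinates, neighbor number, and clipping range are illustrative and are not the values used for the ASHMs.

    ```python
    import numpy as np

    def adaptive_bandwidths(epicenters_km, n_neighbor=5, min_km=10.0, max_km=100.0):
        """Adaptive smoothing distance per epicenter: the distance to its n-th
        nearest neighboring epicenter, clipped to a plausible range."""
        pts = np.asarray(epicenters_km, dtype=float)
        # Pairwise distances (planar approximation, km) between epicenters.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        d_sorted = np.sort(d, axis=1)            # column 0 is the self-distance 0
        return np.clip(d_sorted[:, n_neighbor], min_km, max_km)

    # Hypothetical epicenters in projected km coordinates: a dense cluster plus
    # scattered background events.
    rng = np.random.default_rng(3)
    cluster = rng.normal([0.0, 0.0], 15.0, size=(40, 2))
    sparse = rng.uniform(-400.0, 400.0, size=(10, 2))
    bw = adaptive_bandwidths(np.vstack([cluster, sparse]))
    print("median bandwidth in cluster %.0f km, in sparse region %.0f km"
          % (np.median(bw[:40]), np.median(bw[40:])))
    ```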

  13. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  14. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can substantially be supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has shown to significantly reduce parameter uncertainties with minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the here proposed uncertainty quantification based on prediction samples from the profile likelihood provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions, where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.

  15. Tweeting and Eating: The Effect of Links and Likes on Food-Hypersensitive Consumers’ Perceptions of Tweets

    PubMed Central

    Hamshaw, Richard J. T.; Barnett, Julie; Lucas, Jane S.

    2018-01-01

    Moving on from literature that focuses on how consumers use social media and the benefits of organizations utilizing platforms for health and risk communication, this study explores how specific characteristics of tweets affect the way in which they are perceived. In an online survey, 251 participants with self-reported food hypersensitivity (FH) took part in an experiment considering the impact of tweet characteristics on perceptions of source credibility, message credibility, persuasiveness, and intention to act upon the presented information. Positioning the research hypotheses within the framework of the Elaboration Likelihood Model and Uses and Gratifications Theory, the study explored motivations for using social media and tested the impact of two affordances of Twitter: (1) the inclusion of links and (2) the number of social validation indicators (likes and retweets). Having links accompanying tweets significantly increased ratings of the tweets’ message credibility, as well as the persuasiveness of their content. Socially validated tweets had no effect on these same variables. Parents of FH children were found to utilize social media for social reasons more than hypersensitive adults; concern level surrounding a reaction did not appear to alter the level of use. Links were considered valuable in encouraging social media users to attend to useful or essential food health and risk information. Future research in this area can usefully consider the nature and the effects of social validation in relation to other social media platforms and with other groups. PMID:29740573

  17. Developing effective messages about potable recycled water: The importance of message structure and content

    NASA Astrophysics Data System (ADS)

    Price, J.; Fielding, K. S.; Gardner, J.; Leviston, Z.; Green, M.

    2015-04-01

    Community opposition is a barrier to potable recycled water schemes. Effective communication strategies about such schemes are needed. Drawing on social psychological literature, two experimental studies are presented which explore messages that improve public perceptions of potable recycled water. The Elaboration Likelihood Model of information processing and attitude change is tested and supported. Study 1 (N = 415) premeasured support for recycled water and trust in government information at Time 1. Messages varying in complexity and sidedness were presented at Time 2 (three weeks later), and support and trust were remeasured. Support increased after receiving information, provided that participants received complex rather than simple information. Trust in government was also higher after receiving information, with tentative evidence that this effect was stronger in response to two-sided than to one-sided messages. Initial attitudes to recycled water moderated responses to information: those initially neutral or ambivalent responded differently to simple and one-sided messages, compared to participants with positive or negative attitudes. Study 2 (N = 957) tested the effectiveness of information about the low relative risks and/or the benefits of potable recycled water, compared to control groups. Messages about the low risks resulted in higher support when the issue of recycled water was relevant. Messages about benefits resulted in higher perceived issue relevance, but this did not translate into greater support. The results highlight the importance of understanding people's motivation to process information, and the need to tailor communication to match attitudes and the stage of recycled water schemes' development.

  18. Narrative Interest Standard: A Novel Approach to Surrogate Decision-Making for People With Dementia.

    PubMed

    Wilkins, James M

    2017-06-17

    Dementia is a common neurodegenerative process that can significantly impair decision-making capacity as the disease progresses. When a person is found to lack capacity to make a decision, a surrogate decision-maker is generally sought to aid in decision-making. Typical bases for surrogate decision-making include the substituted judgment standard and the best interest standard. Given the heterogeneous and progressive course of dementia, however, these standards for surrogate decision-making are often insufficient in providing guidance for the decision-making for a person with dementia, escalating the likelihood of conflict in these decisions. In this article, the narrative interest standard is presented as a novel and more appropriate approach to surrogate decision-making for people with dementia. Through case presentation and ethical analysis, the standard mechanisms for surrogate decision-making for people with dementia are reviewed and critiqued. The narrative interest standard is then introduced and discussed as a dementia-specific model for surrogate decision-making. Through incorporation of elements of a best interest standard in focusing on the current benefit-burden ratio and elements of narrative to provide context, history, and flexibility for values and preferences that may change over time, the narrative interest standard allows for elaboration of an enriched context for surrogate decision-making for people with dementia. More importantly, however, a narrative approach encourages the direct contribution from people with dementia in authoring the story of what matters to them in their lives.

  19. Elaborating the Conceptual Space of Information-Seeking Phenomena

    ERIC Educational Resources Information Center

    Savolainen, Reijo

    2016-01-01

    Introduction: The article contributes to conceptual studies of information behaviour research by examining the conceptualisations of information seeking and related terms such as information search and browsing. Method: The study builds on Bates' integrated model of information seeking and searching, originally presented in 2002. The model was…

  20. Spherical Model Integrating Academic Competence with Social Adjustment and Psychopathology.

    ERIC Educational Resources Information Center

    Schaefer, Earl S.; And Others

    This study replicates and elaborates a three-dimensional, spherical model that integrates research findings concerning social and emotional behavior, psychopathology, and academic competence. Kindergarten teachers completed an extensive set of rating scales on 100 children, including the Classroom Behavior Inventory and the Child Adaptive Behavior…

  1. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
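
    The core idea can be sketched in one dimension (an illustrative toy only, not the authors' code): expand a Gaussian likelihood in Legendre polynomials over a uniform prior on [-1, 1] by least squares, then read the evidence and the posterior mean directly off the expansion coefficients.

    ```python
    # Toy 1-D spectral likelihood expansion: least-squares Legendre fit of the likelihood,
    # with evidence and posterior mean obtained semi-analytically from the coefficients.
    import numpy as np
    from numpy.polynomial import legendre as L

    data, sigma = 0.3, 0.2                                    # illustrative observation and noise
    likelihood = lambda x: np.exp(-0.5 * ((data - x) / sigma) ** 2)

    x_nodes = np.linspace(-1, 1, 400)                         # regression nodes on the prior support
    coef = L.legfit(x_nodes, likelihood(x_nodes), deg=30)     # spectral (least-squares) expansion

    # With a uniform prior p(x) = 1/2 on [-1, 1]:
    #   evidence Z = (1/2) * integral of the series = c0        (int P0 = 2, int Pn = 0 for n >= 1)
    #   posterior mean = (1/2) * (2/3) * c1 / Z = c1 / (3 * c0) (int x*Pn dx = 2/3 only for n = 1)
    Z = coef[0]
    posterior_mean = coef[1] / (3.0 * coef[0])
    print("evidence ~ %.4f, posterior mean ~ %.3f" % (Z, posterior_mean))
    ```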

  2. Evaluating the use of verbal probability expressions to communicate likelihood information in IPCC reports

    NASA Astrophysics Data System (ADS)

    Harris, Adam

    2014-05-01

    The Intergovernmental Panel on Climate Change (IPCC) prescribes that risk and uncertainty information pertaining to scientific reports, model predictions, etc. be communicated with a set of seven likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.

  3. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
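
    For orientation, the basic single-stage signal detection quantities behind such ROC models have closed-form maximum likelihood estimates; the sketch below computes d' and the criterion under the equal-variance Gaussian model from illustrative counts, and is not the extended two-stage methodology of the paper.

    ```python
    # Equal-variance Gaussian signal detection model: ML estimates of sensitivity d'
    # and criterion c from hit and false-alarm rates (illustrative counts).
    from scipy.stats import norm

    hits, misses = 42, 8                   # signal trials (hypothetical counts)
    false_alarms, correct_rej = 12, 38     # noise trials (hypothetical counts)

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rej)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    print(f"d' = {d_prime:.3f}, criterion c = {criterion:.3f}")
    ```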

  4. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
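
    As a reminder of the classical algorithm the paper modifies, the sketch below runs plain iterative proportional fitting for a 2x2 table: the fitted values of the independence log-linear model are obtained by alternately rescaling rows and columns to the observed marginal sums (the minimal sufficient statistics of that model). The table is illustrative.

    ```python
    # Classical iterative proportional fitting (IPF) for a 2x2 contingency table.
    import numpy as np

    observed = np.array([[30.0, 10.0],
                         [20.0, 40.0]])
    row_margin = observed.sum(axis=1)
    col_margin = observed.sum(axis=0)

    fitted = np.ones_like(observed)                             # starting table
    for _ in range(50):
        fitted *= (row_margin / fitted.sum(axis=1))[:, None]    # match row sums
        fitted *= (col_margin / fitted.sum(axis=0))[None, :]    # match column sums

    print(fitted)   # converges to outer(row, col) / N, the independence-model fit
    ```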

  5. Group B Streptococcus Induces Neutrophil Recruitment to Gestational Tissues and Elaboration of Extracellular Traps and Nutritional Immunity.

    PubMed

    Kothary, Vishesh; Doster, Ryan S; Rogers, Lisa M; Kirk, Leslie A; Boyd, Kelli L; Romano-Keeler, Joann; Haley, Kathryn P; Manning, Shannon D; Aronoff, David M; Gaddy, Jennifer A

    2017-01-01

    Streptococcus agalactiae, or Group B Streptococcus (GBS), is a gram-positive bacterial pathogen associated with infection during pregnancy and is a major cause of morbidity and mortality in neonates. Infection of the extraplacental membranes surrounding the developing fetus, a condition known as chorioamnionitis, is characterized histopathologically by profound infiltration of polymorphonuclear cells (PMNs, neutrophils) and greatly increases the risk for preterm labor, stillbirth, or neonatal GBS infection. The advent of animal models of chorioamnionitis provides a powerful tool to study host-pathogen relationships in vivo and ex vivo. The purpose of this study was to evaluate the innate immune response elicited by GBS and evaluate how antimicrobial strategies elaborated by these innate immune cells affect bacteria. Our work using a mouse model of GBS ascending vaginal infection during pregnancy reveals that clinically isolated GBS has the capacity to invade reproductive tissues and elicit host immune responses including infiltration of PMNs within the choriodecidua and placenta during infection, mirroring the human condition. Upon interacting with GBS, murine neutrophils elaborate DNA-containing extracellular traps, which immobilize GBS and are studded with antimicrobial molecules including lactoferrin. Exposure of GBS to holo- or apo-forms of lactoferrin reveals that the iron-sequestration activity of lactoferrin represses GBS growth and viability in a dose-dependent manner. Together, these data indicate that the mouse model of ascending infection is a useful tool to recapitulate human models of GBS infection during pregnancy. Furthermore, this work reveals that neutrophil extracellular traps ensnare GBS and repress bacterial growth via deposition of antimicrobial molecules, which drive nutritional immunity via metal sequestration strategies.

  6. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.

  8. ELASTIC NET FOR COX’S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM

    PubMed Central

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox’s proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox’s proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932
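
    The objective being traced by such a path algorithm can be written down compactly; the sketch below minimizes the negative Breslow log partial likelihood plus an elastic net penalty at a single fixed penalty level (a generic illustration with simulated data, not the exact LAR-based solution path of the paper).

    ```python
    # Elastic-net-penalized Cox partial likelihood (Breslow, assuming no tied event times),
    # minimized numerically at one fixed penalty level. Data are simulated for illustration.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n, p = 100, 5
    X = rng.normal(size=(n, p))
    time = rng.exponential(scale=np.exp(-X[:, 0]))      # hazard driven by the first covariate
    event = rng.random(n) < 0.8                          # ~80% observed events

    lam, alpha = 0.1, 0.5                                # penalty: lam*(alpha*L1 + (1-alpha)/2*L2)

    def objective(beta):
        eta = X @ beta
        order = np.argsort(time)                         # risk set of subject i: {j: t_j >= t_i}
        eta_o, event_o = eta[order], event[order]
        log_risk = np.logaddexp.accumulate(eta_o[::-1])[::-1]   # log sum over risk set of exp(eta)
        neg_pl = -np.sum((eta_o - log_risk)[event_o])
        penalty = lam * (alpha * np.sum(np.abs(beta)) + 0.5 * (1 - alpha) * np.sum(beta**2))
        return neg_pl / n + penalty

    beta_hat = minimize(objective, np.zeros(p), method="Powell").x
    print(np.round(beta_hat, 3))
    ```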

  9. Negotiating Multicollinearity with Spike-and-Slab Priors

    PubMed Central

    Ročková, Veronika

    2014-01-01

    In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout. PMID:25419004

  10. Assessing sediment hazard through a weight of evidence approach with bioindicator organisms: a practical model to elaborate data from sediment chemistry, bioavailability, biomarkers and ecotoxicological bioassays.

    PubMed

    Piva, Francesco; Ciaprini, Francesco; Onorati, Fulvio; Benedetti, Maura; Fattorini, Daniele; Ausili, Antonella; Regoli, Francesco

    2011-04-01

    Quality assessments are crucial to all activities related to removal and management of sediments. Following a multidisciplinary, weight of evidence approach, a new model is presented here for comprehensive assessment of hazards associated with polluted sediments. The lines of evidence considered were sediment chemistry, assessment of bioavailability, sub-lethal effects on biomarkers, and ecotoxicological bioassays. A conceptual and software-assisted model was developed with logical flow-charts elaborating results from each line of evidence on the basis of several chemical and biological parameters, normative guidelines or scientific evidence; the data are thus summarized into four specific synthetic indices, before their integration into an overall sediment hazard evaluation. This model was validated using European eels (Anguilla anguilla) as the bioindicator species, exposed under laboratory conditions to sediments from an industrial site, and caged under field conditions in two harbour areas. The concentrations of aliphatic hydrocarbons, polycyclic aromatic hydrocarbons and trace metals were much higher in the industrial compared to harbour sediments, and accordingly the bioaccumulation in liver and gills of exposed eels showed marked differences between the two conditions. Among biomarkers, significant variations were observed for cytochrome P450-related responses, oxidative stress biomarkers, lysosomal stability and genotoxic effects; the overall elaboration of these data, as well as those of standard ecotoxicological bioassays with bacteria, algae and copepods, confirmed a higher level of biological hazard for industrial sediments. Based on comparisons with expert judgment, the model presented efficiently discriminates between the various conditions, both as individual modules and as an integrated final evaluation, and it appears to be a powerful tool to support more complex processes of environmental risk assessment. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data

    NASA Technical Reports Server (NTRS)

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.; hide

    2013-01-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.
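
    The generic form of such a bandpower likelihood is a Gaussian in the residuals between measured and model bandpowers; the sketch below shows that form only, with illustrative stand-in arrays rather than the actual ACT/SPT data products or foreground model.

    ```python
    # Generic Gaussian bandpower likelihood: -2 ln L = (d - m)^T C^{-1} (d - m) + const.
    import numpy as np

    data_bandpowers = np.array([210.0, 150.0, 95.0, 60.0])    # illustrative measured bandpowers
    model_bandpowers = np.array([200.0, 155.0, 100.0, 58.0])  # model prediction in the same bins
    cov = np.diag([15.0, 10.0, 8.0, 6.0]) ** 2                # illustrative bandpower covariance

    resid = data_bandpowers - model_bandpowers
    chi2 = resid @ np.linalg.solve(cov, resid)
    log_like = -0.5 * chi2
    print(f"chi2 = {chi2:.2f}, chi2/dof = {chi2 / len(resid):.2f}, ln L = {log_like:.2f}")
    ```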

  12. Significance of autobiographical episodes and spacing effects in incidental memory.

    PubMed

    Toyota, Hiroshi

    2013-10-01

    Participants were presented with target words on two occasions, and were asked each time to generate a memory of a past episode associated with the targets. Participants were also instructed to rate the importance (significance elaboration) or pleasantness of the episode (pleasantness elaboration) in an orienting task, followed by an unexpected recall test. Significance elaboration led to better recall than pleasantness elaboration, but only in the spaced presentation. The spaced presentation led to better free recall than massed presentation with significance elaboration, but the difference between the two types of presentation was not observed with pleasantness elaboration. These results suggest that the significance of an episode is more critical than its pleasantness in determining the effectiveness of autobiographical elaboration in facilitating recall.

  13. Two stochastic models useful in petroleum exploration

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1972-01-01

    A model of the petroleum exploration process that tests empirically the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of areal extent, the geographic location, and the shape of oil deposits.

  14. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
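
    A simple member of the PARMA class can be fitted by conditional maximum likelihood in a few lines; the sketch below estimates season-specific AR coefficients and innovation standard deviations for a periodic AR(1) with Gaussian errors, conditioning on the first observation rather than using the exact-likelihood approximation developed in the paper. All values are simulated for illustration.

    ```python
    # Conditional maximum likelihood for a periodic AR(1) with s seasons (simulated data).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    s, n_years = 4, 60
    phi_true = np.array([0.8, 0.3, -0.2, 0.5])
    sig_true = np.array([1.0, 0.5, 1.5, 0.8])

    x = np.zeros(s * n_years)
    for t in range(1, x.size):                           # simulate the periodic AR(1)
        m = t % s
        x[t] = phi_true[m] * x[t - 1] + sig_true[m] * rng.normal()

    def neg_loglik(params):
        phi, log_sig = params[:s], params[s:]
        t = np.arange(1, x.size)
        m = t % s
        resid = x[t] - phi[m] * x[t - 1]
        return np.sum(log_sig[m] + 0.5 * (resid / np.exp(log_sig[m])) ** 2)

    fit = minimize(neg_loglik, np.zeros(2 * s), method="L-BFGS-B")
    print("phi  :", np.round(fit.x[:s], 2))
    print("sigma:", np.round(np.exp(fit.x[s:]), 2))
    ```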

  15. Systems view on spatial planning and perception based on invariants in agent-environment dynamics

    PubMed Central

    Mettler, Bérénice; Kong, Zhaodan; Li, Bin; Andersh, Jonathan

    2015-01-01

    Modeling agile and versatile spatial behavior remains a challenging task, due to the intricate coupling of planning, control, and perceptual processes. Previous results have shown that humans plan and organize their guidance behavior by exploiting patterns in the interactions between agent or organism and the environment. These patterns, described under the concept of Interaction Patterns (IPs), capture invariants arising from equivalences and symmetries in the interaction with the environment, as well as effects arising from intrinsic properties of human control and guidance processes, such as perceptual guidance mechanisms. The paper takes a systems' perspective, considering the IP as a unit of organization, and builds on its properties to present a hierarchical model that delineates the planning, control, and perceptual processes and their integration. The model's planning process is further elaborated by showing that the IP can be abstracted, using spatial time-to-go functions. The perceptual processes are elaborated from the hierarchical model. The paper provides experimental support for the model's ability to predict the spatial organization of behavior and the perceptual processes. PMID:25628524

  16. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model with the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found through TAP.

  17. Designing a Qualitative Data Collection Strategy (QDCS) for Africa - Phase 1: A Gap Analysis of Existing Models, Simulations, and Tools Relating to Africa

    DTIC Science & Technology

    2012-06-01

    generalized behavioral model characterized after the fictional Seldon equations (the one elaborated upon by Isaac Asimov in the 1951 novel, The...Foundation). Asimov described the Seldon equations as essentially statistical models with historical data of a sufficient size and variability that they

  18. Modelling in Evaluating a Working Life Project in Higher Education

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne

    2012-01-01

    This article describes an evaluation method based on collaboration between the higher education, a care home and university, in a R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…

  19. Visual Modeling as a Motivation for Studying Mathematics and Art

    ERIC Educational Resources Information Center

    Sendova, Evgenia; Grkovska, Slavica

    2005-01-01

    The paper deals with the possibility of enriching the curriculum in mathematics, informatics and art by means of visual modeling of abstract paintings. The authors share their belief that in building a computer model of a construct, one gains deeper insight into the construct, and is motivated to elaborate one's knowledge in mathematics and…

  20. Example Elaboration as a Neglected Instructional Strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Girill, T R

    Over the last decade an unfolding cognitive-psychology research program on how learners use examples to develop effective problem-solving expertise has yielded well-established empirical findings. Chi et al., Renkl, Reimann, and Neubert (in various papers) have confirmed statistically significant differences in how good and poor learners inferentially elaborate (self-explain) example steps as they study. Such example elaboration is highly relevant to software documentation and training, yet largely neglected in the current literature. This paper summarizes the neglected research on example use and puts its neglect in a disciplinary perspective. The author then shows that differences in support for example elaboration in commercial software documentation reveal previously overlooked usability issues. These issues involve example summaries, using goals and goal structures to reinforce example elaborations, and prompting readers to recognize the role of example parts. Secondly, I show how these same example elaboration techniques can build cognitive maturity among underperforming high school students who study technical writing. Principle-based elaborations, condition elaborations, and role recognition of example steps all have their place in innovative, high-school-level technical writing exercises, and all promote far-transfer problem solving. Finally, I use these studies to clarify the constructivist debate over what writers and readers contribute to text meaning. I argue that writers can influence how readers elaborate on examples, and that because of the great empirical differences in example study effectiveness (and reader choices) writers should do what they can (through within-text design features) to encourage readers to elaborate examples in the most successful ways. Example elaboration is a uniquely effective way to learn from worked technical examples. This paper summarizes years of research that clarifies example elaboration. I then show how example elaboration can make complex software documentation more useful, improve the benefits of technical writing exercises for underperforming students, and enlighten the general discussion of how writers can and should help their readers.

  1. The Effect of Urban Life on Traditional Values

    ERIC Educational Resources Information Center

    Fischer, Claude S.

    1975-01-01

    Three models are elaborated that predict an association between urbanism and nontraditional behavior. Secondary analysis of American survey data on religiosity, church attendance, and attitudes toward alcohol and birth control confirm the general urbanism-deviance association and suggest the accuracy of the model which regards such behavior as due…

  2. OER "Produsage" as a Model to Support Language Teaching and Learning

    ERIC Educational Resources Information Center

    MacKinnon, Teresa; Pasfield-Neofitou, Sarah

    2016-01-01

    Language education faculty face myriad challenges in finding teaching resources that are suitable, of high quality, and allow for the modifications needed to meet the requirements of their course contexts and their learners. The article elaborates the grassroots model of "produsage" (a portmanteau of "production" and…

  3. Individual, team, and coach predictors of players' likelihood to aggress in youth soccer.

    PubMed

    Chow, Graig M; Murray, Kristen E; Feltz, Deborah L

    2009-08-01

    The purpose of this study was to examine personal and socioenvironmental factors of players' likelihood to aggress. Participants were youth soccer players (N = 258) and their coaches (N = 23) from high school and club teams. Players completed the Judgments About Moral Behavior in Youth Sports Questionnaire (JAMBYSQ; Stephens, Bredemeier, & Shields, 1997), which assessed athletes' stage of moral development, team norm for aggression, and self-described likelihood to aggress against an opponent. Coaches were administered the Coaching Efficacy Scale (CES; Feltz, Chase, Moritz, & Sullivan, 1999). Using multilevel modeling, results demonstrated that the team norms for aggression at the athlete and team levels were significant predictors of athletes' self-described likelihood-to-aggress scores. Further, coaches' game strategy efficacy emerged as a positive predictor of their players' self-described likelihood to aggress. The findings contribute to previous research examining the socioenvironmental predictors of athletic aggression in youth sport by demonstrating the importance of coaching efficacy beliefs.

  4. The Effects of Verbal Elaboration and Visual Elaboration on Student Learning.

    ERIC Educational Resources Information Center

    Chanlin, Lih-Juan

    1997-01-01

    This study examined: (1) the effectiveness of integrating verbal elaboration (metaphors) and different visual presentation strategies (still and animated graphics) in learning biotechnology concepts; (2) whether the use of verbal elaboration with different visual presentation strategies facilitates cognitive processes; and (3) how students employ…

  5. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  6. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
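
    The weighting idea itself is compact: each sampled site's log-likelihood contribution is multiplied by the inverse of its selection probability before maximization. The sketch below applies it to a simple logistic detection/non-detection regression with simulated unequal selection probabilities; the occupancy-model version weights the site-level likelihood terms in the same way. Everything here is illustrative, not the authors' code.

    ```python
    # Design-weighted pseudo-maximum likelihood for a logistic regression on sampled sites.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(3)
    n = 200
    x = rng.normal(size=n)                        # environmental covariate
    y = rng.random(n) < expit(-0.5 + 1.2 * x)     # detection/non-detection
    incl_prob = np.where(x > 0, 0.8, 0.2)         # unequal selection probabilities (illustrative)
    sampled = rng.random(n) < incl_prob
    w = 1.0 / incl_prob[sampled]                  # design weights for the sampled sites

    def neg_pseudo_loglik(beta):
        eta = beta[0] + beta[1] * x[sampled]
        ll = y[sampled] * eta - np.logaddexp(0.0, eta)     # Bernoulli log-likelihood
        return -np.sum(w * ll)

    beta_hat = minimize(neg_pseudo_loglik, np.zeros(2)).x
    print("design-weighted estimates:", np.round(beta_hat, 2))
    ```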

  7. Business owners' action planning and its relationship to business success in three African countries.

    PubMed

    Frese, Michael; Krauss, Stefanie I; Keith, Nina; Escher, Susanne; Grabarkiewicz, Rafal; Luneng, Siv Tonje; Heers, Constanze; Unger, Jens; Friedrich, Christian

    2007-11-01

    A model of business success was developed with motivational resources (locus of control, self-efficacy, achievement motivation, and self-reported personal initiative) and cognitive resources (cognitive ability and human capital) as independent variables, business owners' elaborate and proactive planning as a mediator, and business size and growth as dependent variables. Three studies with a total of 408 African micro and small-scale business owners were conducted in South Africa, Zimbabwe, and Namibia. Structural equation analyses partially supported the hypotheses on the importance of psychological planning by the business owners. Elaborate and proactive planning was substantially related to business size and to an external evaluation of business success and was a (partial) mediator for the relationship between cognitive resources and business success. The model carries important implications for selection, training, and coaching of business owners. (c) 2007 APA

  8. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  9. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  10. Likelihood-Ratio DIF Testing: Effects of Nonnormality

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…
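
    The likelihood-ratio machinery behind such DIF tests is generic: twice the difference in maximized log-likelihoods between a model with item parameters constrained equal across groups and a model with those parameters freed is referred to a chi-square distribution. The log-likelihood values below are hypothetical placeholders, not output from fitted IRT models.

    ```python
    # Generic likelihood-ratio test, as used for IRT-based DIF testing.
    from scipy.stats import chi2

    loglik_constrained = -5243.7   # item parameters equal across groups (hypothetical value)
    loglik_free = -5239.2          # item parameters free to differ (hypothetical value)
    df = 2                         # e.g., one item's discrimination and difficulty freed

    lr_stat = 2 * (loglik_free - loglik_constrained)
    p_value = chi2.sf(lr_stat, df)
    print(f"LR = {lr_stat:.2f}, df = {df}, p = {p_value:.4f}")
    ```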

  11. A Study of Item Bias for Attitudinal Measurement Using Maximum Likelihood Factor Analysis.

    ERIC Educational Resources Information Center

    Mayberry, Paul W.

    A technique for detecting item bias that is responsive to attitudinal measurement considerations is a maximum likelihood factor analysis procedure comparing multivariate factor structures across various subpopulations, often referred to as SIFASP. The SIFASP technique allows for factorial model comparisons in the testing of various hypotheses…

  12. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  13. Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis

    PubMed Central

    Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.

    2016-01-01

    Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498

  14. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  15. False Memories for Suggestions: The Impact of Conceptual Elaboration

    ERIC Educational Resources Information Center

    Zaragoza, Maria S.; Mitchell, Karen J.; Payment, Kristie; Drivdahl, Sarah

    2011-01-01

    Relatively little attention has been paid to the potential role that reflecting on the meaning and implications of suggested events (i.e., conceptual elaboration) might play in promoting the creation of false memories. Two experiments assessed whether encouraging repeated conceptual elaboration, would, like perceptual elaboration, increase false…

  16. Interactions among Elaborative Interrogation, Knowledge, and Interest in the Process of Constructing Knowledge from Text

    ERIC Educational Resources Information Center

    Ozgungor, Sevgi; Guthrie, John T.

    2004-01-01

    The authors examined the impact of elaborative interrogation on knowledge construction during expository text reading, specifically, the interactions among elaborative interrogation, knowledge, and interest. Three measures of learning were taken: recall, inference, and coherence. Elaborative interrogation affected all aspects of learning measured,…

  17. Mother and Child Narrative Elaborations during Booksharing in Low-Income Mexican-American Dyads

    ERIC Educational Resources Information Center

    Escobar, Kelly; Melzi, Gigliana; Tamis-LeMonda, Catherine S.

    2017-01-01

    Caregivers' narrative elaborations have been consistently shown to relate to language, literacy, and cognitive skills in children. However, research with Latinos yields mixed findings in terms of how much caregivers elaborate and the benefits of elaborations for Latino children's development, especially within booksharing contexts. Moreover,…

  18. The Effects of Elaboration on Self-Learning Procedures from Text.

    ERIC Educational Resources Information Center

    Yang, Fu-mei

    This study investigated the effects of augmenting and deleting elaborations in an existing self-instructional text for a micro-computer database application, "Microsoft Works User's Manual." A total of 60 undergraduate students were randomly assigned to the original, elaborated, or unelaborated text versions. The elaborated version…

  19. Deep-Elaborative Learning of Introductory Management Accounting for Business Students

    ERIC Educational Resources Information Center

    Choo, Freddie; Tan, Kim B.

    2005-01-01

    Research by Choo and Tan (1990; 1995) suggests that accounting students, who engage in deep-elaborative learning, have a better understanding of the course materials. The purposes of this paper are: (1) to describe a deep-elaborative instructional approach (hereafter DEIA) that promotes deep-elaborative learning of introductory management…

  20. The Effects of Levels of Elaboration on Learners' Strategic Processing of Text

    ERIC Educational Resources Information Center

    Dornisch, Michele; Sperling, Rayne A.; Zeruth, Jill A.

    2011-01-01

    In the current work, we examined learners' comprehension when engaged with elaborative processing strategies. In Experiment 1, we randomly assigned students to one of five elaborative processing conditions and addressed differences in learners' lower- and higher-order learning outcomes and ability to employ elaborative strategies. Findings…

  1. Maximum Likelihood Shift Estimation Using High Resolution Polarimetric SAR Clutter Model

    NASA Astrophysics Data System (ADS)

    Harant, Olivier; Bombrun, Lionel; Vasile, Gabriel; Ferro-Famil, Laurent; Gay, Michel

    2011-03-01

    This paper deals with a Maximum Likelihood (ML) shift estimation method in the context of High Resolution (HR) Polarimetric SAR (PolSAR) clutter. Texture modeling is exposed and the generalized ML texture tracking method is extended to the merging of various sensors. Some results on displacement estimation on the Argentiere glacier in the Mont Blanc massif using dual-pol TerraSAR-X (TSX) and quad-pol RADARSAT-2 (RS2) sensors are finally discussed.

  2. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD, and a distance metric based on galaxy number density, two-point correlation function, and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
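
    The ABC logic itself is short; the sketch below is a plain rejection-ABC toy (draw from the prior, run a forward model, keep draws whose summary statistic lies within a tolerance of the observed one), not the population Monte Carlo ABC + HOD pipeline of the paper.

    ```python
    # Rejection ABC on a toy Poisson model: accept prior draws whose simulated summary
    # statistic is close to the observed one.
    import numpy as np

    rng = np.random.default_rng(4)
    observed = rng.poisson(lam=7.0, size=50)          # "observed" data from a hidden rate
    obs_summary = observed.mean()                     # summary statistic (a number-density analogue)

    def forward_model(rate):
        return rng.poisson(lam=rate, size=50)

    n_draws, tol = 20000, 0.2
    prior_draws = rng.uniform(1.0, 15.0, n_draws)     # uniform prior over the rate
    accepted = np.array([r for r in prior_draws
                         if abs(forward_model(r).mean() - obs_summary) < tol])

    print(f"accepted {accepted.size} draws; approximate posterior mean ~ {accepted.mean():.2f}")
    ```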

  3. On framing potential features of SWCNTs and MWCNTs in mixed convective flow

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Ullah, Siraj; Khan, M. Ijaz; Alsaedi, A.

    2018-03-01

    Our target in this research article is to elaborate the characteristics of the Darcy-Forchheimer relation in carbon-water nanoliquid flow induced by an impermeable stretched cylinder. The energy expression is modeled through viscous dissipation and nonlinear thermal radiation. Application of appropriate transformations yields nonlinear ODEs from the nonlinear PDEs. A shooting technique is adopted for the computation of the nonlinear ODEs. The importance of influential variables for the velocity and thermal fields is elaborated graphically. Moreover, the rate of heat transfer and the drag force are calculated and presented in tables. Our analysis reports that velocity is higher for the ratio of rate constant and the buoyancy factor when compared with porosity and volume fraction.

  4. Bayesian inference for OPC modeling

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.

    2016-03-01

    The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI), revealing champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
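
    The sampling setup described (a Student's t likelihood on the residuals, priors on the parameters, and an affine-invariant ensemble sampler) can be sketched with the emcee package, which is one implementation of an AIES; the model, data, and prior bounds below are illustrative stand-ins, not the lithographic model of the paper.

    ```python
    # AIES sampling of a two-parameter model with a Student's t likelihood (emcee).
    import numpy as np
    import emcee
    from scipy.stats import t as student_t

    rng = np.random.default_rng(5)
    x = np.linspace(0, 1, 40)
    y = 1.5 * x + 0.3 + rng.normal(0, 0.05, x.size)          # stand-in "wafer data"

    def log_posterior(theta):
        slope, intercept = theta
        if not (-10 < slope < 10 and -10 < intercept < 10):  # flat priors within bounds
            return -np.inf
        resid = y - (slope * x + intercept)
        return np.sum(student_t.logpdf(resid, df=4, scale=0.05))

    nwalkers, ndim = 16, 2
    p0 = np.array([1.0, 0.0]) + rng.normal(0, 1e-2, size=(nwalkers, ndim))
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
    sampler.run_mcmc(p0, 2000, progress=False)
    samples = sampler.get_chain(discard=500, flat=True)
    print("posterior medians:", np.round(np.median(samples, axis=0), 3))
    ```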

  5. Conceptualization of an R&D Based Learning-to-Innovate Model for Science Education

    NASA Astrophysics Data System (ADS)

    Lai, Oiki Sylvia

    The purpose of this research was to conceptualize an R&D-based learning-to-innovate (LTI) model. The problem to be addressed was the lack of a theoretical LTI model, which would inform science pedagogy. The absorptive capacity (ACAP) lens was adopted to untangle the R&D LTI phenomenon into four learning processes: problem-solving via knowledge acquisition, incremental improvement via knowledge participation, scientific discovery via knowledge creation, and product design via knowledge productivity. The four knowledge factors were the latent factors, and each factor had seven manifest elements as measured variables. The key objectives of the non-experimental quantitative survey were to measure the relative importance of the identified elements and to explore the underlying structure of the variables. A questionnaire had been prepared and was administered to more than 155 R&D professionals from four sectors - business, academic, government, and nonprofit. The results showed that every identified element was important to the R&D professionals in terms of improving the related type of innovation. The most important elements were highlighted to serve as building blocks for elaboration. In search of patterns in the data matrix, exploratory factor analysis (EFA) was performed. Principal component analysis was the first phase of EFA to extract factors, while maximum likelihood estimation (MLE) was used to estimate the model. EFA yielded the finding of two aspects in each kind of knowledge. Logical names were assigned to represent the nature of the subsets: problem and knowledge under knowledge acquisition, planning and participation under knowledge participation, exploration and discovery under knowledge creation, and construction and invention under knowledge productivity. These two constructs, within each kind of knowledge, added structure to the vague R&D-based LTI model. The research questions and hypotheses testing were addressed using correlation analysis. The alternative hypotheses that there were positive relationships between knowledge factors and their corresponding types of innovation were accepted. In-depth study of each process is recommended in both research and application. Experimental tests are needed in order to ultimately present the LTI model to enhance the scientific knowledge absorptive capacity of learners and thereby facilitate their innovation performance.

  6. Thoughts About Created Environment: A Neuman Systems Model Concept.

    PubMed

    Verberk, Frans; Fawcett, Jacqueline

    2017-04-01

    This essay is about the Neuman systems model concept of the created environment. The essay, based on work by Frans Verberk, a Neuman systems model scholar from the Netherlands, extends understanding of the created environment by explaining how this distinctive perspective of environment represents an elaboration of the physiological, psychological, sociocultural, developmental, and spiritual variables, which are other central concepts of the Neuman Systems Model.

  7. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  8. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  9. What are hierarchical models and how do we analyze them?

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
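
    As a concrete illustration of the first canonical model, the sketch below maximizes the marginal likelihood of a basic site-occupancy model (constant occupancy psi and detection probability p) on simulated detection histories. The parameter values and data are invented for illustration and are not taken from the book.

        # Sketch: maximum likelihood for a basic site-occupancy model with constant
        # occupancy psi and detection probability p, given J visits at each of S sites.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit
        from scipy.stats import binom

        rng = np.random.default_rng(2)
        S, J, psi_true, p_true = 200, 4, 0.6, 0.3
        z = rng.binomial(1, psi_true, S)          # latent occupancy state (unobserved)
        y = rng.binomial(J, p_true * z)           # number of visits with a detection

        def negloglik(theta):
            psi, p = expit(theta)                 # map unconstrained params to (0, 1)
            lik = psi * binom.pmf(y, J, p)        # contribution from occupied sites
            lik[y == 0] += 1 - psi                # never-detected sites may be unoccupied
            return -np.sum(np.log(lik))

        fit = minimize(negloglik, x0=np.zeros(2), method="Nelder-Mead")
        print("psi_hat, p_hat =", np.round(expit(fit.x), 3))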

  10. Stable aesthetic standards delusion: changing 'artistic quality' by elaboration.

    PubMed

    Carbon, Claus-Christian; Hesslinger, Vera M

    2014-01-01

    The present study challenges the notion that judgments of artistic quality are based on stable aesthetic standards. We propose that such standards are a delusion and that judgments of artistic quality are the combined result of exposure, elaboration, and discourse. We ran two experiments using elaboration tasks based on the repeated evaluation technique in which different versions of the Mona Lisa had to be elaborated deeply. During the initial task either the version known from the Louvre or an alternative version owned by the Prado was elaborated; during the second task both versions were elaborated in a comparative fashion. After both tasks multiple blends of the two versions had to be evaluated concerning several aesthetic key variables. Judgments of artistic quality of the blends were significantly different depending on the initially elaborated version of the Mona Lisa, indicating experience-based aesthetic processing, which contradicts the notion of stable aesthetic standards.

  11. Measuring Knowledge Elaboration Based on a Computer-Assisted Knowledge Map Analytical Approach to Collaborative Learning

    ERIC Educational Resources Information Center

    Zheng, Lanqin; Huang, Ronghuai; Hwang, Gwo-Jen; Yang, Kaicheng

    2015-01-01

    The purpose of this study is to quantitatively measure the level of knowledge elaboration and explore the relationships between prior knowledge of a group, group performance, and knowledge elaboration in collaborative learning. Two experiments were conducted to investigate the level of knowledge elaboration. The collaborative learning objective in…

  12. A spatial Bayesian network model to assess the benefits of early warning for urban flood risk to people

    NASA Astrophysics Data System (ADS)

    Balbi, Stefano; Villa, Ferdinando; Mojtahed, Vahid; Hegetschweiler, Karin Tessa; Giupponi, Carlo

    2016-06-01

    This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; and produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of (1) likelihood of non-fatal physical injury, (2) likelihood of post-traumatic stress disorder and (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the effect of improving an existing early warning system, taking into account the reliability, lead time and scope (i.e., coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event.

  13. A simulation study on Bayesian Ridge regression models for several collinearity levels

    NASA Astrophysics Data System (ADS)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, if collinearity is present, one or several predictor variables are usually omitted from the model. Sometimes, however, for medical or economic reasons for instance, all predictors are important and should be kept in the model. Ridge regression is commonly used in such research to cope with collinearity: weights for the predictor variables are used when estimating the parameters, and estimation can then follow the likelihood approach. A Bayesian version of this estimation is an alternative. It has been less popular than the likelihood approach because of difficulties such as computation, but with the recent improvement of computational methodology this is no longer a serious obstacle. This paper discusses a simulation study for evaluating the characteristics of Bayesian Ridge regression parameter estimates. Several simulation settings are considered, based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
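
    The sketch below shows the kind of comparison such a simulation performs, fitting ordinary least squares and a Bayesian ridge model to a small sample with two nearly collinear predictors using scikit-learn. The data-generating values are illustrative and do not reproduce the paper's simulation design.

        # Sketch: Bayesian ridge vs. ordinary least squares on a small, collinear sample.
        import numpy as np
        from sklearn.linear_model import BayesianRidge, LinearRegression

        rng = np.random.default_rng(3)
        n = 25                                    # deliberately small sample
        x1 = rng.normal(size=n)
        x2 = x1 + 0.05 * rng.normal(size=n)       # nearly collinear with x1
        X = np.column_stack([x1, x2])
        y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

        ols = LinearRegression().fit(X, y)
        bayes = BayesianRidge().fit(X, y)
        print("OLS coefficients:      ", np.round(ols.coef_, 2))   # typically unstable
        print("Bayesian ridge coefs.: ", np.round(bayes.coef_, 2)) # shrunk and more stable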

  14. Chemical modeling of acid-base properties of soluble biopolymers derived from municipal waste treatment materials.

    PubMed

    Tabasso, Silvia; Berto, Silvia; Rosato, Roberta; Marinos, Janeth Alicia Tafur; Ginepro, Marco; Zelano, Vincenzo; Daniele, Pier Giuseppe; Montoneri, Enzo

    2015-02-04

    This work reports a study of the proton-binding capacity of biopolymers obtained from different materials supplied by a municipal biowaste treatment plant located in Northern Italy. One material was the anaerobic fermentation digestate of the urban wastes organic humid fraction. The others were the compost of home and public gardening residues and the compost of the mix of the above residues, digestate and sewage sludge. These materials were hydrolyzed under alkaline conditions to yield the biopolymers by saponification. The biopolymers were characterized by 13C NMR spectroscopy, elemental analysis and potentiometric titration. The titration data were elaborated to attain chemical models for interpretation of the proton-binding capacity of the biopolymers obtaining the acidic sites concentrations and their protonation constants. The results obtained with the models and by NMR spectroscopy were elaborated together in order to better characterize the nature of the macromolecules. The chemical nature of the biopolymers was found dependent upon the nature of the sourcing materials.

  15. Chemical Modeling of Acid-Base Properties of Soluble Biopolymers Derived from Municipal Waste Treatment Materials

    PubMed Central

    Tabasso, Silvia; Berto, Silvia; Rosato, Roberta; Tafur Marinos, Janeth Alicia; Ginepro, Marco; Zelano, Vincenzo; Daniele, Pier Giuseppe; Montoneri, Enzo

    2015-01-01

    This work reports a study of the proton-binding capacity of biopolymers obtained from different materials supplied by a municipal biowaste treatment plant located in Northern Italy. One material was the anaerobic fermentation digestate of the urban wastes organic humid fraction. The others were the compost of home and public gardening residues and the compost of the mix of the above residues, digestate and sewage sludge. These materials were hydrolyzed under alkaline conditions to yield the biopolymers by saponification. The biopolymers were characterized by 13C NMR spectroscopy, elemental analysis and potentiometric titration. The titration data were elaborated to attain chemical models for interpretation of the proton-binding capacity of the biopolymers obtaining the acidic sites concentrations and their protonation constants. The results obtained with the models and by NMR spectroscopy were elaborated together in order to better characterize the nature of the macromolecules. The chemical nature of the biopolymers was found dependent upon the nature of the sourcing materials. PMID:25658795

  16. How users adopt healthcare information: An empirical study of an online Q&A community.

    PubMed

    Jin, Jiahua; Yan, Xiangbin; Li, Yijun; Li, Yumei

    2016-02-01

    The emergence of social media technology has led to the creation of many online healthcare communities, where patients can easily share and look for healthcare-related information from peers who have experienced a similar problem. However, with increased user-generated content, there is a need to constantly analyse which content should be trusted as one sifts through enormous amounts of healthcare information. This study aims to explore patients' healthcare information seeking behavior in online communities. Based on dual-process theory and the knowledge adoption model, we proposed a healthcare information adoption model for online communities. This model highlights that information quality, emotional support, and source credibility are antecedent variables of adoption likelihood of healthcare information, and competition among repliers and involvement of recipients moderate the relationship between the antecedent variables and adoption likelihood. Empirical data were collected from the healthcare module of China's biggest Q&A community, Baidu Knows. Text mining techniques were adopted to calculate the information quality and emotional support contained in each reply text. A binary logistic regression model and hierarchical regression approach were employed to test the proposed conceptual model. Information quality, emotional support, and source credibility have significant and positive impact on healthcare information adoption likelihood, and among these factors, information quality has the biggest impact on a patient's adoption decision. In addition, competition among repliers and involvement of recipients were tested as moderating effects between these antecedent factors and the adoption likelihood. Results indicate competition among repliers positively moderates the relationship between source credibility and adoption likelihood, and recipients' involvement positively moderates the relationship between information quality, source credibility, and adoption decision. In addition to information quality and source credibility, emotional support has significant positive impact on individuals' healthcare information adoption decisions. Moreover, the relationships between information quality, source credibility, emotional support, and adoption decision are moderated by competition among repliers and involvement of recipients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
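
    A moderation effect of this kind is commonly tested with an interaction term in the logistic model. The sketch below shows a minimal version in statsmodels on simulated data; the column names (quality, credibility, emotion, involvement, adopted) and coefficients are hypothetical stand-ins, not the study's variables or estimates.

        # Sketch: binary logistic regression with a moderating (interaction) effect.
        # The data frame and column names are hypothetical.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 500
        df = pd.DataFrame({
            "quality":     rng.normal(size=n),
            "credibility": rng.normal(size=n),
            "emotion":     rng.normal(size=n),
            "involvement": rng.integers(0, 2, size=n),   # 0 = low, 1 = high involvement
        })
        logit_p = (-0.5 + 0.8 * df.quality + 0.4 * df.credibility + 0.3 * df.emotion
                   + 0.5 * df.quality * df.involvement)
        df["adopted"] = rng.binomial(1, np.asarray(1 / (1 + np.exp(-logit_p))))

        # 'quality * involvement' expands to both main effects plus their interaction.
        model = smf.logit("adopted ~ quality * involvement + credibility + emotion", data=df)
        print(model.fit(disp=0).summary())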

  17. Re-Framing Inclusive Education through the Capability Approach: An Elaboration of the Model of Relational Inclusion

    ERIC Educational Resources Information Center

    Dalkilic, Maryam; Vadeboncoeur, Jennifer A.

    2016-01-01

    Scholars have called for the articulation of new frameworks in special education that are responsive to culture and context and that address the limitations of medical and social models of disability. In this article, we advance a theoretical and practical framework for inclusive education based on the integration of a model of relational…

  18. The Nature of Study Programmes in Vocational Education: Evaluation of the Model for Comprehensive Competence-Based Vocational Education in the Netherlands

    ERIC Educational Resources Information Center

    Sturing, Lidwien; Biemans, Harm J. A.; Mulder, Martin; de Bruijn, Elly

    2011-01-01

    In a previous series of studies, a model of comprehensive competence-based vocational education (CCBE model) was developed, consisting of eight principles of competence-based vocational education (CBE) that were elaborated for four implementation levels (Wesselink et al. "European journal of vocational training" 40:38-51 2007a). The…

  19. Using the β-binomial distribution to characterize forest health

    Treesearch

    S.J. Zarnoch; R.L. Anderson; R.M. Sheffield

    1995-01-01

    The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
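
    A minimal sketch of maximum likelihood estimation for a β-binomial model of per-plot counts (e.g., infected trees out of trees examined) is shown below using SciPy. The data are simulated and the parameterization is the standard (a, b) form; this illustrates the general technique, not the paper's procedure.

        # Sketch: maximum likelihood fit of a beta-binomial model to per-plot counts.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import betabinom

        rng = np.random.default_rng(5)
        n_trees = rng.integers(10, 30, size=60)                    # trees examined per plot
        y = betabinom.rvs(n_trees, 2.0, 6.0, random_state=6)       # infected trees (simulated)

        def negloglik(log_ab):
            a, b = np.exp(log_ab)                                  # keep a, b positive
            return -np.sum(betabinom.logpmf(y, n_trees, a, b))

        fit = minimize(negloglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
        a_hat, b_hat = np.exp(fit.x)
        print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, mean infection rate = {a_hat / (a_hat + b_hat):.2f}")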

  20. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
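
    For asymptotic results of this type, power for a likelihood ratio (or Wald) test can be computed from a noncentral chi-square distribution once the noncentrality implied by the DIF effect size is known. In the sketch below the per-examinee noncentrality is an assumed illustrative value rather than a quantity derived from the article's formulas.

        # Sketch: asymptotic power of a chi-square distributed test (e.g. a likelihood
        # ratio test for one DIF parameter) as a function of sample size.
        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import chi2, ncx2

        alpha, df = 0.05, 1                  # significance level; one DIF parameter tested
        delta = 0.01                         # assumed noncentrality per examinee (illustrative)
        crit = chi2.ppf(1 - alpha, df)

        for n in (200, 500, 1000, 2000):
            power = 1 - ncx2.cdf(crit, df, n * delta)
            print(f"n = {n:5d}   power = {power:.3f}")

        # Sample size needed to reach 80% power:
        n_req = brentq(lambda n: 1 - ncx2.cdf(crit, df, n * delta) - 0.80, 10, 10000)
        print("approximate n for 80% power:", int(np.ceil(n_req)))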

  1. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  2. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  3. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
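
    The cost-function idea can be illustrated with a toy output-error example: simulate the response of a one-parameter first-order system, then recover the parameter by minimizing the negative log-likelihood of the measurement residuals. The system, noise level, and input below are invented for illustration and are unrelated to the flight-data application.

        # Sketch: output-error maximum likelihood for one unknown parameter 'a' of the
        # first-order discrete system x[k+1] = x[k] + dt * (-a * x[k] + u[k]), observed in noise.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(13)
        dt, a_true, sigma = 0.05, 2.0, 0.05
        u = np.sin(0.5 * np.arange(200) * dt)              # known input history

        def simulate(a):
            x = np.zeros(len(u))
            for k in range(len(u) - 1):
                x[k + 1] = x[k] + dt * (-a * x[k] + u[k])
            return x

        z = simulate(a_true) + sigma * rng.normal(size=len(u))   # noisy measurements

        def cost(a):                                       # negative log-likelihood up to constants
            r = z - simulate(a)
            return 0.5 * np.sum(r ** 2) / sigma ** 2

        res = minimize_scalar(cost, bounds=(0.1, 10.0), method="bounded")
        print("estimated a:", round(res.x, 3))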

  4. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
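
    The sketch below implements a commonly cited form of the relative log posterior for M equal-width bins (multinomial likelihood with a Jeffreys-type prior) and selects the M that maximizes it. Treat it as an illustration of the approach rather than the packaged optBINS code.

        # Sketch: choose the number of equal-width histogram bins by maximizing a
        # marginal posterior over M (piecewise-constant density, multinomial likelihood,
        # Jeffreys-type prior), using a commonly cited optBINS-style expression.
        import numpy as np
        from scipy.special import gammaln

        def log_posterior(data, M):
            counts, _ = np.histogram(data, bins=M)
            N = data.size
            return (N * np.log(M)
                    + gammaln(M / 2.0) - M * gammaln(0.5) - gammaln(N + M / 2.0)
                    + np.sum(gammaln(counts + 0.5)))

        rng = np.random.default_rng(7)
        data = rng.normal(size=1000)
        Ms = np.arange(2, 101)
        logp = np.array([log_posterior(data, M) for M in Ms])
        print("optimal number of bins:", Ms[np.argmax(logp)])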

  5. A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2012-01-01

    This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659

  6. Understanding Self-Sufficiency of Welfare Leavers in Illinois: Elaborating Models with Psychosocial Factors.

    ERIC Educational Resources Information Center

    Julnes, George; Fan, Xitao; Hayashi, Kentaro

    2001-01-01

    Used survey (for 1,001 adults) and administrative data (for 137,330 first-exit cases) in structural equation modeling to examine psychological and social factors as determinants of welfare dependency and self-sufficiency. Findings show well-being to be a predictor of low recidivism and high employment. (SLD)

  7. A Review of Structural Equation Modeling Applications in Turkish Educational Science Literature, 2010-2015

    ERIC Educational Resources Information Center

    Karakaya-Ozyer, Kubra; Aksu-Dunya, Beyza

    2018-01-01

    Structural equation modeling (SEM) is one of the most popular multivariate statistical techniques in Turkish educational research. This study elaborates the SEM procedures employed by 75 educational research articles which were published from 2010 to 2015 in Turkey. After documenting and coding 75 academic papers, categorical frequencies and…

  8. Agency, Values, and Well-Being: A Human Development Model

    ERIC Educational Resources Information Center

    Welzel, Christian; Inglehart, Ronald

    2010-01-01

    This paper argues that feelings of agency are linked to human well-being through a sequence of adaptive mechanisms that promote human development, once existential conditions become permissive. In the first part, we elaborate on the evolutionary logic of this model and outline why an evolutionary perspective is helpful to understand changes in…

  9. Towards Graduateness: Exploring Academic Intellectual Development in University Master's Students

    ERIC Educational Resources Information Center

    Steur, Jessica; Jansen, Ellen; Hofman, Adriaan

    2016-01-01

    Our research aims to contribute to the body of knowledge on graduateness by proposing a model that explicates the expected level performance of graduates. In this study, the model is elaborated for 3 graduateness domains: reflective thinking, scholarship, and moral citizenship. We used data on students' perceived abilities in these domains that…

  10. Six Rehearsal Techniques for the Public Speaker: Improving Memory, Increasing Delivery Skills and Reducing Speech Stress.

    ERIC Educational Resources Information Center

    Crane, Loren D.

    This paper describes six specific techniques that speech communication students may use in rehearsals to improve memory, to increase delivery skills, and to reduce speech stress. The techniques are idea association, covert modeling, desensitization, language elaboration, overt modeling, and self-regulation. Recent research is reviewed that…

  11. Entomology: Promoting Creativity in the Science Lab

    ERIC Educational Resources Information Center

    Akcay, Behiye B.

    2013-01-01

    A class activity has been designed to help fourth grade students to identify basic insect features as a means of promoting student creativity while making an imaginary insect model. The 5Es (Engage, Explore, Explain, Extend [or Elaborate], and Evaluate) learning cycle teaching model is used. The 5Es approach allows students to work in small…

  12. Let's Get Charged Up

    ERIC Educational Resources Information Center

    Duran, Emilio; Worch, Eric; Boros, Amy; Keeley, Page

    2017-01-01

    One of the most powerful strategies to support next generation science instruction is the use of instructional models. The Biological Sciences Curriculum Study 5E (Engage, Explore, Explain, Elaborate, and Evaluate) instructional model is arguably the most widely used version of a learning cycle in today's classrooms. The use of the 5Es as an…

  13. Joint Control for Dummies: An Elaboration of Lowenkron's Model of Joint (Stimulus) Control

    ERIC Educational Resources Information Center

    Sidener, David W.

    2006-01-01

    The following paper describes Lowenkron's model of joint (stimulus) control. Joint control is described as a means of accounting for performances, especially generalized performances, for which a history of contingency control does not provide an adequate account. Examples are provided to illustrate instances in which joint control may facilitate…

  14. Elaborations on the Socioegocentric and Dual-Level Connectionist Models of Group Interaction Processes

    ERIC Educational Resources Information Center

    Hewes, Dean E.

    2009-01-01

    The purpose of the author's contribution to this colloquy was to spark conversation on the theoretical nature of communication processes and the evidentiary requirements for testing their relationship to group outcomes. Co-discussants have raised important issues concerning the philosophical basis of the socioegocentric model (SM) and dual-level…

  15. Assumptions of Asian American Similarity: The Case of Filipino and Chinese American Students

    ERIC Educational Resources Information Center

    Agbayani-Siewert, Pauline

    2004-01-01

    The conventional research model of clustering ethnic groups into four broad categories risks perpetuating a pedagogy of stereotypes in social work policies and practice methods. Using an elaborated research model, this study tested the assumption of cultural similarity of Filipino and Chinese American college students by examining attitudes,…

  16. Analysis of a Teacher's Pedagogical Arguments Using Toulmin's Model and Argumentation Schemes

    ERIC Educational Resources Information Center

    Metaxas, N.; Potari, D.; Zachariades, T.

    2016-01-01

    In this article, we elaborate methodologies to study the argumentation speech of a teacher involved in argumentative activities. The standard tool of analysis of teachers' argumentation concerning pedagogical matters is Toulmin's model. The theory of argumentation schemes offers an alternative perspective on the analysis of arguments. We propose…

  17. Implementing the Lab School Club Model at the Academy in Manayunk

    ERIC Educational Resources Information Center

    Herman, Chris

    2010-01-01

    Central to The Lab School model is Sally Smith's Club Methodology, the full immersion of students into a time period where historical information is learned through multi-sensory activities. While immersed, through the use of costumes and elaborately decorated classrooms, students are engaged in project-based learning. As the student's…

  18. Developing Explanations and Developing Understanding: Students Explain the Phases of the Moon Using Visual Representations

    ERIC Educational Resources Information Center

    Parnafes, Orit

    2012-01-01

    This article presents a theoretical model of the process by which students construct and elaborate explanations of scientific phenomena using visual representations. The model describes progress in the underlying conceptual processes in students' explanations as a reorganization of fine-grained knowledge elements based on the Knowledge in Pieces…

  19. Model-independent assessment of current direct searches for spin-dependent dark matter.

    PubMed

    Giuliani, F

    2004-10-15

    I evaluate the current results of spin-dependent weakly interacting massive particle searches within a model-independent framework, showing the most restrictive limits to date derive from the combination of xenon and sodium iodide experiments. The extension of this analysis to the case of positive signal experiments is elaborated.

  20. The dual pathway model of AD/HD: an elaboration of neuro-developmental characteristics.

    PubMed

    Sonuga-Barke, Edmund J S

    2003-11-01

    The currently dominant neuro-cognitive model of Attention Deficit Hyperactivity Disorder (AD/HD) presents the condition as executive dysfunction (EDF) underpinned by disturbances in the fronto-dorsal striatal circuit and associated dopaminergic branches (e.g. meso-cortical). In contrast, motivationally-based accounts focus on altered reward processes and implicate fronto-ventral striatal reward circuits and those meso-limbic branches that terminate in the ventral striatum, especially the nucleus accumbens. One such account, delay aversion (DEL), presents AD/HD as a motivational style, characterised by attempts to escape or avoid delay, arising from fundamental disturbances in these reward centres. While traditionally regarded as competing, EDF and DEL models have recently been presented as complementary accounts of two psycho-patho-physiological subtypes of AD/HD with different developmental pathways, underpinned by different cortico-striatal circuits and modulated by different branches of the dopamine system. In the current paper we describe the development of this model in more detail. We elaborate on the neuro-circuitry possibly underpinning these two pathways and explore their developmental significance within a neuro-ecological framework.

  1. Bayesian hierarchical modeling for detecting safety signals in clinical trials.

    PubMed

    Xia, H Amy; Ma, Haijun; Carlin, Bradley P

    2011-09-01

    Detection of safety signals from clinical trial adverse event data is critical in drug development, but carries a challenging statistical multiplicity problem. Bayesian hierarchical mixture modeling is appealing for its ability to borrow strength across subgroups in the data, as well as moderate extreme findings most likely due merely to chance. We implement such a model for subject incidence (Berry and Berry, 2004) using a binomial likelihood, and extend it to subject-year adjusted incidence rate estimation under a Poisson likelihood. We use simulation to choose a signal detection threshold, and illustrate some effective graphics for displaying the flagged signals.

  2. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.

  3. The Bidirectional Nature of Narrative Scaffolding: Latino Caregivers' Elaboration While Creating Stories from a Picture Book

    ERIC Educational Resources Information Center

    Schick, Adina R.; Melzi, Gigliana; Obregón, Javanna

    2017-01-01

    Although caregiver narrative elaboration is seen as a critical dimension for children's development of narrative skills, research has yet to show a predictive relation between caregiver elaboration and child outcomes for low-income Latino children. The present study explored whether specific types of narrative elaboration were predicted by and…

  4. Problemes en enseignement fonctionnel des langues (Problems in the Functional Teaching of Languages). Publication B-103.

    ERIC Educational Resources Information Center

    Alvarez, Gerardo, Ed.; Huot, Diane, Ed.

    Articles include: (1) "L'elaboration du materiel pedagogique pour des publics adultes" (The Elaboration of Teaching Materials for the Adult Public) by G. Painchaud-Leblanc, (2) "L'elaboration d'un programme d'etudes en francais langue seconde a partir des donnees recentes en didactique des langues" (The Elaboration of a Program…

  5. Group B Streptococcus Induces Neutrophil Recruitment to Gestational Tissues and Elaboration of Extracellular Traps and Nutritional Immunity

    PubMed Central

    Kothary, Vishesh; Doster, Ryan S.; Rogers, Lisa M.; Kirk, Leslie A.; Boyd, Kelli L.; Romano-Keeler, Joann; Haley, Kathryn P.; Manning, Shannon D.; Aronoff, David M.; Gaddy, Jennifer A.

    2017-01-01

    Streptococcus agalactiae, or Group B Streptococcus (GBS), is a gram-positive bacterial pathogen associated with infection during pregnancy and is a major cause of morbidity and mortality in neonates. Infection of the extraplacental membranes surrounding the developing fetus, a condition known as chorioamnionitis, is characterized histopathologically by profound infiltration of polymorphonuclear cells (PMNs, neutrophils) and greatly increases the risk for preterm labor, stillbirth, or neonatal GBS infection. The advent of animal models of chorioamnionitis provides a powerful tool to study host-pathogen relationships in vivo and ex vivo. The purpose of this study was to evaluate the innate immune response elicited by GBS and evaluate how antimicrobial strategies elaborated by these innate immune cells affect bacteria. Our work using a mouse model of GBS ascending vaginal infection during pregnancy reveals that clinically isolated GBS has the capacity to invade reproductive tissues and elicit host immune responses including infiltration of PMNs within the choriodecidua and placenta during infection, mirroring the human condition. Upon interacting with GBS, murine neutrophils elaborate DNA-containing extracellular traps, which immobilize GBS and are studded with antimicrobial molecules including lactoferrin. Exposure of GBS to holo- or apo-forms of lactoferrin reveals that the iron-sequestration activity of lactoferrin represses GBS growth and viability in a dose-dependent manner. Together, these data indicate that the mouse model of ascending infection is a useful tool to recapitulate human models of GBS infection during pregnancy. Furthermore, this work reveals that neutrophil extracellular traps ensnare GBS and repress bacterial growth via deposition of antimicrobial molecules, which drive nutritional immunity via metal sequestration strategies. PMID:28217556

  6. Quantifying the Establishment Likelihood of Invasive Alien Species Introductions Through Ports with Application to Honeybees in Australia.

    PubMed

    Heersink, Daniel K; Caley, Peter; Paini, Dean R; Barry, Simon C

    2016-05-01

    The cost of an uncontrolled incursion of invasive alien species (IAS) arising from undetected entry through ports can be substantial, and knowledge of port-specific risks is needed to help allocate limited surveillance resources. Quantifying the establishment likelihood of such an incursion requires quantifying the ability of a species to enter, establish, and spread. Estimation of the approach rate of IAS into ports provides a measure of likelihood of entry. Data on the approach rate of IAS are typically sparse, and the combinations of risk factors relating to country of origin and port of arrival diverse. This presents challenges to making formal statistical inference on establishment likelihood. Here we demonstrate how these challenges can be overcome with judicious use of mixed-effects models when estimating the incursion likelihood into Australia of the European (Apis mellifera) and Asian (A. cerana) honeybees, along with the invasive parasites of biosecurity concern they host (e.g., Varroa destructor). Our results demonstrate how skewed the establishment likelihood is, with one-tenth of the ports accounting for 80% or more of the likelihood for both species. These results have been utilized by biosecurity agencies in the allocation of resources to the surveillance of maritime ports. © 2015 Society for Risk Analysis.

  7. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  8. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    PubMed

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models estimate the soil water content at specified soil water potentials (-0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa) from the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machines (SVM) methodology was used for model development, and a new methodology for building retention function models is proposed. In contrast to previous approaches reported in the literature, the ν-SVM method was used for model development, and the results were compared with the previously used C-SVM method. Genetic algorithms were used as the optimisation framework for searching the models' parameters. A new form of the objective function used in the parameter search is proposed, which allowed for development of models with better prediction capabilities; this objective function avoids the overestimation of models that is typically encountered when the root mean squared error is used as the objective. The developed models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67-0.92. The studies demonstrate the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
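
    The sketch below shows a ν-SVM regression pedotransfer model of the general kind described, using scikit-learn's NuSVR on simulated soil predictors. The feature names, target, and hyperparameter grid are assumptions for illustration, and the genetic-algorithm search is replaced by an ordinary cross-validated grid search.

        # Sketch: nu-SVM regression as a point pedotransfer function. The features,
        # target and hyperparameter grid are hypothetical; the genetic-algorithm search
        # is replaced by an ordinary cross-validated grid search.
        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import NuSVR

        rng = np.random.default_rng(8)
        n = 200
        X = rng.uniform(size=(n, 5))              # e.g. sand, silt, clay, porosity, bulk density
        theta = 0.1 + 0.3 * X[:, 3] - 0.1 * X[:, 4] + 0.05 * rng.normal(size=n)  # water content

        pipe = make_pipeline(StandardScaler(), NuSVR())
        grid = {"nusvr__nu": [0.25, 0.5, 0.75], "nusvr__C": [1, 10, 100]}
        search = GridSearchCV(pipe, grid, cv=5, scoring="r2").fit(X, theta)
        print("best parameters:", search.best_params_)
        print("cross-validated R^2:", round(search.best_score_, 3))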

  9. Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series

    NASA Astrophysics Data System (ADS)

    Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.

    2018-03-01

    Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
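
    The core idea of a zero-inflated error likelihood can be sketched as a mixture of a point mass at zero and a continuous density for the nonzero residuals. The sketch below uses a Gaussian error model and a constant zero probability purely for illustration; the ZI-GL itself modifies the more flexible Generalized Likelihood.

        # Sketch: log-likelihood under a zero-inflated error model: with probability
        # pi0 an observation is exactly zero, otherwise the residual follows a
        # continuous (here Gaussian) error density. Simplified analogue of the ZI-GL.
        import numpy as np
        from scipy.stats import norm

        def zero_inflated_loglik(obs, sim, pi0, sigma):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            is_zero = obs == 0.0
            ll_zeros = np.log(pi0) * np.sum(is_zero)
            resid = obs[~is_zero] - sim[~is_zero]
            ll_nonzero = np.sum(np.log(1.0 - pi0) + norm.logpdf(resid, scale=sigma))
            return ll_zeros + ll_nonzero

        obs = np.array([0.0, 0.0, 0.3, 1.2, 0.0, 0.8])    # e.g. an interception series with many zeros
        sim = np.array([0.1, 0.0, 0.4, 1.0, 0.2, 0.7])
        print(zero_inflated_loglik(obs, sim, pi0=0.4, sigma=0.2))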

  10. Linking Illness in Parents to Health Anxiety in Offspring: Do Beliefs about Health Play a Role?

    PubMed

    Alberts, Nicole M; Hadjistavropoulos, Heather D; Sherry, Simon B; Stewart, Sherry H

    2016-01-01

    The cognitive behavioural (CB) model of health anxiety proposes parental illness leads to elevated health anxiety in offspring by promoting the acquisition of specific health beliefs (e.g. overestimation of the likelihood of illness). Our study tested this central tenet of the CB model. Participants were 444 emerging adults (18-25-years-old) who completed online measures and were categorized into those with healthy parents (n = 328) or seriously ill parents (n = 116). Small (d = .21), but significant, elevations in health anxiety, and small to medium (d = .40) elevations in beliefs about the likelihood of illness were found among those with ill vs. healthy parents. Mediation analyses indicated the relationship between parental illness and health anxiety was mediated by beliefs regarding the likelihood of future illness. Our study incrementally advances knowledge by testing and supporting a central proposition of the CB model. The findings add further specificity to the CB model by highlighting the importance of a specific health belief as a central contributor to health anxiety among offspring with a history of serious parental illness.

  11. Evidence of seasonal variation in longitudinal growth of height in a sample of boys from Stuttgart Carlsschule, 1771-1793, using combined principal component analysis and maximum likelihood principle.

    PubMed

    Lehmann, A; Scheffler, Ch; Hermanussen, M

    2010-02-01

    Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling in historic height data also. Copyright (c) 2009 Elsevier GmbH. All rights reserved.

  12. A scan statistic for binary outcome based on hypergeometric probability model, with an application to detecting spatial clusters of Japanese encephalitis.

    PubMed

    Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong

    2013-01-01

    As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for the binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis provides an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test to assess significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation using independent benchmark data indicates that the test statistic based on the Hypergeometric model outweighs Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
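
    The scanning idea can be sketched in one dimension: evaluate a hypergeometric log-probability for the case count inside each candidate window and assess the most extreme window by Monte Carlo randomization. The sketch below is a simplified illustration in that spirit, with invented regions and a crude surprise statistic, not the authors' spatial implementation.

        # Sketch: one-dimensional scan using a hypergeometric log-probability for the
        # cases falling inside each candidate window, with Monte Carlo randomization
        # to assess the significance of the most extreme window.
        import numpy as np
        from scipy.stats import hypergeom

        rng = np.random.default_rng(9)
        pop = rng.integers(50, 200, size=30)               # population per region (illustrative)
        cases = rng.binomial(pop, 0.02)
        cases[12:15] += rng.binomial(pop[12:15], 0.05)     # planted cluster
        N, C = pop.sum(), cases.sum()

        def best_window(cases, pop):
            best = (-np.inf, None)
            for i in range(len(pop)):
                for j in range(i + 1, min(i + 8, len(pop)) + 1):   # windows up to 8 regions wide
                    n_win, c_win = pop[i:j].sum(), cases[i:j].sum()
                    surprise = -hypergeom.logpmf(c_win, N, C, n_win)
                    if surprise > best[0]:
                        best = (surprise, (i, j))
            return best

        obs_score, window = best_window(cases, pop)

        # Monte Carlo: redistribute the C cases at random over the population under H0.
        units = np.repeat(np.arange(len(pop)), pop)
        sims = [best_window(np.bincount(rng.choice(units, size=C, replace=False),
                                        minlength=len(pop)), pop)[0]
                for _ in range(199)]
        p_value = (1 + np.sum(np.array(sims) >= obs_score)) / 200
        print("most extreme window:", window, " p =", p_value)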

  13. Expanding the "CBAL"™ Mathematics Assessments to Elementary Grades: The Development of a Competency Model and a Rational Number Learning Progression. Research Report. ETS RR-14-08

    ERIC Educational Resources Information Center

    Arieli-Attali, Meirav; Cayton-Hodges, Gabrielle

    2014-01-01

    Prior work on the "CBAL"™ mathematics competency model resulted in an initial competency model for middle school grades with several learning progressions (LPs) that elaborate central ideas in the competency model and provide a basis for connecting summative and formative assessment. In the current project, we created a competency model…

  14. A smoothed stochastic earthquake rate model considering seismicity and fault moment release for Europe

    NASA Astrophysics Data System (ADS)

    Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.

    2014-08-01

    We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-value) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project 'Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM) using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and found statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model to serve for long-term forecasting on timescales of years to decades for the European region.
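
    One concrete piece of such a pipeline, the maximum likelihood fit of Gutenberg-Richter activity rates above a completeness magnitude, can be sketched with the standard Aki/Utsu estimator. The catalogue below is simulated and completeness is reduced to a single threshold, so this illustrates the estimator rather than the model's actual rate calculation.

        # Sketch: maximum likelihood Gutenberg-Richter fit (Aki/Utsu estimator) for a
        # simulated catalogue above a single completeness magnitude Mc.
        import numpy as np

        rng = np.random.default_rng(10)
        b_true, Mc, years, n_events, binwidth = 1.0, 3.0, 30.0, 2000, 0.1
        # Above Mc, magnitudes follow an exponential law with rate beta = b * ln(10).
        mags = Mc + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=n_events)
        mags = np.round(mags / binwidth) * binwidth        # catalogue magnitudes are binned

        # Aki (1965) estimator with Utsu's correction for binned magnitudes:
        b_hat = np.log10(np.e) / (mags.mean() - (Mc - binwidth / 2))
        # Annual a-value from the observed rate of events with M >= Mc:
        a_hat = np.log10(len(mags) / years) + b_hat * Mc
        print(f"b = {b_hat:.2f}; expected annual rate of M >= 5 events: {10 ** (a_hat - b_hat * 5.0):.3f}")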

  15. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
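
    A single model of this general form can be sketched as a maximum likelihood logit fit of a summer low-flow indicator on a winter streamflow predictor. The data, threshold, and coefficients below are hypothetical stand-ins for the report's basin-specific equations.

        # Sketch: logistic regression of a summer drought-flow indicator on winter flow.
        # Data, threshold and coefficients are hypothetical, not the report's equations.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(11)
        n_years = 60
        winter_flow = rng.lognormal(mean=3.0, sigma=0.4, size=n_years)    # e.g. mean January flow
        p_low = 1 / (1 + np.exp(-(4.0 - 0.2 * winter_flow)))              # wetter winters, lower risk
        summer_below_threshold = rng.binomial(1, p_low)                   # 1 = August flow below threshold

        X = sm.add_constant(pd.DataFrame({"winter_flow": winter_flow}))
        fit = sm.Logit(summer_below_threshold, X).fit(disp=0)
        print(fit.params)
        new = sm.add_constant(pd.DataFrame({"winter_flow": [10.0, 30.0]}), has_constant="add")
        print("predicted drought-flow probabilities:", np.round(fit.predict(new), 2))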

  16. The dorsal medial frontal cortex is sensitive to time on task, not response conflict or error likelihood.

    PubMed

    Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy

    2011-07-15

    The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Say More and Be More Coherent: How Text Elaboration and Cohesion Can Increase Writing Quality

    ERIC Educational Resources Information Center

    Crossley, Scott A.; McNamara, Danielle S.

    2016-01-01

    This study examines links between essay quality and text elaboration and text cohesion. For this study, 35 students wrote two essays (on two different prompts) and for each, were given 15 minutes to elaborate on their original text. An expert in discourse comprehension then modified the original and elaborated essays to increase cohesion,…

  18. The Effects of Guided Elaboration in a CSCL Programme on the Learning Outcomes of Primary School Students from Dutch and Immigrant Families

    ERIC Educational Resources Information Center

    Prinsen, Fleur Ruth; Terwel, Jan; Zijlstra, Bonne J. H.; Volman, Monique M. L.

    2013-01-01

    This study examined the effects of guided elaboration on students' learning outcomes in a computer-supported collaborative learning (CSCL) environment. The programme provided students with feedback on their elaborations, and students reflected on this feedback. It was expected that students in the experimental (elaboration) programme would show…

  19. Elaborate analysis and design of filter-bank-based sensing for wideband cognitive radios

    NASA Astrophysics Data System (ADS)

    Maliatsos, Konstantinos; Adamis, Athanasios; Kanatas, Athanasios G.

    2014-12-01

    The successful operation of a cognitive radio system strongly depends on its ability to sense the radio environment. With the use of spectrum sensing algorithms, the cognitive radio is required to detect co-existing licensed primary transmissions and to protect them from interference. This paper focuses on filter-bank-based sensing and provides a solid theoretical background for the design of these detectors. Optimum detectors based on the Neyman-Pearson theorem are developed for uniform discrete Fourier transform (DFT) and modified DFT filter banks with root-Nyquist filters. The proposed sensing framework does not require frequency alignment between the filter bank of the sensor and the primary signal. Each wideband primary channel is spanned and monitored by several sensor subchannels that analyse it in narrowband signals. Filter-bank-based sensing is proved to be robust and efficient under coloured noise. Moreover, the performance of the weighted energy detector as a sensing technique is evaluated. Finally, based on the Locally Most Powerful and the Generalized Likelihood Ratio test, real-world sensing algorithms that do not require a priori knowledge are proposed and tested.
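
    For the per-subchannel decision that such detectors ultimately reduce to, a simple Neyman-Pearson energy detector illustrates how the threshold is set from a target false-alarm probability and how detection probability varies with SNR. The sketch below assumes complex Gaussian noise of known variance and is not the paper's filter-bank formulation.

        # Sketch: Neyman-Pearson energy detection on one subchannel with known noise
        # variance; the test statistic over N complex samples is chi-square with 2N
        # degrees of freedom under the noise-only hypothesis.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(12)
        N, sigma2, pfa = 64, 1.0, 0.01             # samples, noise variance, target false-alarm rate
        tau = chi2.ppf(1 - pfa, 2 * N)             # threshold: P(T > tau | noise only) = pfa

        def detection_rate(snr_db, trials=5000):
            snr = 10 ** (snr_db / 10)
            shape = (trials, N)
            s = np.sqrt(sigma2 * snr / 2) * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
            w = np.sqrt(sigma2 / 2) * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
            T = 2 * np.sum(np.abs(s + w) ** 2, axis=1) / sigma2
            return np.mean(T > tau)

        for snr_db in (-10, -5, 0):
            print(f"SNR {snr_db:4d} dB  ->  Pd = {detection_rate(snr_db):.3f}")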

  20. The judgement process in evidence-based medicine and health technology assessment.

    PubMed

    Kelly, Michael P; Moore, Tessa A

    2012-02-01

    This article describes the judgements used to interpret evidence in evidence-based medicine (EBM) and health technology assessment (HTA). It outlines the methods and processes of EBM and HTA. Respectively, EBM and HTA are approaches to medical clinical decision making and efficient allocation of scarce health resources. At the heart of both is a concern to review and synthesise evidence, especially evidence derived from randomised controlled trials (RCTs) of clinical effectiveness. Both approaches are driven by a desire to eliminate, or at least reduce, bias. The hierarchy of evidence, which is used as an indicator of the likelihood of bias, features heavily in the process and methods of EBM and HTA. The epistemological underpinnings of EBM and HTA are explored with particular reference to the distinction between rationalism and empiricism, developed by the philosopher David Hume and elaborated by Immanuel Kant in the Critique of Pure Reason. The importance of Humean and Kantian principles for understanding the projects of EBM and HTA is considered, and the ways in which decisions are made in both, within a judgemental framework originally outlined by Kant, are explored.
