Computed Tomography Screening for Lung Cancer in the National Lung Screening Trial
Black, William C.
2016-01-01
The National Lung Screening Trial (NLST) demonstrated that screening with low-dose CT versus chest radiography reduced lung cancer mortality by 16% to 20%. More recently, a cost-effectiveness analysis (CEA) of CT screening for lung cancer versus no screening in the NLST was performed. The CEA conformed to the reference-case recommendations of the US Panel on Cost-Effectiveness in Health and Medicine, including the use of the societal perspective and an annual discount rate of 3%. The CEA was based on several important assumptions. In this paper, I review the methods and assumptions used to obtain the base case estimate of $81,000 per quality-adjusted life-year gained. In addition, I show how this estimate varied widely among different subsets and when some of the base case assumptions were changed and speculate on the cost-effectiveness of CT screening for lung cancer outside the NLST. PMID:25635704
Base Case v.5.15 Documentation Supplement to Support the Clean Power Plan
Learn about several modeling assumptions used as part of EPA's analysis of the Clean Power Plan (Carbon Pollution Guidelines for Existing Electric Generating Units) using the EPA Base Case v.5.15 of the Integrated Planning Model (IPM).
Comparing Efficiency Projections (released in AEO2010)
2010-01-01
Realized improvements in energy efficiency generally rely on a combination of technology and economics. The figure below illustrates the role of technology assumptions in the Annual Energy Outlook 2010 projections for energy efficiency in the residential and commercial buildings sector. Projected energy consumption in the Reference case is compared with projections in the Best Available Technology, High Technology, and 2009 Technology cases and an estimate based on an assumption of no change in efficiency for building shells and equipment.
Martins, Cesário L; Garly, May-Lill; Rodrigues, Amabelia; Benn, Christine S; Whittle, Hilton
2012-01-01
Objective The current policy of measles vaccination at 9 months of age was decided in the mid-1970s. The policy was not tested for impact on child survival but was based on studies of seroconversion after measles vaccination at different ages. The authors examined the empirical evidence for the six underlying assumptions. Design Secondary analysis. Data sources and methods These assumptions have not been research issues. Hence, the authors examined case reports to assess the empirical evidence for the original assumptions. The authors used existing reviews, and in December 2011, the authors made a PubMed search for relevant papers. The title and abstract of papers in English, French, Portuguese, Spanish, German and Scandinavian languages were assessed to ascertain whether the paper was potentially relevant. Based on cumulative measles incidence figures, the authors calculated how many measles cases had been prevented assuming everybody was vaccinated at a specific age, how many ‘vaccine failures’ would occur after the age of vaccination and how many cases would occur before the specific age of vaccination. In the combined analyses of several studies, the authors used the Mantel–Haenszel weighted RR stratifying for study or age groups to estimate common trends. Setting and participants African community studies of measles infection. Primary and secondary outcomes Consistency between assumptions and empirical evidence and the predicted effect on mortality. Results In retrospect, the major assumptions were based on false premises. First, in the single study examining this point, seronegative vaccinated children had considerable protection against measles infection. Second, in 18 community studies, vaccinated measles cases (‘vaccine failures’) had threefold lower case death than unvaccinated cases. Third, in 24 community studies, infants had twofold higher case death than older measles cases. Fourth, the only study examining the assumption that ‘vaccine failures’ lead to lack of confidence found the opposite because vaccinated children had milder measles infection. Fifth, a one-dose policy was recommended. However, the two randomised trials of early two-dose measles vaccination compared with one-dose vaccination found significantly reduced mortality until 3 years of age. Thus, current evidence suggests that the optimal age for a single dose of measles vaccine should have been 6 or 7 months resulting in fewer severe unvaccinated cases among infants but more mild ‘vaccine failures’ among older children. Furthermore, the two-dose trials indicate that measles vaccine reduces mortality from other causes than measles infection. Conclusions Many lives may have been lost by not determining the optimal age of measles vaccination. Since seroconversion continues to be the basis for policy, the current recommendation is to increase the age of measles vaccination to 12 months in countries with limited measles transmission. This policy may lead to an increase in child mortality. PMID:22815465
Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M
2013-09-01
Health impact assessment (HIA) is often used to determine ex ante the health impact of an environmental policy or an environmental intervention. Underpinning any HIA is the framing assumption, which defines the causal pathways mapping environmental exposures to health outcomes. The sensitivity of the HIA to the framing assumptions is often ignored. A novel method based on fuzzy cognitive map (FCM) is developed to quantify the framing assumptions in the assessment stage of a HIA, and is then applied to a housing intervention (tightening insulation) as a case-study. Framing assumptions of the case-study were identified through a literature search of Ovid Medline (1948-2011). The FCM approach was used to identify the key variables that have the most influence in a HIA. Changes in air-tightness, ventilation, indoor air quality and mould/humidity have been identified as having the most influence on health. The FCM approach is widely applicable and can be used to inform the formulation of the framing assumptions in any quantitative HIA of environmental interventions. We argue that it is necessary to explore and quantify framing assumptions prior to conducting a detailed quantitative HIA during the assessment stage. Copyright © 2013 Elsevier Ltd. All rights reserved.
The Use (and Misuse) of PISA in Guiding Policy Reform: The Case of Spain
ERIC Educational Resources Information Center
Choi, Álvaro; Jerrim, John
2016-01-01
In 2013 Spain introduced a series of educational reforms explicitly inspired by the Programme for International Student Assessment (PISA) 2012 results. These reforms were mainly implemented in secondary education--based upon the assumption that this is where Spain's educational problems lie. This paper questions this assumption by attempting to…
On Cognitive Constraints and Learning Progressions: The Case of "Structure of Matter"
ERIC Educational Resources Information Center
Talanquer, Vicente
2009-01-01
Based on the analysis of available research on students' alternative conceptions about the particulate nature of matter, we identified basic implicit assumptions that seem to constrain students' ideas and reasoning on this topic at various learning stages. Although many of these assumptions are interrelated, some of them seem to change or…
On equations of motion of a nonlinear hydroelastic structure
NASA Astrophysics Data System (ADS)
Plotnikov, P. I.; Kuznetsov, I. V.
2008-07-01
Formal derivation of equations of a nonlinear hydroelastic structure, which is a volume of an ideal incompressible fluid covered by a shell, is proposed. The study is based on two assumptions. The first assumption implies that the energy stored in the shell is completely determined by the mean curvature and by the elementary area. In a three-dimensional case, the energy stored in the shell is chosen in the form of the Willmore functional. In a two-dimensional case, a more generic form of the functional can be considered. The second assumption implies that the equations of motion have a Hamiltonian structure and can be obtained from the Lagrangian variational principle. In a two-dimensional case, a condition for the hydroelastic structure is derived, which relates the external pressure and the curvature of the elastic shell.
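For orientation, the Willmore functional mentioned above is, up to normalization conventions that vary between authors, the integral of the squared mean curvature over the shell surface; a schematic form (not taken verbatim from the paper) is:

```latex
W(\Sigma) \;=\; \int_{\Sigma} H^{2}\, dA ,
```

where H is the mean curvature of the surface Σ and dA the area element; some conventions include a factor 1/4 or subtract the Gaussian curvature.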
Deriving Safety Cases from Machine-Generated Proofs
NASA Technical Reports Server (NTRS)
Basir, Nurlida; Fischer, Bernd; Denney, Ewen
2009-01-01
Proofs provide detailed justification for the validity of claims and are widely used in formal software development methods. However, they are often complex and difficult to understand, because they use machine-oriented formalisms; they may also be based on assumptions that are not justified. This causes concerns about the trustworthiness of using formal proofs as arguments in safety-critical applications. Here, we present an approach to develop safety cases that correspond to formal proofs found by automated theorem provers and reveal the underlying argumentation structure and top-level assumptions. We concentrate on natural deduction proofs and show how to construct the safety cases by covering the proof tree with corresponding safety case fragments.
Recovery after treatment and sensitivity to base rate.
Doctor, J N
1999-04-01
Accurate classification of patients as having recovered after psychotherapy depends largely on the base rate of such recovery. This article presents methods for classifying participants as recovered after therapy. The approach described here considers base rate in the statistical model. These methods can be applied to psychotherapy outcome data for 2 purposes: (a) to determine the robustness of a data set to differing base-rate assumptions and (b) to formulate an appropriate cutoff that is beyond the range of cases that are not robust to plausible base-rate assumptions. Discussion addresses a fundamental premise underlying the study of recovery after psychotherapy.
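As a hedged illustration of why base rate matters when classifying patients as recovered, the sketch below applies Bayes' theorem to hypothetical sensitivity, specificity, and base-rate values (all numbers are illustrative, not taken from the article):

```python
def prob_truly_recovered(sensitivity: float, specificity: float, base_rate: float) -> float:
    """P(truly recovered | classified as recovered), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# Same cutoff (same sensitivity/specificity), different base rates of recovery:
for base_rate in (0.2, 0.5, 0.8):
    ppv = prob_truly_recovered(sensitivity=0.80, specificity=0.85, base_rate=base_rate)
    print(f"base rate {base_rate:.0%} -> P(recovered | classified recovered) = {ppv:.2f}")
```

Under these illustrative numbers the same cutoff yields posterior probabilities of roughly 0.57, 0.84, and 0.96, which is the sense in which a classification can fail to be robust to plausible base-rate assumptions.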
Idiographic versus Nomothetic Approaches to Research in Organizations.
1981-07-01
alternative methodologic assumption based on intensive examination of one or a few cases under the theoretic assumption of dynamic interactionism is, with... phenomenological studies the researcher may not enter the actual setting but instead examines symbolic meanings as they constitute themselves in...
Search algorithm complexity modeling with application to image alignment and matching
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2014-05-01
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.
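As a minimal sketch of a penetration-rate computation (illustrative only; it assumes equal-sized bins, a search that stops at the bin containing the true match, and prior probabilities that are not necessarily those derived in the paper):

```python
import numpy as np

# Hypothetical prior match probabilities for N hypothesis bins,
# already ranked in descending order by the prioritized grid search.
match_prob = np.array([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
match_prob = match_prob / match_prob.sum()          # normalize
N = len(match_prob)

# Fraction of the hypothesis space searched if the match sits in bin k (1-based),
# assuming equal-sized bins searched in rank order.
searched_fraction = np.arange(1, N + 1) / N

# Penetration rate = expected fraction of the space that must be searched.
penetration_rate = float(np.sum(match_prob * searched_fraction))
print(f"expected penetration rate: {penetration_rate:.3f}")
```

With a uniform prior the expected penetration rate is (N+1)/(2N), i.e. about one half, so any useful prioritization should push it well below that.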
INCORPORATING NONCHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS
The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, stochastic walk model, and simplified trajectory calculations with pathlines. Post processing implementation options that were evaluated included single passage and repeated passages stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
Enhancing Cultural Adaptation through Friendship Training: A Single-Case Study.
ERIC Educational Resources Information Center
Liu, Yi-Ching; Baker, Stanley B.
1993-01-01
Four-year-old girl from mainland China experienced culture shock when attending American university day-care center. Counseling intern from Taiwan designed friendship training program based on assumptions concerning adaptation, acculturation, and peer relationships. Evaluated as intensive single-case study, findings indicated the program may be…
Nishiura, Hiroshi; Inaba, Hisashi
2011-03-07
Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii, and the hazard rate of infection for the latter assumption was retrieved, in Hawaii, from local outbreak data. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days according to different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wootten, A.; Dixon, K. W.; Lanzante, J. R.; Mcpherson, R. A.
2017-12-01
Empirical statistical downscaling (ESD) approaches attempt to refine global climate model (GCM) information via statistical relationships between observations and GCM simulations. The aim of such downscaling efforts is to create added-value climate projections by adding finer spatial detail and reducing biases. The results of statistical downscaling exercises are often used in impact assessments under the assumption that past performance provides an indicator of future results. Given prior research describing the danger of this assumption with regard to temperature, this study expands the perfect model experimental design from previous case studies to test the stationarity assumption with respect to precipitation. Assuming stationarity implies that the performance of ESD methods is similar between the future projections and the historical training period. Case study results from four quantile-mapping-based ESD methods demonstrate violations of the stationarity assumption for both central tendency and extremes of precipitation. These violations vary geographically and seasonally. For the four ESD methods tested, the greatest challenges for downscaling of daily total precipitation projections occur in regions with limited precipitation and for extremes of precipitation along Southeast coastal regions. We conclude with a discussion of future expansion of the perfect model experimental design and the implications for improving ESD methods and providing guidance on the use of ESD techniques for impact assessments and decision-support.
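A minimal sketch of the kind of quantile-mapping transfer function such ESD methods build, using synthetic data; the stationarity assumption discussed above is precisely that this mapping, fitted on the historical period, remains valid when applied to future model output (variable names and distributions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily-precipitation-like samples (mm/day); not real GCM or station data.
obs_hist   = rng.gamma(shape=0.8, scale=6.0,  size=3000)   # observations, training period
model_hist = rng.gamma(shape=0.8, scale=9.0,  size=3000)   # biased model, training period
model_fut  = rng.gamma(shape=0.9, scale=10.0, size=3000)   # model output, future period

def quantile_map(x, model_train, obs_train):
    """Map values through the model's empirical CDF, then back through the observed quantiles."""
    p = np.searchsorted(np.sort(model_train), x) / len(model_train)
    return np.quantile(obs_train, np.clip(p, 0.0, 1.0))

downscaled_fut = quantile_map(model_fut, model_hist, obs_hist)
print("raw future mean:", model_fut.mean(), " downscaled future mean:", downscaled_fut.mean())
```

A perfect-model test of stationarity amounts to checking whether the same mapping still removes the bias when the "future" model output is drawn from a shifted distribution.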
Adaptive Filtering Using Recurrent Neural Networks
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.
2005-01-01
A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
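For contrast with the neural-network filter described above, here is a minimal sketch of one predict/update cycle of the standard linear Kalman filter, which makes exactly the linearity and zero-mean Gaussian white-noise assumptions the article relaxes (the matrices are illustrative placeholders, not a model from the article):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (linear, Gaussian) Kalman filter."""
    # Predict: linear process model with additive zero-mean Gaussian noise Q.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: linear measurement model with additive zero-mean Gaussian noise R.
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative 2-state system observed through its first component.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([1.2]), F=F, H=H, Q=Q, R=R)
print(x)
```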
Phadnis, Milind A; Wetmore, James B; Mayo, Matthew S
2017-11-20
Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment. Copyright © 2017 John Wiley & Sons, Ltd.
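As a hedged sketch of what the proportional-time (time-ratio) assumption means in practice, the snippet below scales generalized-gamma control-arm survival times by a constant ratio; the shape and scale parameters and the ratio are hypothetical, not values from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical generalized gamma survival times (months) for a control arm.
control_dist = stats.gengamma(a=2.0, c=1.2, scale=12.0)
t_control = control_dist.rvs(size=5000, random_state=rng)

# Proportional time: every quantile of the treatment arm is lambda_ times longer.
lambda_ = 1.5
t_treatment = lambda_ * t_control

print("median control  :", round(float(np.median(t_control)), 2))
print("median treatment:", round(float(np.median(t_treatment)), 2))
print("empirical time ratio:", round(float(np.median(t_treatment) / np.median(t_control)), 2))
```

Unlike a proportional-hazards effect, this acts directly on patient longevity, which is why a design built on the time ratio can need far fewer patients when the control survival curve drops quickly.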
Description logic-based methods for auditing frame-based medical terminological systems.
Cornet, Ronald; Abu-Hanna, Ameen
2005-07-01
Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. This is not only important for the management of TSs but also for providing their users with confidence about the reliability of their contents. Formal methods have the potential to play an important role in the audit of TSs, although there are few empirical studies to assess the benefits of using these methods. In this paper we propose a method based on description logics (DLs) for the audit of TSs. This method is based on the migration of the medical TS from a frame-based representation to a DL-based one. Our method is characterized by a process in which initially stringent assumptions are made about concept definitions. The assumptions allow the detection of concepts and relations that might comprise a source of logical inconsistency. If the assumptions hold then definitions are to be altered to eliminate the inconsistency, otherwise the assumptions are revised. In order to demonstrate the utility of the approach in a real-world case study we audit a TS in the intensive care domain and discuss decisions pertaining to building DL-based representations. This case study demonstrates that certain types of inconsistencies can indeed be detected by applying the method to a medical terminological system. The added value of the method described in this paper is that it provides a means to evaluate the compliance to a number of common modeling principles in a formal manner. The proposed method reveals potential modeling inconsistencies, helping to audit and (if possible) improve the medical TS. In this way, it contributes to providing confidence in the contents of the terminological system.
Slotted Waveguide and Antenna Study for HPM and RF Applications
2017-07-25
parallel metal plates separated by 1 mm, depending on the particular characteristics of the case (waveguide dimensions, SEY (secondary electron yield...waveguide antenna, shown in Figure 23, was studied. A new feeding network based on a composite right-hand/left-hand (CRLH) waveguide structure was...approach is based on the assumption that the external coupling between the array elements is negligible, which is acceptable in the case of the
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = .013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
Increased Reliability for Single-Case Research Results: Is the Bootstrap the Answer?
ERIC Educational Resources Information Center
Parker, Richard I.
2006-01-01
There is need for objective and reliable single-case research (SCR) results in the movement toward evidence-based interventions (EBI), for inclusion in meta-analyses, and for funding accountability in clinical contexts. Yet SCR deals with data that often do not conform to parametric data assumptions and that yield results of low reliability. A…
School Finance Litigation: The Use of Data Analysis.
ERIC Educational Resources Information Center
Moskowitz, Jay; Sherman, Joel
School finance cases are relying increasingly on data analysis to show inequities. Such cases are based on the assumption that some state school finance systems are failing to achieve fiscal and educational equality. Data analysis can be used to show such things as the use of wealth as the primary determinant of a certain school district's…
Deriving Safety Cases from Automatically Constructed Proofs
NASA Technical Reports Server (NTRS)
Basir, Nurlida; Denney, Ewen; Fischer, Bernd
2009-01-01
Formal proofs provide detailed justification for the validity of claims and are widely used in formal software development methods. However, they are often complex and difficult to understand, because the formalism in which they are constructed and encoded is usually machine-oriented, and they may also be based on assumptions that are not justified. This causes concerns about the trustworthiness of using formal proofs as arguments in safety-critical applications. Here, we present an approach to develop safety cases that correspond to formal proofs found by automated theorem provers and reveal the underlying argumentation structure and top-level assumptions. We concentrate on natural deduction style proofs, which are closer to human reasoning than resolution proofs, and show how to construct the safety cases by covering the natural deduction proof tree with corresponding safety case fragments. We also abstract away logical book-keeping steps, which reduces the size of the constructed safety cases. We show how the approach can be applied to the proofs found by the Muscadet prover.
Agent Architectures for Compliance
NASA Astrophysics Data System (ADS)
Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua
A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.
Bickel, David R.; Montazeri, Zahra; Hsieh, Pei-Chun; Beatty, Mary; Lawit, Shai J.; Bate, Nicholas J.
2009-01-01
Motivation: Measurements of gene expression over time enable the reconstruction of transcriptional networks. However, Bayesian networks and many other current reconstruction methods rely on assumptions that conflict with the differential equations that describe transcriptional kinetics. Practical approximations of kinetic models would enable inferring causal relationships between genes from expression data of microarray, tag-based and conventional platforms, but conclusions are sensitive to the assumptions made. Results: The representation of a sufficiently large portion of genome enables computation of an upper bound on how much confidence one may place in influences between genes on the basis of expression data. Information about which genes encode transcription factors is not necessary but may be incorporated if available. The methodology is generalized to cover cases in which expression measurements are missing for many of the genes that might control the transcription of the genes of interest. The assumption that the gene expression level is roughly proportional to the rate of translation led to better empirical performance than did either the assumption that the gene expression level is roughly proportional to the protein level or the Bayesian model average of both assumptions. Availability: http://www.oisb.ca points to R code implementing the methods (R Development Core Team 2004). Contact: dbickel@uottawa.ca Supplementary information: http://www.davidbickel.com PMID:19218351
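For context, the transcriptional kinetics the authors refer to are typically written as coupled mRNA/protein differential equations; a generic, schematic form (not taken verbatim from the paper) is:

```latex
\frac{dm_i}{dt} = f_i\big(p_1(t),\dots,p_n(t)\big) - \gamma_i\, m_i(t), \qquad
\frac{dp_i}{dt} = r_i\, m_i(t) - \delta_i\, p_i(t),
```

where m_i and p_i are the mRNA and protein levels of gene i, f_i encodes regulation by transcription factors, and γ_i, r_i, δ_i are degradation and translation rates. The two assumptions compared above correspond to reading the measured expression level as a proxy for the translation rate r_i m_i versus for the protein level p_i itself.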
CDMBE: A Case Description Model Based on Evidence
Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing
2015-01-01
By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), which is suitable for the continental law system, is proposed to describe criminal cases. The logic of the model adopts credibility logical reasoning and performs evidence-based reasoning quantitatively. To be consistent with practical inference rules, five types of relationship and a set of rules are defined to calculate the credibility of assumptions based on the credibility and supportability of the related evidence. Experiments show that the model can capture users' ideas in a figure, and the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006
Tunis, Sandra L
2011-11-01
Canadian patients, healthcare providers and payers share interest in assessing the value of self-monitoring of blood glucose (SMBG) for individuals with type 2 diabetes but not on insulin. Using the UKPDS (UK Prospective Diabetes Study) model, the Canadian Optimal Prescribing and Utilization Service (COMPUS) conducted an SMBG cost-effectiveness analysis. Based on the results, COMPUS does not recommend routine strip use for most adults with type 2 diabetes who are not on insulin. Cost-effectiveness studies require many assumptions regarding cohort, clinical effect, complication costs, etc. The COMPUS evaluation included several conservative assumptions that negatively impacted SMBG cost effectiveness. Current objectives were to (i) review key, impactful COMPUS assumptions; (ii) illustrate how alternative inputs can lead to more favourable results for SMBG cost effectiveness; and (iii) provide recommendations for assessing its long-term value. A summary of COMPUS methods and results was followed by a review of assumptions (for trial-based glycosylated haemoglobin [HbA(1c)] effect, patient characteristics, costs, simulation pathway) and their potential impact. The UKPDS model was used for a 40-year cost-effectiveness analysis of SMBG (1.29 strips per day) versus no SMBG in the Canadian payer setting. COMPUS assumptions for patient characteristics (e.g. HbA(1c) 8.4%), SMBG HbA(1c) advantage (-0.25%) and costs were retained. As with the COMPUS analysis, UKPDS HbA(1c) decay curves were incorporated into SMBG and no-SMBG pathways. An important difference was that SMBG HbA(1c) benefits in the current study could extend beyond the initial simulation period. Sensitivity analyses examined SMBG HbA(1c) advantage, adherence, complication history and cost inputs. Outcomes (discounted at 5%) included QALYs, complication rates, total costs (year 2008 values) and incremental cost-effectiveness ratios (ICERs). The base-case ICER was $Can63 664 per QALY gained; approximately 56% of the COMPUS base-case ICER. SMBG was associated with modest risk reductions (0.10-0.70%) for six of seven complications. Assuming an SMBG advantage of -0.30% decreased the current base-case ICER by over $Can10 000 per QALY gained. With adherence of 66% and 87%, ICERs were (respectively) $Can39 231 and $Can54 349 per QALY gained. Incorporating a more representative complication history and 15% complication cost increase resulted in an ICER of $Can49 743 per QALY gained. These results underscore the importance of modelling assumptions regarding the duration of HbA(1c) effect. The current study shares several COMPUS limitations relating to the UKPDS model being designed for newly diagnosed patients, and to randomized controlled trial monitoring rates. Neither study explicitly examined the impact of varying the duration of initial HbA(1c) effects, or of medication or other treatment changes. Because the COMPUS research will potentially influence clinical practice and reimbursement policy in Canada, understanding the impact of assumptions on cost-effectiveness results seems especially important. Demonstrating that COMPUS ICERs were greatly reduced through variations in a small number of inputs may encourage additional clinical research designed to measure SMBG effects within the context of optimal disease management. It may also encourage additional economic evaluations that incorporate lessons learned and best practices for assessing the overall value of SMBG for type 2 diabetes in insulin-naive patients.
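For readers unfamiliar with the ICER arithmetic that drives these results, a minimal sketch with purely hypothetical costs and QALYs (not the COMPUS or UKPDS outputs) is:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical discounted lifetime values per patient (illustrative only).
ratio = icer(cost_new=12_500.0, cost_old=11_200.0, qaly_new=8.42, qaly_old=8.40)
print(f"ICER = ${ratio:,.0f} per QALY gained")
```

Because the denominator is a small QALY difference, modest changes to the assumed HbA(1c) advantage or its duration can move the ICER by tens of thousands of dollars, which is the sensitivity the study illustrates.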
Impact of unseen assumptions on communication of atmospheric carbon mitigation options
NASA Astrophysics Data System (ADS)
Elliot, T. R.; Celia, M. A.; Court, B.
2010-12-01
With the rapid access and dissemination of information made available through online and digital pathways, there is need for a concurrent openness and transparency in communication of scientific investigation. Even with open communication it is essential that the scientific community continue to provide impartial result-driven information. An unknown factor in climate literacy is the influence of an impartial presentation of scientific investigation that has utilized biased base-assumptions. A formal publication appendix, and additional digital material, provides active investigators a suitable framework and ancillary material to make informed statements weighted by assumptions made in a study. However, informal media and rapid communiqués rarely make such investigatory attempts, often citing headline or key phrasing within a written work. This presentation is focused on Geologic Carbon Sequestration (GCS) as a proxy for the wider field of climate science communication, wherein we primarily investigate recent publications in GCS literature that produce scenario outcomes using apparently biased pro- or con- assumptions. A general review of scenario economics, capture process efficacy and specific examination of sequestration site assumptions and processes, reveals an apparent misrepresentation of what we consider to be a base-case GCS system. The authors demonstrate the influence of the apparent bias in primary assumptions on results from commonly referenced subsurface hydrology models. By use of moderate semi-analytical model simplification and Monte Carlo analysis of outcomes, we can establish the likely reality of any GCS scenario within a pragmatic middle ground. Secondarily, we review the development of publically available web-based computational tools and recent workshops where we presented interactive educational opportunities for public and institutional participants, with the goal of base-assumption awareness playing a central role. Through a series of interactive ‘what if’ scenarios, workshop participants were able to customize the models, which continue to be available from the Princeton University Subsurface Hydrology Research Group, and develop a better comprehension of subsurface factors contributing to GCS. Considering that the models are customizable, a simplified mock-up of regional GCS scenarios can be developed, which provides a possible pathway for informal, industrial, scientific or government communication of GCS concepts and likely scenarios. We believe continued availability, customizable scenarios, and simplifying assumptions are an exemplary means to communicate the possible outcome of CO2 sequestration projects; the associated risk; and, of no small importance, the consequences of base assumptions on predicted outcome.
From Earth to Space--Advertising Films Created in a Computer-Based Primary School Task
ERIC Educational Resources Information Center
Öman, Anne
2017-01-01
Today, teachers orchestrate computer-based tasks in software applications in Swedish primary schools. Meaning is made through various modes, and multimodal perspectives on literacy have the basic assumption that meaning is made through many representational and communicational resources. The case study presented in this paper has analysed pupils'…
Malley, Juliette; Hancock, Ruth; Murphy, Mike; Adams, John; Wittenberg, Raphael; Comas-Herrera, Adelina; Curry, Chris; King, Derek; James, Sean; Morciano, Marcello; Pickard, Linda
2011-01-01
The aim of this analysis is to examine the effect of different assumptions about future trends in life expectancy (LE) on the sustainability of the pensions and long-term care (LTC) systems. The context is the continuing debate in England about the reform of state pensions and the reform of the system for financing care and support. Macro and micro simulation models are used to make projections of future public expenditure on LTC services for older people and on state pensions and related benefits, making alternative assumptions on increases in future LE. The projections cover the period 2007 to 2032 and relate to England. Results are presented for a base case and for specified variants to the base case. The base case assumes that the number of older people by age and gender rises in line with the Office for National Statistics' principal 2006-based population projection for England. It also assumes no change in disability rates, no changes in patterns of care, no changes in policy and rises in unit care costs and real average earnings by 2 per cent per year. Under these assumptions public expenditure on pensions and related benefits is projected to rise from 4.7 per cent of Gross Domestic Product (GDP) in 2007 to 6.2 per cent of GDP in 2032 and public expenditure on LTC from 0.9 per cent of GDP in 2007 to 1.6 per cent of GDP in 2032. Under a very high LE variant to the GAD principal projection, however, public expenditure on pensions and related benefits is projected to reach 6.8 per cent of GDP in 2032 and public expenditure on LTC 1.7 per cent of GDP in 2032. Policymakers developing reform proposals need to recognise that, since future LE is inevitably uncertain and since variant assumptions about future LE significantly affect expenditure projections, there is a degree of uncertainty about the likely impact of demographic pressures on future public expenditure on pensions and LTC.
Brieger, W R; Oke, G A; Otusanya, S; Adesope, A; Tijanu, J; Banjoko, M
1997-01-01
Guinea-worm eradication has been progressing internationally and efforts at case containment have begun in most endemic countries. Case containment rests on the assumption that in previous phases of eradication most if not all endemic settlements have been identified. Experiences in the predominantly Yoruba communities of Ifeloju Local Government Area (LGA) in Oyo State, Nigeria, however, have shown that the settlements of ethnic minority groups may be overlooked during initial case searches and subsequent programmes of village-based reporting. The migrant cattle-herding Fulani are found throughout the savannah and sahel regions of West Africa. Nearly 3000 live in 60 settlements in Ifeloju. An intensive case search identified 57 cases in 15 settlements. The assumption that village-based health workers (VBHWs) in neighbouring Yoruba farm hamlets would identify cases in the Fulani settlements, known as gaa, proved false. Only 5 endemic gaa were located next to a Yoruba hamlet that had a VBHW, and even then the VBHW did not identify and report the cases in the gaa. Efforts to recruit VBHWs for each endemic gaa are recommended, but only after LGA staff improve the poor relationship between themselves and the Fulani, whom they view as outsiders. The results also imply the need for Guinea worm eradication staff in neighbouring LGAs, states and countries to search actively for the disease among their minority populations.
Decay of solutions of the wave equation with arbitrary localized nonlinear damping
NASA Astrophysics Data System (ADS)
Bellassoued, Mourad
We study the decay rate of solutions of the initial-boundary value problem for the wave equation, governed by localized nonlinear dissipation and without any assumption on the dynamics (i.e., the geometric control condition is not satisfied). We treat separately the autonomous and the non-autonomous cases. Provided the initial data are regular, and without any assumption on an observation subdomain, we prove that the energy decays at least as fast as the logarithm of time. Our result generalizes Lebeau's result (in: A. Boutet de Monvel, V. Marchenko (Eds.), Algebraic and Geometric Methods in Mathematical Physics, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996, pp. 73) in the autonomous case and Nakao's work (Adv. Math. Sci. Appl. 7 (1) (1997) 317) in the non-autonomous case. To prove this result we use a new method based on the Fourier-Bros-Iagolnitzer (FBI) transform.
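Schematically, and hedging on the exact constants and norms (which depend on the domain, the damping, and the regularity assumed), a logarithmic decay estimate of the kind referred to above takes the form:

```latex
E(u,t) \;\le\; \frac{C_k}{\big(\log(2+t)\big)^{2k}}\,
\big\| (u_0,u_1) \big\|^{2}_{D(A^{k})}, \qquad t > 0,
```

for initial data (u_0, u_1) in the domain of a power A^k of the generator; the point is that regularity of the data is traded for a slow but uniform decay rate without any geometric control condition.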
Lowry, Svetlana Z; Patterson, Emily S
2014-01-01
Background There is growing recognition that design flaws in health information technology (HIT) lead to increased cognitive work, impact workflows, and produce other undesirable user experiences that contribute to usability issues and, in some cases, patient harm. These usability issues may in turn contribute to HIT utilization disparities and patient safety concerns, particularly among “non-typical” HIT users and their health care providers. Health care disparities are associated with poor health outcomes, premature death, and increased health care costs. HIT has the potential to reduce these disparate outcomes. In the computer science field, it has long been recognized that embedded cultural assumptions can reduce the usability, usefulness, and safety of HIT systems for populations whose characteristics differ from “stereotypical” users. Among these non-typical users, inappropriate embedded design assumptions may contribute to health care disparities. It is unclear how to address potentially inappropriate embedded HIT design assumptions once detected. Objective The objective of this paper is to explain HIT universal design principles derived from the human factors engineering literature that can help to overcome potential usability and/or patient safety issues that are associated with unrecognized, embedded assumptions about cultural groups when designing HIT systems. Methods Existing best practices, guidance, and standards in software usability and accessibility were subjected to a 5-step expert review process to identify and summarize those best practices, guidance, and standards that could help identify and/or address embedded design assumptions in HIT that could negatively impact patient safety, particularly for non-majority HIT user populations. An iterative consensus-based process was then used to derive evidence-based design principles from the data to address potentially inappropriate embedded cultural assumptions. Results Design principles that may help identify and address embedded HIT design assumptions are available in the existing literature. Conclusions Evidence-based HIT design principles derived from existing human factors and informatics literature can help HIT developers identify and address embedded cultural assumptions that may underlie HIT-associated usability and patient safety concerns as well as health care disparities. PMID:27025349
Gibbons, Michael C; Lowry, Svetlana Z; Patterson, Emily S
2014-12-18
There is growing recognition that design flaws in health information technology (HIT) lead to increased cognitive work, impact workflows, and produce other undesirable user experiences that contribute to usability issues and, in some cases, patient harm. These usability issues may in turn contribute to HIT utilization disparities and patient safety concerns, particularly among "non-typical" HIT users and their health care providers. Health care disparities are associated with poor health outcomes, premature death, and increased health care costs. HIT has the potential to reduce these disparate outcomes. In the computer science field, it has long been recognized that embedded cultural assumptions can reduce the usability, usefulness, and safety of HIT systems for populations whose characteristics differ from "stereotypical" users. Among these non-typical users, inappropriate embedded design assumptions may contribute to health care disparities. It is unclear how to address potentially inappropriate embedded HIT design assumptions once detected. The objective of this paper is to explain HIT universal design principles derived from the human factors engineering literature that can help to overcome potential usability and/or patient safety issues that are associated with unrecognized, embedded assumptions about cultural groups when designing HIT systems. Existing best practices, guidance, and standards in software usability and accessibility were subjected to a 5-step expert review process to identify and summarize those best practices, guidance, and standards that could help identify and/or address embedded design assumptions in HIT that could negatively impact patient safety, particularly for non-majority HIT user populations. An iterative consensus-based process was then used to derive evidence-based design principles from the data to address potentially inappropriate embedded cultural assumptions. Design principles that may help identify and address embedded HIT design assumptions are available in the existing literature. Evidence-based HIT design principles derived from existing human factors and informatics literature can help HIT developers identify and address embedded cultural assumptions that may underlie HIT-associated usability and patient safety concerns as well as health care disparities.
ERIC Educational Resources Information Center
Schweppe, Judith; Rummer, Ralf
2007-01-01
The general idea of language-based accounts of short-term memory is that retention of linguistic materials is based on representations within the language processing system. In the present sentence recall study, we address the question whether the assumption of shared representations holds for morphosyntactic information (here: grammatical gender…
ERIC Educational Resources Information Center
Williams, Nida W.
2012-01-01
This qualitative case study was designed to explore how master teachers in transfer high schools learn the competencies they perceive are required to engage at-risk students so that they persist and graduate. The study is based on the following assumptions: (1) The requisite teacher competencies can be defined and identified and, in fact,…
Fault and event tree analyses for process systems risk analysis: uncertainty handling formulations.
Ferdous, Refaul; Khan, Faisal; Sadiq, Rehan; Amyotte, Paul; Veitch, Brian
2011-01-01
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed. © 2010 Society for Risk Analysis.
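A hedged sketch of the flavour of such an analysis: interval (imprecise) probabilities for two basic events combined through an AND gate, with a dependency coefficient interpolating between independence and perfect positive dependence. The interpolation rule below is an illustrative choice, not necessarily the exact formulation of the article:

```python
def and_gate(p1, p2, dep=0.0):
    """Interval probability of an AND gate.

    p1, p2: (low, high) probability intervals for the basic events.
    dep:    dependency coefficient, 0 = independent, 1 = perfectly positively dependent.
    """
    def combine(a, b):
        independent = a * b
        dependent = min(a, b)          # Frechet upper bound for positive dependence
        return (1.0 - dep) * independent + dep * dependent

    return (combine(p1[0], p2[0]), combine(p1[1], p2[1]))

# Imprecise likelihoods for two basic events (illustrative values).
pump_fails   = (0.01, 0.03)
valve_sticks = (0.02, 0.05)

for dep in (0.0, 0.5, 1.0):
    low, high = and_gate(pump_fails, valve_sticks, dep)
    print(f"dependency {dep:.1f}: top event probability in [{low:.5f}, {high:.5f}]")
```

The spread between the independent and fully dependent cases shows why the interdependence assumption can matter as much as the likelihood values themselves.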
Statistical Mechanical Derivation of Jarzynski's Identity for Thermostated Non-Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Cuendet, Michel A.
2006-03-01
The recent Jarzynski identity (JI) relates thermodynamic free energy differences to nonequilibrium work averages. Several proofs of the JI have been provided on the thermodynamic level. They rely on assumptions such as equivalence of ensembles in the thermodynamic limit or weakly coupled infinite heat baths. However, the JI is widely applied to NVT computer simulations involving finite numbers of particles, whose equations of motion are strongly coupled to a few extra degrees of freedom modeling a thermostat. In this case, the above assumptions are no longer valid. We propose a statistical mechanical approach to the JI solely based on the specific equations of motion, without any further assumption. We provide a detailed derivation for the non-Hamiltonian Nosé-Hoover dynamics, which is routinely used in computer simulations to produce canonical sampling.
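For reference, the Jarzynski identity discussed above relates the exponential average of the nonequilibrium work W to the equilibrium free energy difference ΔF:

```latex
\big\langle e^{-\beta W} \big\rangle \;=\; e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_{\mathrm B} T},
```

where the average is taken over realizations of the switching process started from canonical equilibrium; the contribution of the paper is to derive this relation directly from the Nosé-Hoover equations of motion rather than from thermodynamic-limit arguments.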
INFLUENCE OF STRATIGRAPHY ON A DIVING MTBE PLUME AND ITS CHARACTERIZATION: A CASE STUDY
Conventional conceptual models applied at petroleum release sites are often based on assumptions of vertical contaminant migration through the vadose zone followed by horizontal, downgradient transport at the water table with limited, if any, additional downward migration. Howev...
Practical Stereology Applications for the Pathologist.
Brown, Danielle L
2017-05-01
Qualitative histopathology is the gold standard for routine examination of morphological tissue changes in the regulatory or academic environment. The human eye is exceptional for pattern recognition but often cannot detect small changes in quantity. In cases where detection of subtle quantitative changes is critical, more sensitive methods are required. Two-dimensional histomorphometry can provide additional quantitative information and is quite useful in many cases. However, the provided data may not be referent to the entire tissue and, as such, it makes several assumptions, which are sources of bias. In contrast, stereology is design based rather than assumption based and uses stringent sampling methods to obtain accurate and precise 3-dimensional information using geometrical and statistical principles. Recent advances in technology have made stereology more approachable and practical for the pathologist in both regulatory and academic environments. This review introduces pathologists to the basic principles of stereology and walks the reader through some real-world examples for the application of these principles in the workplace.
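As one concrete example of a design-based estimator of the kind stereology provides, the Cavalieri method estimates a structure's volume from systematically spaced sections with a known point grid (the symbols below follow the standard textbook formulation rather than any specific protocol in this review):

```latex
\widehat{V} \;=\; T \cdot \frac{a}{p} \cdot \sum_{i=1}^{n} P_i ,
```

where T is the distance between sections, a/p the area associated with each grid point, and P_i the number of points hitting the structure on section i; because the sampling is systematic and uniformly random, the estimate is unbiased without assumptions about the structure's shape.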
Hua, Wei; Sun, Guoying; Dodd, Caitlin N; Romio, Silvana A; Whitaker, Heather J; Izurieta, Hector S; Black, Steven; Sturkenboom, Miriam C J M; Davis, Robert L; Deceuninck, Genevieve; Andrews, N J
2013-08-01
The assumption that the occurrence of outcome event must not alter subsequent exposure probability is critical for preserving the validity of the self-controlled case series (SCCS) method. This assumption is violated in scenarios in which the event constitutes a contraindication for exposure. In this simulation study, we compared the performance of the standard SCCS approach and two alternative approaches when the event-independent exposure assumption was violated. Using the 2009 H1N1 and seasonal influenza vaccines and Guillain-Barré syndrome as a model, we simulated a scenario in which an individual may encounter multiple unordered exposures and each exposure may be contraindicated by the occurrence of outcome event. The degree of contraindication was varied at 0%, 50%, and 100%. The first alternative approach used only cases occurring after exposure with follow-up time starting from exposure. The second used a pseudo-likelihood method. When the event-independent exposure assumption was satisfied, the standard SCCS approach produced nearly unbiased relative incidence estimates. When this assumption was partially or completely violated, two alternative SCCS approaches could be used. While the post-exposure cases only approach could handle only one exposure, the pseudo-likelihood approach was able to correct bias for both exposures. Violation of the event-independent exposure assumption leads to an overestimation of relative incidence which could be corrected by alternative SCCS approaches. In multiple exposure situations, the pseudo-likelihood approach is optimal; the post-exposure cases only approach is limited in handling a second exposure and may introduce additional bias, thus should be used with caution. Copyright © 2013 John Wiley & Sons, Ltd.
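For intuition, in the simplest SCCS setting (one exposure, one post-exposure risk window, identical observation and risk periods across cases, no age adjustment) the relative incidence has a closed form; the sketch below uses made-up counts and person-time and ignores the event-dependent exposure issue that the simulation study addresses:

```python
def sccs_relative_incidence(events_risk, events_control, days_risk, days_control):
    """Closed-form MLE of the relative incidence for a single risk window, no age effects."""
    rate_risk = events_risk / days_risk
    rate_control = events_control / days_control
    return rate_risk / rate_control

# Hypothetical aggregated data across cases (illustrative only).
ri = sccs_relative_incidence(events_risk=12, events_control=48,
                             days_risk=42 * 60, days_control=323 * 60)
print(f"relative incidence estimate: {ri:.2f}")
```

When the event can prevent or delay subsequent exposure, this within-person comparison is distorted, which is why the pseudo-likelihood and post-exposure-only variants discussed above exist.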
NASA Astrophysics Data System (ADS)
Yan, Banghua; Stamnes, Knut; Toratani, Mitsuhiro; Li, Wei; Stamnes, Jakob J.
2002-10-01
For the atmospheric correction of ocean-color imagery obtained over Case I waters with the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) instrument the method currently used to relax the black-pixel assumption in the near infrared (NIR) relies on (1) an approximate model for the nadir NIR remote-sensing reflectance and (2) an assumption that the water-leaving radiance is isotropic over the upward hemisphere. Radiance simulations based on a comprehensive radiative-transfer model for the coupled atmosphere-ocean system and measurements of the nadir remote-sensing reflectance at 670 nm compiled in the SeaWiFS Bio-optical Algorithm Mini-Workshop (SeaBAM) database are used to assess the validity of this method. The results show that (1) it is important to improve the flexibility of the reflectance model to provide more realistic predictions of the nadir NIR water-leaving reflectance for different ocean regions and (2) the isotropic assumption should be avoided in the retrieval of ocean color, if the chlorophyll concentration is larger than approximately 6, 10, and 40 mg m-3 when the aerosol optical depth is approximately 0.05, 0.1, and 0.3, respectively. Finally, we extend our scope to Case II ocean waters to gain insight and enhance our understanding of the NIR aspects of ocean color. The results show that the isotropic assumption is invalid in a wider range than in Case I waters owing to the enhanced water-leaving reflectance resulting from oceanic sediments in the NIR wavelengths.
Jit, Mark; Bilcke, Joke; Mangen, Marie-Josée J; Salo, Heini; Melliez, Hugues; Edmunds, W John; Yazdan, Yazdanpanah; Beutels, Philippe
2009-10-19
Cost-effectiveness analyses are usually not directly comparable between countries because of differences in analytical and modelling assumptions. We investigated the cost-effectiveness of rotavirus vaccination in five European Union countries (Belgium, England and Wales, Finland, France and the Netherlands) using a single model, burden of disease estimates supplied by national public health agencies and a subset of common assumptions. Under base case assumptions (vaccination with Rotarix, 3% discount rate, health care provider perspective, no herd immunity and quality of life of one caregiver affected by a rotavirus episode) and a cost-effectiveness threshold of €30,000, vaccination is likely to be cost effective in Finland only. However, single changes to assumptions may make it cost effective in Belgium and the Netherlands. The estimated threshold price per dose for Rotarix (excluding administration costs) to be cost effective was €41 in Belgium, €28 in England and Wales, €51 in Finland, €36 in France and €46 in the Netherlands.
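As a hedged illustration of the decision rule applied above (not the study's actual model or inputs), the comparison reduces to checking whether the incremental cost per QALY gained falls below the chosen threshold; every figure in the sketch below is an invented placeholder.

```python
# Back-of-envelope incremental cost-effectiveness ratio (ICER) check.
# All numbers are invented placeholders, not the study's inputs.
cost_vaccination = 50_000_000   # programme cost (EUR)
cost_offset = 18_000_000        # treatment costs avoided (EUR)
qalys_gained = 1_200            # quality-adjusted life-years gained

icer = (cost_vaccination - cost_offset) / qalys_gained
threshold = 30_000              # EUR per QALY, as in the base case above
print(f"ICER = {icer:,.0f} EUR/QALY -> cost effective: {icer <= threshold}")
```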
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
Optimal policy for value-based decision-making
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-01-01
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
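A minimal simulation sketch of the kind of process described above: a drift-diffusion accumulator whose decision boundaries collapse over time. The drift, noise, and linear collapse schedule are arbitrary assumptions chosen for illustration, not the authors' derived optimal policy.

```python
# Drift-diffusion trial with a linearly collapsing decision boundary.
import numpy as np

rng = np.random.default_rng(0)
dt, drift, noise_sd, b0, collapse_rate, t_max = 0.001, 0.5, 1.0, 1.0, 0.5, 3.0

def one_trial():
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        bound = max(b0 - collapse_rate * t, 0.05)   # collapsing boundary
        if abs(x) >= bound:
            return (x > 0, t)                       # (chose option 1?, decision time)
    return (x > 0, t_max)

choices, times = zip(*(one_trial() for _ in range(2000)))
print("P(choose option 1) =", np.mean(choices), " mean decision time =", np.mean(times))
```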
Comparing process-based breach models for earthen embankments subjected to internal erosion
USDA-ARS's Scientific Manuscript database
Predicting the potential flooding from a dam site requires prediction of outflow resulting from breach. Conservative estimates from the assumption of instantaneous breach or from an upper envelope of historical cases are readily computed, but these estimates do not reflect the properties of a speci...
A Framework for Designing Scaffolds that Improve Motivation and Cognition
ERIC Educational Resources Information Center
Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.
2013-01-01
A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b)…
A Case Study in Conflict Management.
ERIC Educational Resources Information Center
Chase, Lawrence J.; Smith, Val R.
This paper presents a model for a message-centered theory of human conflict based on the assumption that conflict will result from the pairing of any two functional messages that share a common antecedent but contain different consequences with oppositely signed affect. The paper first shows how to represent conflict situations diagrammatically…
ERIC Educational Resources Information Center
Thacker, Rebecca A.; Gohmann, Stephen F.
1993-01-01
Discusses the "reasonable woman" standard in sexual harassment cases and gender-based differences in defining harassment. Investigates the issue of these differences in the emotional and psychological effects of hostile environments, using data from a survey of 8,523 public employees. (SK)
ESTIMATE OF METHANE EMISSIONS FROM THE U.S. NATURAL GAS INDUSTRY
Global methane emissions from the fossil fuel industries have been poorly quantified and, in many cases, are not well known even at the country level. Historically, methane emissions from the U.S. gas industry have been based on sparse data, incorrect assumptions, or both. As a r...
Education by Choice: The Case for Family Control.
ERIC Educational Resources Information Center
Coons, John E.; Sugarman, Stephen D.
This book examines the philosophical issues, possible variations, and implementation of voucher plans of educational choice. The voucher system proposed here (the Quality Choice Model) is based on the assumption that a voucher system can ensure the equal provision of educational resources to children regardless of residential mobility or ability…
Environmental Interfaces in Teaching Economic Statistics
ERIC Educational Resources Information Center
Campos, Celso; Wodewotzki, Maria Lucia; Jacobini, Otavio; Ferrira, Denise
2016-01-01
The objective of this article is, based on the Critical Statistics Education assumptions, to value some environmental interfaces in teaching Statistics by modeling projects. Due to this, we present a practical case, one in which we address an environmental issue, placed in the context of the teaching of index numbers, within the Statistics…
Kelly, Christopher; Pashayan, Nora; Munisamy, Sreetharan; Powles, John W
2009-06-30
Our aim was to estimate the burden of fatal disease attributable to excess adiposity in England and Wales in 2003 and 2015 and to explore the sensitivity of the estimates to the assumptions and methods used. A spreadsheet implementation of the World Health Organization's (WHO) Comparative Risk Assessment (CRA) methodology for continuously distributed exposures was used. For our base case, adiposity-related risks were assumed to be minimal with a mean (SD) BMI of 21 (1) Kg m-2. All cause mortality risks for 2015 were taken from the Government Actuary and alternative compositions by cause derived. Disease-specific relative risks by BMI were taken from the CRA project and varied in sensitivity analyses. Under base case methods and assumptions for 2003, approximately 41,000 deaths and a loss of 1.05 years of life expectancy were attributed to excess adiposity. Seventy-seven percent of all diabetic deaths, 23% of all ischaemic heart disease deaths and 14% of all cerebrovascular disease deaths were attributed to excess adiposity. Predictions for 2015 were found to be more sensitive to assumptions about the future course of mortality risks for diabetes than to variation in the assumed trend in BMI. On less favourable assumptions the attributable loss of life expectancy in 2015 would rise modestly to 1.28 years. Excess adiposity appears to contribute materially but modestly to mortality risks in England and Wales and this contribution is likely to increase in the future. Uncertainty centres on future trends of associated diseases, especially diabetes. The robustness of these estimates is limited by the lack of control for correlated risks by stratification and by the empirical uncertainty surrounding the effects of prolonged excess adiposity beginning in adolescence.
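To sketch the CRA-style calculation referred to above under simple assumptions: the attributable (potential impact) fraction compares the population-averaged relative risk under the observed BMI distribution with that under the counterfactual minimal-risk distribution (mean 21, SD 1). The relative-risk function and the observed BMI distribution below are invented placeholders, not the study's inputs.

```python
# Potential impact fraction for a continuously distributed exposure (BMI).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def rr(bmi, rr_per_unit=1.08, ref=21.0):
    # assumed log-linear relative risk above the reference BMI
    return np.exp(np.log(rr_per_unit) * np.maximum(bmi - ref, 0.0))

observed = norm(27.0, 4.5)        # assumed observed BMI distribution
counterfactual = norm(21.0, 1.0)  # minimal-risk distribution (base case above)

mean_rr_obs, _ = quad(lambda b: rr(b) * observed.pdf(b), 10, 60)
mean_rr_cf, _ = quad(lambda b: rr(b) * counterfactual.pdf(b), 10, 60)
pif = (mean_rr_obs - mean_rr_cf) / mean_rr_obs
print(f"attributable fraction of deaths: {pif:.1%}")
```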
Abusive Administration: A Case Study
ERIC Educational Resources Information Center
Jefferson, Anne L.
2006-01-01
In the academic world, there is an assumption of reasonable administrative conduct. In fact, to ensure such conduct, universities, like other public institutions, may have collective agreements to reinforce this assumption. However, in some cases, the university as employer can be very quick off the mark should any faculty member wander into what it…
Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind
Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions about the disease process, and the choice of time advance.
Analyzing the impact of modeling choices and assumptions in compartmental epidemiological models
Nutaro, James J.; Pullum, Laura L.; Ramanathan, Arvind; ...
2016-05-01
Computational models have become increasingly used as part of modeling, predicting, and understanding how infectious diseases spread within large populations. These models can be broadly classified into differential equation-based models (EBM) and agent-based models (ABM). Both types of models are central in helping public health officials design intervention strategies in case of large epidemic outbreaks. We examine these models in the context of illuminating their hidden assumptions and the impact these may have on the model outcomes. Very few ABM/EBMs are evaluated for their suitability to address a particular public health concern, and drawing relevant conclusions about their suitability requires reliable and relevant information regarding the different modeling strategies and associated assumptions. Hence, there is a need to determine how the different modeling strategies, choices of various parameters, and the resolution of information for EBMs and ABMs affect outcomes, including predictions of disease spread. In this study, we present a quantitative analysis of how the selection of model types (i.e., EBM vs. ABM), the underlying assumptions that are enforced by model types to model the disease propagation process, and the choice of time advance (continuous vs. discrete) affect the overall outcomes of modeling disease spread. Our study reveals that the magnitude and velocity of the simulated epidemic depend critically on the selection of modeling principles, various assumptions about the disease process, and the choice of time advance.
Locomotion of neutrally buoyant fish with flexible caudal fin.
Iosilevskii, Gil
2016-06-21
Historically, burst-and-coast locomotion strategies have been given two very different explanations. The first one was based on the assumption that the drag of an actively swimming fish is greater than the drag of the same fish in motionless glide. Fish reduce the cost of locomotion by swimming actively during a part of the swimming interval, and gliding through the remaining part. The second one was based on the assumption that muscles perform efficiently only if their contraction rate exceeds a certain threshold. Fish reduce the cost of locomotion by using an efficient contraction rate during a part of the swimming interval, and gliding through the remaining part. In this paper, we suggest yet a third explanation. It is based on the assumption that propulsion efficiency of a swimmer can increase with thrust. Fish reduce the cost of locomotion by alternating high thrust, and hence more efficient, bursts with passive glides. The paper presents a formal analysis of the respective burst-and-coast strategy, shows that the locomotion efficiency can be practically as high as the propulsion efficiency during burst, and shows that the other two explanations can be considered particular cases of the present one. Copyright © 2016 Elsevier Ltd. All rights reserved.
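A toy numerical illustration of the third explanation, under strong simplifying assumptions (the same mean thrust per unit distance regardless of duty cycle, speed fluctuations during the glide ignored, and an assumed saturating efficiency curve): concentrating thrust into short bursts raises propulsion efficiency and lowers the energy cost per unit distance.

```python
# Energy cost per distance for steady swimming versus burst-and-coast,
# assuming propulsive efficiency rises (and saturates) with thrust.
def eta(thrust, eta_max=0.8, half_sat=1.0):
    return eta_max * thrust / (thrust + half_sat)   # assumed efficiency curve

drag = 1.0   # mean thrust required per unit distance (arbitrary units)
for duty in (1.0, 0.5, 0.25):
    burst_thrust = drag / duty            # same mean thrust over the cycle
    cost = drag / eta(burst_thrust)       # energy per unit distance
    print(f"duty cycle {duty:.2f}: cost per distance = {cost:.2f}")
```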
Mining Temporal Patterns to Improve Agents Behavior: Two Case Studies
NASA Astrophysics Data System (ADS)
Fournier-Viger, Philippe; Nkambou, Roger; Faghihi, Usef; Nguifo, Engelbert Mephu
We propose two mechanisms for agent learning based on the idea of mining temporal patterns from agent behavior. The first one consists of extracting temporal patterns from the perceived behavior of other agents accomplishing a task, in order to learn the task. The second learning mechanism consists of extracting temporal patterns from an agent's own behavior. In this case, the agent then reuses patterns that brought self-satisfaction. In both cases, no assumption is made about how the observed agents' behavior is internally generated. A case study with a real application is presented to illustrate each learning mechanism.
Rama, Ranganathan; Shanta, Viswanathan
2008-01-01
Objective To measure the bias in absolute cancer survival estimates in the absence of active follow-up of cancer patients in developing countries. Methods Included in the study were all incident cases of the 10 most common cancers and corresponding subtypes plus all tobacco-related cancers not ranked among the top 10 that were registered in the population-based cancer registry in Chennai, India, during 1990–1999 and followed through 2001. Registered incident cases were first matched with those in the all-cause mortality database from the vital statistics division of the Corporation of Chennai. Unmatched incident cancer cases were then actively followed up to determine their survival status. Absolute survival was estimated by using an actuarial method and applying different assumptions regarding the survival status (alive/dead) of cases under passive and active follow-up. Findings Before active follow-up, matches between cases ranged from 20% to 66%, depending on the site of the primary tumour. Active follow-up of unmatched incident cases revealed that 15% to 43% had died by the end of the follow-up period, while the survival status of 4% to 38% remained unknown. Before active follow-up of cancer patients, 5-year absolute survival was estimated to be between 22% and 47% higher than when conventional actuarial assumption methods were applied to cases that were lost to follow-up. The smallest survival estimates were obtained when cases lost to follow-up were excluded from the analysis. Conclusion Under the conditions that prevail in India and other developing countries, active follow-up of cancer patients yields the most reliable estimates of cancer survival rates. Passive case follow-up alone or applying standard methods to estimate survival is likely to result in an upward bias. PMID:18670662
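The sketch below illustrates, with invented cohort numbers, why the follow-up assumption matters in the actuarial (life-table) method used above: treating unmatched cases as alive inflates survival relative to using the deaths and censoring information recovered by active follow-up.

```python
# Actuarial (life-table) survival under two follow-up assumptions.
def actuarial_survival(n_start, deaths, withdrawn):
    """deaths[i], withdrawn[i]: events in year i; withdrawals count half an interval."""
    surv, at_risk = 1.0, float(n_start)
    for d, w in zip(deaths, withdrawn):
        effective = at_risk - w / 2.0          # standard actuarial adjustment
        surv *= 1.0 - d / effective
        at_risk -= d + w
    return surv

deaths_matched = [30, 20, 15, 10, 5]   # deaths found by record linkage only
extra_deaths = [10, 8, 6, 4, 2]        # additional deaths found by active follow-up
still_lost = [0, 5, 5, 5, 5]           # untraced even after active follow-up

passive = actuarial_survival(200, deaths_matched, [0] * 5)   # unmatched assumed alive
active = actuarial_survival(200, [d + e for d, e in zip(deaths_matched, extra_deaths)], still_lost)
print(f"5-year survival, passive follow-up: {passive:.2f}; active follow-up: {active:.2f}")
```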
Area, length and thickness conservation: Dogma or reality?
NASA Astrophysics Data System (ADS)
Moretti, Isabelle; Callot, Jean Paul
2012-08-01
The basic assumption of quantitative structural geology is the preservation of material during deformation. However the hypothesis of volume conservation alone does not help to predict past or future geometries and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be on average valid. However, neither the length nor the thicknesses are preserved. We propose that in real cases, the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.
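For concreteness, the excess-area relation mentioned above can be written (in generic notation, not the authors') as shortening equals excess area divided by the depth to the décollement; the numbers in the sketch below are arbitrary.

```python
# Chamberlin excess-area balance: the area uplifted above the regional datum
# equals the displacement on the detachment times the depth to that detachment.
excess_area_km2 = 12.0   # assumed excess area above the regional datum (km^2)
depth_km = 8.0           # assumed depth to the decollement (km)
print("implied shortening:", excess_area_km2 / depth_km, "km")
```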
SARP: a value-based approach to hospice admissions triage.
MacDonald, D
1995-01-01
As hospices become established and case referrals increase, many programs are faced with the necessity of instituting waiting lists. Prioritizing cases for order of admission requires a triage method that is rational, fair, and consistent. This article describes the SARP method of hospice admissions triage, which evaluates prospective cases according to seniority, acuity, risk, and political significance. SARP's essential features, operative assumptions, advantages, and limitations are discussed, as well as the core hospice values which underlie its use. The article concludes with a call for trial and evaluation of SARP in other hospice settings.
Takeuchi, Yoshinori; Shinozaki, Tomohiro; Matsuyama, Yutaka
2018-01-08
Despite the frequent use of self-controlled methods in pharmacoepidemiological studies, the factors that may bias the estimates from these methods have not been adequately compared in real-world settings. Here, we comparatively examined the impact of a time-varying confounder and its interactions with time-invariant confounders, time trends in exposures and events, restrictions, and misspecification of risk period durations on the estimators from three self-controlled methods. This study analyzed self-controlled case series (SCCS), case-crossover (CCO) design, and sequence symmetry analysis (SSA) using simulated and actual electronic medical records datasets. We evaluated the performance of the three self-controlled methods in simulated cohorts for the following scenarios: 1) time-invariant confounding with interactions between the confounders, 2) time-invariant and time-varying confounding without interactions, 3) time-invariant and time-varying confounding with interactions among the confounders, 4) time trends in exposures and events, 5) restricted follow-up time based on event occurrence, and 6) patient restriction based on event history. The sensitivity of the estimators to misspecified risk period durations was also evaluated. As a case study, we applied these methods to evaluate the risk of macrolides on liver injury using electronic medical records. In the simulation analysis, time-varying confounding produced bias in the SCCS and CCO design estimates, which aggravated in the presence of interactions between the time-invariant and time-varying confounders. The SCCS estimates were biased by time trends in both exposures and events. Erroneously short risk periods introduced bias to the CCO design estimate, whereas erroneously long risk periods introduced bias to the estimates of all three methods. Restricting the follow-up time led to severe bias in the SSA estimates. The SCCS estimates were sensitive to patient restriction. The case study showed that although macrolide use was significantly associated with increased liver injury occurrence in all methods, the value of the estimates varied. The estimations of the three self-controlled methods depended on various underlying assumptions, and the violation of these assumptions may cause non-negligible bias in the resulting estimates. Pharmacoepidemiologists should select the appropriate self-controlled method based on how well the relevant key assumptions are satisfied with respect to the available data.
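Of the three designs compared above, sequence symmetry analysis is the simplest to sketch: among people who experience both the exposure and the event, the crude sequence ratio compares how often exposure precedes the event with how often it follows it. The dates below are invented, and the null-effect (trend) adjustment used in practice is omitted.

```python
# Crude sequence ratio for a toy set of patients with both exposure and event.
exposure_day = [10, 40, 55, 80, 120, 200, 230, 300]
event_day    = [30, 35, 90, 70, 150, 260, 210, 320]

exposure_first = sum(e < v for e, v in zip(exposure_day, event_day))
event_first = sum(v < e for e, v in zip(exposure_day, event_day))
print("crude sequence ratio:", exposure_first / event_first)
```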
Tay, Richard
2016-03-01
The binary logistic model has been extensively used to analyze traffic collision and injury data where the outcome of interest has two categories. However, the assumption of a symmetric distribution may not be a desirable property in some cases, especially when there is a significant imbalance in the two categories of outcome. This study compares the standard binary logistic model with the skewed logistic model in two cases in which the symmetry assumption is violated in one but not the other case. The differences in the estimates, and thus the marginal effects obtained, are significant when the assumption of symmetry is violated. Copyright © 2015 Elsevier Ltd. All rights reserved.
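A hedged sketch of the contrast described above, using a scobit-type skewed logistic link in which an extra shape parameter breaks the symmetry of the response curve around 0.5; setting that parameter to 1 recovers the standard logistic link. This illustrates the functional forms only, not the paper's fitted collision models.

```python
# Symmetric logistic link versus a skewed (scobit-type) link.
import numpy as np

def logistic(xb):
    return 1.0 / (1.0 + np.exp(-xb))

def skewed_logistic(xb, alpha):
    # P(y=1) = 1 - (1 + exp(xb))**(-alpha); alpha = 1 gives the logistic link
    return 1.0 - (1.0 + np.exp(xb)) ** (-alpha)

xb = np.linspace(-4, 4, 9)
print(np.round(logistic(xb), 3))
print(np.round(skewed_logistic(xb, alpha=0.3), 3))   # strongly imbalanced outcome
```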
Monocular correspondence detection for symmetrical objects by template matching
NASA Astrophysics Data System (ADS)
Vilmar, G.; Besslich, Philipp W., Jr.
1990-09-01
We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a "second view" from a different viewpoint by a simple reflection of the monocular image. Therefore we have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle our approach is based on frequency-domain template matching of the features on the epipolar lines. During a training period our system "learns" the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This fact is an important advantage of this methodology because no "real world" image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g., passport photos), but our system is trainable on any other kind of object.
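A minimal numerical sketch of the core idea: reflect the single image to create a synthetic 'second view' and locate the corresponding feature on an epipolar line by frequency-domain correlation. The one-dimensional signal and the circular-correlation shortcut are illustrative assumptions, not the authors' trained system.

```python
# Frequency-domain correspondence search between a signal and its reflection.
import numpy as np

N = 256
row = np.zeros(N)
row[58:63] = [1.0, 2.0, 3.0, 2.0, 1.0]   # a feature on one epipolar line
mirrored = row[::-1].copy()              # synthetic "second view" by reflection

# circular cross-correlation via the correlation theorem
corr = np.fft.ifft(np.conj(np.fft.fft(row)) * np.fft.fft(mirrored)).real
print("feature offset between the two views:", int(np.argmax(corr)))   # 135 here
```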
Conundrums in neurology: diagnosing serotonin syndrome - a meta-analysis of cases.
Werneke, Ursula; Jamshidi, Fariba; Taylor, David M; Ott, Michael
2016-07-12
Serotonin syndrome is a toxic state, caused by serotonin (5HT) excess in the central nervous system. Serotonin syndrome's main feature is neuro-muscular hyperexcitability, which in many cases is mild but in some cases can become life-threatening. The diagnosis of serotonin syndrome remains challenging since it can only be made on clinical grounds. Three diagnostic criteria systems, Sternbach, Radomski and Hunter classifications, are available. Here we test the validity of four assumptions that have become widely accepted: (1) The Hunter classification performs clinically better than the Sternbach and Radomski criteria; (2) in contrast to neuroleptic malignant syndrome, the onset of serotonin syndrome is usually rapid; (3) hyperthermia is a hallmark of severe serotonin syndrome; and (4) serotonin syndrome can readily be distinguished from neuroleptic malignant syndrome on clinical grounds and on the basis of medication history. Systematic review and meta-analysis of all cases of serotonin syndrome and toxicity published between 2004 and 2014, using PubMed and Web of Science. Two of the four assumptions (1 and 2) are based on only one published study each and have not been independently validated. There is little agreement between current criteria systems for the diagnosis of serotonin syndrome. Although frequently thought to be the gold standard for the diagnosis of the serotonin syndrome, the Hunter criteria did not perform better than the Sternbach and Radomski criteria. Not all cases seem to be of rapid onset and only relatively few cases may present with hyperthermia. The differential diagnosis between serotonin syndrome and neuroleptic malignant syndrome is not always clear-cut. Our findings challenge four commonly made assumptions about serotonin syndrome. We propose our meta-analysis of cases (MAC) method as a new way to systematically pool and interpret anecdotal but important clinical information concerning uncommon or emergent phenomena that cannot be captured in any other way but through case reports.
Enterprise Education Needs Enterprising Educators: A Case Study on Teacher Training Provision
ERIC Educational Resources Information Center
Penaluna, Kathryn; Penaluna, Andy; Usei, Caroline; Griffiths, Dinah
2015-01-01
Purpose: The purpose of this paper is to reflect upon the process that underpinned and informed the development and delivery of a "creativity-led" credit-bearing teacher training provision and to illuminate key factors of influence for the approaches to teaching and learning. Design/methodology/approach: Based on the assumption that…
Discourse in Adult Education: The Language Education of Adult Immigrants in Sweden.
ERIC Educational Resources Information Center
Hill, Hannah
1990-01-01
A shortcoming of adult education theories is lack of attention to social, historical, and institutional contexts. A case study of language education programs for adult immigrants in Sweden illustrates how assumptions about participant-centered, needs-based education justified and legitimated the use of these programs as a tool for employment…
Learning to Internalize Action Dialogue
ERIC Educational Resources Information Center
Cotter, Teresa Ellen
2011-01-01
The purpose of this case study was to explore how participants of a communications workshop, "Action Dialogue," perceived their ability to engage in dialogue was improved and enhanced. The study was based on the following assumptions: (1) dialogue skills can be learned and people are able to learn these skills; (2) context and emotion influence…
ERIC Educational Resources Information Center
van der Lecq, Ria
2016-01-01
This article reports the results of a qualitative case study investigating the self-authorship characteristics of learners in the context of an interdisciplinary curriculum. The study identifies the students' assumptions about knowledge, self, and relationships. The findings are based on evidence from reflective essays written by students upon…
Bill Wilkins as a Model for Sensitivity Training.
ERIC Educational Resources Information Center
Smith, Henry C.
This case study is presented as a model for a sensitivity training program planned at Michigan State University. The goals, procedures, and criteria for conducting a program are illustrated. Based on the assumption that empathy is the mainspring of impression formation, and that empathy and evaluation interact, the goal is accurate evaluation.…
Performance evaluation of nonhomogeneous hospitals: the case of Hong Kong hospitals.
Li, Yongjun; Lei, Xiyang; Morton, Alec
2018-02-14
Throughout the world, hospitals are under increasing pressure to become more efficient. Efficiency analysis tools can play a role in giving policymakers insight into which units are less efficient and why. Many researchers have studied efficiencies of hospitals using data envelopment analysis (DEA) as an efficiency analysis tool. However, in the existing literature on DEA-based performance evaluation, a standard assumption of the constant returns to scale (CRS) or the variable returns to scale (VRS) DEA models is that decision-making units (DMUs) use a similar mix of inputs to produce a similar set of outputs. In fact, hospitals with different primary goals supply different services and provide different outputs. That is, hospitals are nonhomogeneous and the standard assumption of the DEA model is not applicable to the performance evaluation of nonhomogeneous hospitals. This paper considers the nonhomogeneity among hospitals in the performance evaluation and takes hospitals in Hong Kong as a case study. An extension of Cook et al. (2013) [1] based on the VRS assumption is developed to evaluate nonhomogeneous hospitals' efficiencies, since inputs of hospitals vary greatly. Following the philosophy of Cook et al. (2013) [1], hospitals are divided into homogeneous groups and the production process of each hospital is divided into subunits. The performance of hospitals is measured on the basis of subunits. The proposed approach can be applied to measure the performance of other nonhomogeneous entities that exhibit variable returns to scale.
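For readers unfamiliar with the baseline model whose homogeneity assumption the paper relaxes, below is a minimal input-oriented, variable-returns-to-scale (BCC) DEA sketch solved as a linear program; the input and output data are invented, and the subunit decomposition of Cook et al. (2013) is not implemented.

```python
# Input-oriented VRS (BCC) DEA efficiency for each DMU via linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 8.0, 7.0, 4.0],     # inputs (rows) x DMUs (columns)
              [14.0, 15.0, 12.0, 10.0]])
Y = np.array([[9.0, 5.0, 4.0, 6.0]])    # outputs (rows) x DMUs (columns)
m, n = X.shape
s = Y.shape[0]

def vrs_efficiency(o):
    # decision vector: [theta, lambda_1 ... lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],           # sum(lambda * x) <= theta * x_o
                     [np.zeros((s, 1)), -Y]])   # sum(lambda * y) >= y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)    # VRS: sum(lambda) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

print([round(vrs_efficiency(o), 3) for o in range(n)])
```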
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlin, C.J.; Japp, B.; Simpson, E.R.
There is an assumption that radioactive plaques placed at surgery are, and will remain, in proper relationship to the base of the tumor. The plaque dose is calculated based on this assumption. In fact, factors such as loose sutures, improper diameter estimations, pressure from adjacent rectus muscles, and intervening tissue (oblique muscles) can compromise this relationship. Ultrasound provides a practical method of imaging the tumor and plaque simultaneously. The authors have used postoperative ultrasound to monitor the accuracy of iodine-125 plaque placement in nine cases. Detection of eccentrically placed and malpositioned plaques provides valuable insight which can be used to refine surgical technique. Detection of plaque tilting by oblique muscles can serve as a basis for recalculating dosage. The relationship of plaque margins to vital ocular structures such as the optic nerve can also be determined by ultrasound.
Lunar in-core thermionic nuclear reactor power system conceptual design
NASA Technical Reports Server (NTRS)
Mason, Lee S.; Schmitz, Paul C.; Gallup, Donald R.
1991-01-01
This paper presents a conceptual design of a lunar in-core thermionic reactor power system. The concept consists of a thermionic reactor located in a lunar excavation with surface mounted waste heat radiators. The system was integrated with a proposed lunar base concept representative of recent NASA Space Exploration Initiative studies. The reference mission is a permanently-inhabited lunar base requiring a 550 kWe, 7 year life central power station. Performance parameters and assumptions were based on the Thermionic Fuel Element (TFE) Verification Program. Five design cases were analyzed ranging from conservative to advanced. The cases were selected to provide sensitivity effects on the achievement of TFE program goals.
Annual Energy Outlook 2016 With Projections to 2040
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
The Annual Energy Outlook 2016 (AEO2016), prepared by the U.S. Energy Information Administration (EIA), presents long-term projections of energy supply, demand, and prices through 2040. The projections, focused on U.S. energy markets, are based on results from EIA’s National Energy Modeling System (NEMS). NEMS enables EIA to make projections under alternative, internally consistent sets of assumptions. The analysis in AEO2016 focuses on the Reference case and 17 alternative cases. EIA published an Early Release version of the AEO2016 Reference case (including U.S. Environmental Protection Agency’s (EPA) Clean Power Plan (CPP)) and a No CPP case (excluding the CPP) in May 2016.
How to make a particular case for person-centred patient care: A commentary on Alexandra Parvan.
Graham, George
2018-06-14
In recent years, a person-centred approach to patient care in cases of mental illness has been promoted as an alternative to a disease orientated approach. Alexandra Parvan's contribution to the person-centred approach serves to motivate an exploration of the approach's most apt metaphysical assumptions. I argue that a metaphysical thesis or assumption about both persons and their uniqueness is an essential element of being person-centred. I apply the assumption to issues such as the disorder/disease distinction and to the continuity of mental health and illness. © 2018 John Wiley & Sons, Ltd.
Using Human Givens Therapy to Support the Well-Being of Adolescents: A Case Example
ERIC Educational Resources Information Center
Yates, Yvonne; Atkinson, Cathy
2011-01-01
This article outlines the use of Human Givens (HG) therapy with adolescents reporting poor subjective well-being. HG therapy is based on the assumption that human beings have innate needs, which, if unmet, lead to emotional distress and mental health problems. Hitherto, there has been no independently published empirical research into the efficacy…
ERIC Educational Resources Information Center
Schmid, Euline Cutrim; Hegelheimer, Volker
2014-01-01
This paper presents research findings of a longitudinal empirical case study that investigated an innovative Computer Assisted Language Learning (CALL) professional development program for pre-service English as Foreign Language (EFL) teachers. The conceptualization of the program was based on the assumption that pre-service language teachers…
ERIC Educational Resources Information Center
Tay, Elaine; Allen, Matthew
2011-01-01
Using the example of an undergraduate unit of study that is taught both on-campus and externally, but uses Internet-based learning in both cases, we explore how social media might be used effectively in higher education. We place into question the assumption that such technologies necessarily engage students in constructivist learning; we argue…
Universities and Innovation in the Knowledge Economy: Cases from English Regions
ERIC Educational Resources Information Center
Higher Education Management and Policy, 2005
2005-01-01
The last decade has seen a growing increase in policy discourse in many countries on entrepreneurship and innovation with a prominent emphasis on the role to be played by universities. However, it is far from clear to what extent institutional behaviours are influenced by this enterprising policy discourse based on the broad assumption that…
ERIC Educational Resources Information Center
James, Wendy Michelle
2013-01-01
Science and engineering instructors often observe that students have difficulty using or applying prerequisite mathematics knowledge in their courses. This qualitative project uses a case-study method to investigate the instruction in a trigonometry course and a physics course based on a different methodology and set of assumptions about student…
Exploring a fourth dimension: spirituality as a resource for the couple therapist.
Anderson, D A; Worthen, D
1997-01-01
This article explores ways in which the therapist's own spirituality can serve as a resource in couple therapy. Spirituality is defined as subjective engagement with a fourth, transcendent dimension of human experience. This engagement enhances human life and evokes corresponding behavior. Spiritually based therapy may be influenced by three assumptions: that God or a Divine Being exists, that human-kind yearns innately for connection with this Being, and that this Being is interested in humans and acts upon and within their relationships to promote beneficial change. In therapy these assumptions affect how the therapist listens and responds throughout sessions. The authors incorporate a case example illustrating the application of this fourth dimension in couple therapy.
Stochastic analysis of surface roughness models in quantum wires
NASA Astrophysics Data System (ADS)
Nedjalkov, Mihail; Ellinghaus, Paul; Weinbub, Josef; Sadi, Toufik; Asenov, Asen; Dimov, Ivan; Selberherr, Siegfried
2018-07-01
We present a signed particle computational approach for the Wigner transport model and use it to analyze the electron state dynamics in quantum wires, focusing on the effect of surface roughness. Usually, surface roughness is treated as a scattering model, accounted for by the Fermi Golden Rule, which relies on approximations such as statistical averaging and, in the case of quantum wires, incorporates quantum corrections based on the mode space approach. We provide a novel computational approach to enable physical analysis of these assumptions in terms of phase space and particles. We utilize the signed-particle model of Wigner evolution, which, besides providing a full quantum description of the electron dynamics, enables intuitive insights into the processes of tunneling, which govern the physical evolution. It is shown that the basic assumptions of the quantum-corrected scattering model correspond to the quantum behavior of the electron system. Of particular importance is the distribution of the density: Due to the quantum confinement, electrons are kept away from the walls, which is in contrast to the classical scattering model. Further quantum effects are retardation of the electron dynamics and quantum reflection. Far from equilibrium, the assumption of homogeneous conditions along the wire breaks down even in the case of ideal wire walls.
Cow-specific treatment of clinical mastitis: an economic approach.
Steeneveld, W; van Werven, T; Barkema, H W; Hogeveen, H
2011-01-01
Under Dutch circumstances, most clinical mastitis (CM) cases of cows on dairy farms are treated with a standard intramammary antimicrobial treatment. Several antimicrobial treatments are available for CM, differing in antimicrobial compound, route of application, duration, and cost. Because cow factors (e.g., parity, stage of lactation, and somatic cell count history) and the causal pathogen influence the probability of cure, cow-specific treatment of CM is often recommended. The objective of this study was to determine if cow-specific treatment of CM is economically beneficial. Using a stochastic Monte Carlo simulation model, 20,000 CM cases were simulated. These CM cases were caused by Streptococcus uberis and Streptococcus dysgalactiae (40%), Staphylococcus aureus (30%), or Escherichia coli (30%). For each simulated CM case, the consequences of using different antimicrobial treatment regimens (standard 3-d intramammary, extended 5-d intramammary, combination 3-d intramammary+systemic, combination 3-d intramammary+systemic+1-d nonsteroidal antiinflammatory drugs, and combination extended 5-d intramammary+systemic) were simulated simultaneously. Finally, total costs of the 5 antimicrobial treatment regimens were compared. Some inputs for the model were based on literature information and assumptions made by the authors were used if no information was available. Bacteriological cure for each individual cow depended on the antimicrobial treatment regimen, the causal pathogen, and the cow factors parity, stage of lactation, somatic cell count history, CM history, and whether the cow was systemically ill. Total costs for each case depended on treatment costs for the initial CM case (including costs for antibiotics, milk withdrawal, and labor), treatment costs for follow-up CM cases, costs for milk production losses, and costs for culling. Average total costs for CM using the 5 treatments were (US) $224, $247, $253, $260, and $275, respectively. Average probabilities of bacteriological cure for the 5 treatments were 0.53, 0.65, 0.65, 0.68, and 0.75, respectively. For all different simulated CM cases, the standard 3-d intramammary antimicrobial treatment had the lowest total costs. The benefits of lower costs for milk production losses and culling for cases treated with the intensive treatments did not outweigh the higher treatment costs. The stochastic model was developed using information from the literature and assumptions made by the authors. Using these information sources resulted in a difference in effectiveness of different antimicrobial treatments for CM. Based on our assumptions, cow-specific treatment of CM was not economically beneficial. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
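A back-of-envelope sketch of the economic logic above, far simpler than the paper's stochastic simulation: the expected cost of a regimen is its treatment cost plus the probability of non-cure times an assumed cost of failure. The cure probabilities echo those reported above; the cost figures are invented.

```python
# Expected cost per clinical mastitis case for three illustrative regimens.
treatments = {
    "3-day intramammary":            {"cost": 60.0,  "p_cure": 0.53},
    "5-day intramammary":            {"cost": 95.0,  "p_cure": 0.65},
    "3-day intramammary + systemic": {"cost": 110.0, "p_cure": 0.65},
}
failure_cost = 320.0   # assumed cost of a non-cure (follow-up, milk loss, culling risk)

for name, t in treatments.items():
    expected = t["cost"] + (1.0 - t["p_cure"]) * failure_cost
    print(f"{name}: expected cost ~ ${expected:.0f}")
```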
Borner, Kelsey B; Canter, Kimberly S; Lee, Robert H; Davis, Ann M; Hampl, Sarah; Chuang, Ian
2016-09-01
Pediatric obesity presents a significant burden. However, family-based behavioral group (FBBG) obesity interventions are largely uncovered by our health care system. The present study uses Return on Investment (ROI) and Internal Rate of Return (IRR) analyses to analyze the business side of FBBG interventions. ROI and IRR were calculated to determine longitudinal cost-effectiveness of a FBBG intervention. Multiple simulations of cost savings are projected using three estimated trajectories of weight change and variations in assumptions. The baseline model of child savings gives an average IRR of 0.2% ± 0.08% and an average ROI of 20.8% ± 0.4%, which represents a break-even IRR and a positive ROI. More pessimistic simulations result in negative IRR values. Under certain assumptions, FBBGs offer a break-even proposition. Results are limited by lack of data regarding several assumptions, and future research should evaluate changes in cost savings following changes in child and adult weight. © The Author 2016. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
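A small sketch of the two metrics used above: ROI compares net savings with programme cost, and IRR is the discount rate at which the net present value of the cash-flow stream is zero. The cash flows below are invented placeholders, not the study's data.

```python
# ROI and IRR for an illustrative intervention cash-flow stream.
from scipy.optimize import brentq

cashflows = [-1000.0, 150.0, 200.0, 250.0, 300.0, 350.0]   # year-0 cost, later savings

def npv(rate):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

irr = brentq(npv, -0.9, 1.0)          # root of NPV(rate) = 0
roi = (sum(cashflows[1:]) + cashflows[0]) / -cashflows[0]
print(f"IRR = {irr:.1%}, ROI = {roi:.1%}")
```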
NASA Astrophysics Data System (ADS)
Potters, Jan; Leuridan, Bert
2017-05-01
This article concerns the way in which philosophers study the epistemology of scientific thought experiments. Starting with a general overview of the main contemporary philosophical accounts, we will first argue that two implicit assumptions are present therein: first, that the epistemology of scientific thought experiments is solely concerned with factual knowledge of the world; and second, that philosophers should account for this in terms of the way in which individuals in general contemplate these thought experiments in thought. Our goal is to evaluate these assumptions and their implications using a particular case study: Albert Einstein's magnet-conductor thought experiment. We will argue that an analysis of this thought experiment based on these assumptions - as John Norton (1991) provides - is, in a sense, both misguided (the thought experiment by itself did not lead Einstein to factual knowledge of the world) and too narrow (to understand the thought experiment's epistemology, its historical context should also be taken into account explicitly). Based on this evaluation we propose an alternative philosophical approach to the epistemology of scientific thought experiments which is more encompassing while preserving what is of value in the dominant view.
Kelly, Christopher; Pashayan, Nora; Munisamy, Sreetharan; Powles, John W
2009-01-01
Background Our aim was to estimate the burden of fatal disease attributable to excess adiposity in England and Wales in 2003 and 2015 and to explore the sensitivity of the estimates to the assumptions and methods used. Methods A spreadsheet implementation of the World Health Organization's (WHO) Comparative Risk Assessment (CRA) methodology for continuously distributed exposures was used. For our base case, adiposity-related risks were assumed to be minimal with a mean (SD) BMI of 21 (1) Kg m-2. All cause mortality risks for 2015 were taken from the Government Actuary and alternative compositions by cause derived. Disease-specific relative risks by BMI were taken from the CRA project and varied in sensitivity analyses. Results Under base case methods and assumptions for 2003, approximately 41,000 deaths and a loss of 1.05 years of life expectancy were attributed to excess adiposity. Seventy-seven percent of all diabetic deaths, 23% of all ischaemic heart disease deaths and 14% of all cerebrovascular disease deaths were attributed to excess adiposity. Predictions for 2015 were found to be more sensitive to assumptions about the future course of mortality risks for diabetes than to variation in the assumed trend in BMI. On less favourable assumptions the attributable loss of life expectancy in 2015 would rise modestly to 1.28 years. Conclusion Excess adiposity appears to contribute materially but modestly to mortality risks in England and Wales and this contribution is likely to increase in the future. Uncertainty centres on future trends of associated diseases, especially diabetes. The robustness of these estimates is limited by the lack of control for correlated risks by stratification and by the empirical uncertainty surrounding the effects of prolonged excess adiposity beginning in adolescence. PMID:19566928
Assumptions of Statistical Tests: What Lies Beneath.
Jupiter, Daniel C
We have discussed many statistical tests and tools in this series of commentaries, and while we have mentioned the underlying assumptions of the tests, we have not explored them in detail. We stop to look at some of the assumptions of the t-test and linear regression, justify and explain them, mention what can go wrong when the assumptions are not met, and suggest some solutions in this case. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
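As a practical companion to the commentary above, here is a minimal sketch of how two of the t-test's assumptions can be probed before running the test; the simulated data and the chosen diagnostics (Shapiro-Wilk for normality, Levene for equal variances) are illustrative, not prescriptive.

```python
# Quick assumption checks before a two-sample t-test, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(10.0, 2.0, size=40)
group_b = rng.normal(11.0, 2.0, size=40)

_, p_normal_a = stats.shapiro(group_a)           # normality within group A
_, p_equal_var = stats.levene(group_a, group_b)  # equality of variances
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)

print(f"Shapiro p = {p_normal_a:.3f}, Levene p = {p_equal_var:.3f}, t-test p = {p_value:.3f}")
```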
Ishikawa, Hirofumi; Shimogawara, Rieko; Fueda, Kaoru
2017-01-01
In the summer of 2014, an outbreak of autochthonous dengue fever occurred in Yoyogi Park and its vicinity, Tokyo, Japan. In this study, we investigated how the dengue fever outbreak progressed in Yoyogi Park using a mathematical model. This study was limited to the transmission of the dengue virus in Yoyogi Park and its vicinity. We estimated the distributions of the intrinsic incubation period and infection dates on the basis of epidemiological information on the dengue outbreak in 2014. We searched for an assumption that satisfactorily explains the outbreak in 2014 using rough estimates of secondary and tertiary infection cases. We constructed a mathematical model for the transmission of the dengue virus between humans and Aedes albopictus. We carried out 1,000-trial stochastic simulations for all combinations of three kinds of assumption about Ae. albopictus and asymptomatic infection with each of three levels. Simulation results showed that the scale of the outbreak was markedly affected by the daily survival rate of Ae. albopictus. The outbreak involved a small number of secondary infection cases, reached a peak at tertiary infection, and transformed to termination at the fourth infection. Under some assumptions, the daily progress of onset cases was within a range between the 1st-3rd quartiles of 1,000 trials for 87% of dates and within a range between the minimum and maximum for all dates. It is important to execute plans to detect asymptomatic cases and reduce the survival rate of Ae. albopictus to prevent the spread of tertiary infections unless an outbreak is suppressed at the secondary infection stage.
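A much-reduced branching-process sketch of the generation structure described above (secondary, tertiary, and fourth-generation cases), in which the effective reproduction number is tied to the mosquito's chance of surviving the extrinsic incubation period; all parameter values are illustrative guesses, not the paper's calibrated host-vector model.

```python
# Toy branching process for generations of autochthonous dengue cases.
import numpy as np

rng = np.random.default_rng(7)
daily_survival = 0.85       # assumed Aedes albopictus daily survival rate
eip_days = 10               # assumed extrinsic incubation period (days)
contacts_per_case = 3.0     # assumed infectious-bite opportunities per case

def simulate(initial_cases=1, generations=4):
    sizes = [initial_cases]
    r_eff = contacts_per_case * daily_survival ** eip_days   # ~0.6 here: subcritical
    for _ in range(generations):
        offspring = rng.poisson(r_eff, size=sizes[-1]).sum()
        sizes.append(int(offspring))
    return sizes

print("cases per generation:", simulate())
```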
ERIC Educational Resources Information Center
Herrera, Tony Isaac
2010-01-01
This qualitative case study was designed to explore whether and how a sample of domestic and international managers use two key adult education concepts--critical reflection and experiential learning--to influence changes in individual employees whom they coach. The study is based on the primary assumption that although managers do not…
Turning Teachers into Designers: The Case of the Ark of Inquiry
ERIC Educational Resources Information Center
De Vries, Bregje; Schouwenaars, Ilona; Stokhof, Harry
2017-01-01
The Ark of Inquiry seeks to support inquiry-based science education (IBSE) in different countries and school systems across Europe by teachers that may differ in light of their prior experiences with IBSE. Given the differences, the assumption is that teachers need to make adaptations to the approach and materials of the Ark of Inquiry. This study…
ERIC Educational Resources Information Center
Sells, Scott P.
A model for treating difficult adolescents and their families is presented. Part 1 offers six basic assumptions about the causes of severe behavioral problems and presents the treatment model with guidelines necessary to address each of these six causes. Case examples highlight and clarify major points within each of the 15 procedural steps of the…
T.P. Holmes; E.A. Murphy; D.D. Royle
2005-01-01
In this paper, we provide preliminary estimates of the impacts of the hemlock woolly adelgid on residential property values in Sparta, New Jersey, using the hedonic property value method. The literature on the aesthetic perceptions of forest landscapes is briefly reviewed to provide guidance in formulating economic hypotheses based on the assumption of an informative...
A challenging dissociation in masked identity priming with the lexical decision task.
Perea, Manuel; Jiménez, María; Gómez, Pablo
2014-05-01
The masked priming technique has been used extensively to explore the early stages of visual-word recognition. One key phenomenon in masked priming lexical decision is that identity priming is robust for words, whereas it is small/unreliable for nonwords. This dissociation has usually been explained on the basis that masked priming effects are lexical in nature, and hence there should not be an identity prime facilitation for nonwords. We present two experiments whose results are at odds with the assumption made by models that postulate that identity priming is purely lexical, and also challenge the assumption that word and nonword responses are based on the same information. Our experiments revealed that for nonwords, but not for words, matched-case identity PRIME-TARGET pairs were responded to faster than mismatched-case identity prime-TARGET pairs, and this phenomenon was not modulated by the lowercase/uppercase feature similarity of the stimuli. Copyright © 2014 Elsevier B.V. All rights reserved.
Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D
2017-01-01
Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases from ~10% for the diffusive assumption to ~30% when full environment forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protect pelagic fish and provide significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests as an effective alternative to managing highly mobile pelagic stocks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutta, Abhijit; Sahir, A. H.; Tan, Eric
This report was developed as part of the U.S. Department of Energy’s Bioenergy Technologies Office’s efforts to enable the development of technologies for the production of infrastructure-compatible, cost-competitive liquid hydrocarbon fuels from biomass. Specifically, this report details two conceptual designs based on projected product yields and quality improvements via catalyst development and process integration. It is expected that these research improvements will be made within the 2022 timeframe. The two conversion pathways detailed are (1) in situ and (2) ex situ upgrading of vapors produced from the fast pyrolysis of biomass. While the base case conceptual designs and underlying assumptions outline performance metrics for feasibility, it should be noted that these are only two of many possibilities in this area of research. Other promising process design options emerging from the research will be considered for future techno-economic analysis. Both the in situ and ex situ conceptual designs, using the underlying assumptions, project MFSPs of approximately $3.5/gallon gasoline equivalent (GGE). The performance assumptions for the ex situ process were more aggressive with higher distillate (diesel-range) products. This was based on an assumption that more favorable reaction chemistry (such as coupling) can be made possible in a separate reactor where, unlike in an in situ upgrading reactor, one does not have to deal with catalyst mixing with biomass char and ash, which pose challenges to catalyst performance and maintenance. Natural gas was used for hydrogen production, but only when off-gases from the process were not sufficient to meet the needs; natural gas consumption is insignificant in both the in situ and ex situ base cases. Heat produced from the burning of char, coke, and off-gases allows for the production of surplus electricity which is sold to the grid, allowing a reduction of approximately 5¢/GGE in the MFSP.
[The fourth horseman: The yellow fever].
Vallejos-Parás, Alfonso; Cabrera-Gaytán, David Alejandro
2017-01-01
Three viruses (dengue, chikungunya, and Zika) have entered the national territory through the south of the country. Cases and outbreaks of yellow fever have now been identified in the Americas, where it threatens to expand. Although Mexico has a robust epidemiological surveillance system for vector-borne diseases, our country must be alert in case of its possible introduction into the national territory. This paper presents theoretical assumptions based on factual data on the behavior of yellow fever in the Americas, as well as reflections on the epidemiological surveillance of vector-borne diseases.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
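A sketch of the kind of adjustment motivated above (not necessarily the paper's exact estimator): under a compound Poisson model the variance of the death count is estimated by the sum of squared incident sizes rather than by the raw count, which widens the rate interval when multiple-fatality incidents occur. The incident data below are invented.

```python
# Naive Poisson versus compound Poisson interval width for a mortality rate.
import math

incident_sizes = [1] * 180 + [2] * 12 + [5] * 2   # deaths per incident (toy data)
population = 2_500_000
deaths = sum(incident_sizes)

rate = deaths / population * 100_000              # deaths per 100,000

se_poisson = math.sqrt(deaths) / population * 100_000                               # all deaths independent
se_compound = math.sqrt(sum(y * y for y in incident_sizes)) / population * 100_000  # clustered deaths

print(f"rate = {rate:.2f} per 100,000")
print(f"95% half-width, naive Poisson:    {1.96 * se_poisson:.2f}")
print(f"95% half-width, compound Poisson: {1.96 * se_compound:.2f}")
```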
Equilibrium pricing in an order book environment: Case study for a spin model
NASA Astrophysics Data System (ADS)
Meudt, Frederik; Schmitt, Thilo A.; Schäfer, Rudi; Guhr, Thomas
2016-07-01
When modeling stock market dynamics, the price formation is often based on an equilibrium mechanism. In real stock exchanges, however, the price formation is governed by the order book. It is thus interesting to check if the resulting stylized facts of a model with equilibrium pricing change, remain the same or, more generally, are compatible with the order book environment. We tackle this issue in the framework of a case study by embedding the Bornholdt-Kaizoji-Fujiwara spin model into the order book dynamics. To this end, we use a recently developed agent based model that realistically incorporates the order book. We find realistic stylized facts. We conclude for the studied case that equilibrium pricing is not needed and that the corresponding assumption of a 'fundamental' price may be abandoned.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
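For orientation, the classical two-sample change-in-ratio estimator that this model generalizes can be written down in a few lines; the numbers below are invented for illustration, not from the paper.

```python
def cir_estimate(p1, p2, removed_x, removed_total):
    """Classical two-sample change-in-ratio estimate of initial population size.

    p1, p2        : observed proportions of subclass x before and after removals
    removed_x     : number of subclass-x animals removed between surveys
    removed_total : total removals between surveys
    Assumes equal encounter probabilities for both subclasses within each survey.
    """
    if p1 == p2:
        raise ValueError("p1 must differ from p2 for the estimator to exist")
    n1 = (removed_x - p2 * removed_total) / (p1 - p2)
    x1 = p1 * n1  # estimated subclass-x abundance at time 1
    return n1, x1

# Invented example: 60% males before a male-only harvest of 300 animals, 45% after.
n1_hat, x1_hat = cir_estimate(p1=0.60, p2=0.45, removed_x=300, removed_total=300)
print(f"Estimated initial population: {n1_hat:.0f} (males: {x1_hat:.0f})")
```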
All-optical Integrated Switches Based on Azo-benzene Liquid Crystals on Silicon
2011-11-01
Refractive indices at 1.55 µm: E7 nematic liquid crystal n∥ = 1.689, n⊥ = 1.502; SU8 polymer n = 1.575; D263 glass n = 1.516. In the other case we have a nonlinear liquid-crystal waveguide (LCW) based on glass substrates, consisting of a rectangular hollow realized in SU8 photoresist between two glass... Report sections include: 5. All-optical polymeric waveguide: methods, assumptions and procedure; 6. All-optical polymeric waveguide: results and discussion.
The probability of misassociation between neighboring targets
NASA Astrophysics Data System (ADS)
Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald
2008-04-01
This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular (covariance-weighted) norm of the difference between the targets' predicted measurements. For the Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global vs. Nearest Neighbor association. These results are applied to problems of single sensor as well as centralized fusion architecture multiple sensor tracking.
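A quick Monte Carlo check of the nearest-neighbor case is straightforward: draw the two innovations from their (here, equal) covariances and count how often the extraneous measurement falls closer, in the covariance-weighted sense, to the prediction of the target of interest. The setup below is illustrative only and does not reproduce the paper's closed-form result; the covariance and separation values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 2
S = np.array([[4.0, 0.5], [0.5, 3.0]])   # common innovation covariance (assumed)
delta = np.array([3.0, 1.0])             # difference of predicted measurements (assumed)
S_inv = np.linalg.inv(S)

n_trials = 200_000
# Measurement of interest relative to its own prediction, and the extraneous
# target's measurement relative to the same prediction (offset by delta).
z_own = rng.multivariate_normal(np.zeros(dim), S, size=n_trials)
z_ext = rng.multivariate_normal(delta, S, size=n_trials)

def mahal2(v):
    """Squared covariance-weighted (Mahalanobis) distance for each row of v."""
    return np.einsum("ij,jk,ik->i", v, S_inv, v)

# Nearest-neighbor rule: associate whichever measurement is closer to the prediction.
misassoc = mahal2(z_ext) < mahal2(z_own)
print(f"Estimated NN misassociation probability: {misassoc.mean():.3f}")
print(f"Covariance-weighted separation d^2 = {delta @ S_inv @ delta:.2f}")
```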
Pattison, J E
2007-01-01
The purpose of the study reported here was to investigate two important assumptions used in a recently reported new method of estimating inbreeding in large, relatively isolated populations over historic times. The method, based on modeling the genealogical "paradox," produces values of Pearl's coefficients, Z, a measure of inbreeding or genealogical coalescence, as a function of time. In this study, the effects on inbreeding of two important assumptions made in earlier studies, namely those of using a constant generation length and of ignoring migration, have been investigated for the population of Britain. First, by relating the median age of women at childbirth to the development level of various societies, the variation of the generation length for different periods in historic Britain was estimated. Values of Z for two types of varying generation lengths were then calculated and compared with the case of constant generation length. Second, the population curve for Britain used in earlier studies was modified to obtain the subpopulation at any time during the past two millennia that was descended from the pre-Roman British Celts. Values of Z for the case with migration were then calculated and compared with the case for no migration. It is shown that these two assumptions may be taken into account if and when required. Both the effect of a varying generation length and the effect of migration on Z were found to be 20-40%, when no known value of inbreeding was used, and 2-5%, when a known value of inbreeding was used.
A prevalence-based association test for case-control studies.
Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M
2008-11-01
Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
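The general idea, as described, can be sketched as follows: pool cases and controls to estimate the population allele frequency, form Hardy-Weinberg expected genotype counts for each group, and compare observed to expected counts. This is only a schematic of the approach with invented counts and a heuristic degrees-of-freedom choice; it is not the published PRAT statistic.

```python
from scipy.stats import chi2

def hwe_expected(freq_a, n):
    """Expected AA/Aa/aa counts for allele frequency freq_a in n individuals."""
    p, q = freq_a, 1.0 - freq_a
    return [n * p * p, n * 2 * p * q, n * q * q]

def prevalence_style_test(case_counts, control_counts):
    """Schematic prevalence-based comparison (not the published PRAT statistic):
    the pooled sample estimates the population allele frequency, which then
    generates HWE-expected genotype counts for cases and controls separately."""
    pooled = [c + k for c, k in zip(case_counts, control_counts)]
    n_pooled = sum(pooled)
    freq_a = (2 * pooled[0] + pooled[1]) / (2 * n_pooled)

    stat = 0.0
    for observed in (case_counts, control_counts):
        expected = hwe_expected(freq_a, sum(observed))
        stat += sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    df = 4  # heuristic choice for this sketch: two groups x (3 genotype classes - 1)
    return stat, chi2.sf(stat, df)

# Invented genotype counts (AA, Aa, aa) for cases and controls.
stat, p_value = prevalence_style_test([120, 210, 70], [90, 220, 110])
print(f"statistic = {stat:.2f}, p = {p_value:.4f}")
```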
The Properties and the Nature of Light: The Study of Newton's Work and the Teaching of Optics
ERIC Educational Resources Information Center
Raftopoulos, Athanasios; Kalyfommatou, Niki; Constantinou, Constantinos P.
2005-01-01
The history of science shows that for each scientific issue there may be more than one model that is simultaneously accepted by the scientific community. One such case concerns the wave and corpuscular models of light. Newton claimed that he had proved some properties of light based on a set of minimal assumptions, without any commitments to any…
ERIC Educational Resources Information Center
Hattori, Masasi; Oaksford, Mike
2007-01-01
In this article, 41 models of covariation detection from 2 x 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in…
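For reference, the phi coefficient mentioned above is computed from the four cells of a 2 x 2 contingency table; a minimal implementation with an invented table:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table with cells
    [[a, b],   rows: cause present / absent
     [c, d]]   cols: effect present / absent."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else float("nan")

# Invented contingency table in which joint presences are rare (cf. the rarity assumption).
print(f"phi = {phi_coefficient(a=8, b=12, c=15, d=465):.3f}")
```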
NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES
He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.
2017-01-01
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
Nasejje, Justine B; Mwambi, Henry
2017-09-07
Uganda, just like any other Sub-Saharan African country, has a high under-five child mortality rate. To inform policy on intervention strategies, sound statistical methods are required to critically identify factors strongly associated with under-five child mortality rates. The Cox proportional hazards model has been a common choice in analysing data to understand factors strongly associated with high child mortality rates, taking age as the time-to-event variable. However, due to its restrictive proportional hazards (PH) assumption, some covariates of interest which do not satisfy the assumption are often excluded from the analysis to avoid mis-specifying the model. Otherwise, using covariates that clearly violate the assumption would mean invalid results. Survival trees and random survival forests are increasingly becoming popular in analysing survival data, particularly in the case of large survey data, and could be attractive alternatives to models with the restrictive PH assumption. In this article, we adopt random survival forests, which have never been used in understanding factors affecting under-five child mortality rates in Uganda using Demographic and Health Survey data. Thus the first part of the analysis is based on the use of the classical Cox PH model and the second part of the analysis is based on the use of random survival forests in the presence of covariates that do not necessarily satisfy the PH assumption. Random survival forests and the Cox proportional hazards model agree that the sex of the household head, the sex of the child, and the number of births in the past year are strongly associated with under-five child mortality in Uganda, given that all three covariates satisfy the PH assumption. Random survival forests further demonstrated that covariates that were originally excluded from the earlier analysis due to violation of the PH assumption were important in explaining under-five child mortality rates. These covariates include the number of children under the age of five in a household, the number of births in the past 5 years, wealth index, total number of children ever born and the child's birth order. The results further indicated that the predictive performance for random survival forests built using covariates including those that violate the PH assumption was higher than that for random survival forests built using only covariates that satisfy the PH assumption. Random survival forests are appealing methods for analysing public health data to understand factors strongly associated with under-five child mortality rates, especially in the presence of covariates that violate the proportional hazards assumption.
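A minimal sketch of this two-stage analysis is shown below, assuming a data frame with hypothetical column names (not the authors' variable names) and using the lifelines and scikit-survival packages, which may differ from the software actually used in the study.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

# Hypothetical DHS-style child records; the file and column names are assumptions.
df = pd.read_csv("dhs_under5.csv")
covariates = ["sex_child", "sex_head", "births_last_year", "wealth_index"]
X = pd.get_dummies(df[covariates], drop_first=True)   # numeric coding of covariates

# Stage 1: classical Cox PH model plus a check of the PH assumption.
cox_df = X.assign(age_months=df["age_months"], died=df["died"])
cph = CoxPHFitter().fit(cox_df, duration_col="age_months", event_col="died")
proportional_hazard_test(cph, cox_df, time_transform="rank").print_summary()

# Stage 2: random survival forest, which does not rely on the PH assumption,
# so covariates that violate it can be kept in the model.
y = Surv.from_arrays(event=df["died"].astype(bool), time=df["age_months"])
rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=15, random_state=0)
rsf.fit(X, y)
print("RSF concordance index (training):", rsf.score(X, y))
```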
Grisales Díaz, Víctor Hugo; Olivar Tost, Gerard
2017-01-01
Dual extraction, high-temperature extraction, mixture extraction, and oleyl alcohol extraction have been proposed in the literature for acetone, butanol, and ethanol (ABE) production. However, an energy and economic evaluation of extraction-based separation systems under similar assumptions is necessary. Hence, the new process proposed in this work for regeneration of high-boiling extractants, direct steam distillation (DSD), was compared with several extraction-based separation systems. The evaluation was performed under similar assumptions through simulation in Aspen Plus V7.3® software. Two end distillation systems (number of non-ideal stages between 70 and 80) were studied. Heat integration and vacuum operation of some units were proposed, reducing the energy requirements. The energy requirement of the hybrid processes, at a substrate concentration of 200 g/l, was between 6.4 and 8.3 MJ-fuel/kg-ABE. The minimum energy requirements of the extraction-based separation systems, feeding a water concentration in the substrate equivalent to the extractant selectivity and under ideal assumptions, were between 2.6 and 3.5 MJ-fuel/kg-ABE. The efficiencies of the recovery systems for the baseline case and the ideal evaluation were 0.53-0.57 and 0.81-0.84, respectively. The main advantages of DSD were the operation of the regeneration column at atmospheric pressure, the utilization of low-pressure steam, and the low energy requirements of preheating. The in situ recovery processes, DSD, and mixture extraction with conventional regeneration were the approaches with the lowest energy requirements and total annualized costs.
Stress Wave Interaction Between Two Adjacent Blast Holes
NASA Astrophysics Data System (ADS)
Yi, Changping; Johansson, Daniel; Nyberg, Ulf; Beyglou, Ali
2016-05-01
Rock fragmentation by blasting is determined by the level and state of stress in the rock mass subjected to blasting. With the application of electronic detonators, some researchers have stated that it is possible to achieve improved fragmentation through stress wave superposition with very short delay times. This hypothesis was studied through theoretical analysis in the paper. First, the stress in the rock mass induced by a single-hole shot was analyzed with the assumptions of infinite velocity of detonation and infinite charge length. Based on the stress analysis of a single-hole shot, the stress history and tensile stress distribution between two adjacent holes were presented for cases of simultaneous initiation and 1 ms delayed initiation via stress superposition. The results indicated that the stress wave interaction is local around the collision point. Then, the tensile stress distribution on the extended line of two adjacent blast holes was analyzed for a case of 2 ms delay. The analytical results showed that the tensile stress on the extended line increases due to the stress wave superposition under the assumption that the influence of the neighboring blast hole on the stress wave propagation can be neglected. However, the numerical results indicated that this assumption is unreasonable and yields contrary results. The feasibility of improving fragmentation via stress wave interaction with precise initiation was also discussed. The analysis in this paper does not support the claim that the interaction of stress waves improves fragmentation.
An Exploration of Dental Students' Assumptions About Community-Based Clinical Experiences.
Major, Nicole; McQuistan, Michelle R
2016-03-01
The aim of this study was to ascertain which assumptions dental students recalled feeling prior to beginning community-based clinical experiences and whether those assumptions were fulfilled or challenged. All fourth-year students at the University of Iowa College of Dentistry & Dental Clinics participate in community-based clinical experiences. At the completion of their rotations, they write a guided reflection paper detailing the assumptions they had prior to beginning their rotations and assessing the accuracy of their assumptions. For this qualitative descriptive study, the 218 papers from three classes (2011-13) were analyzed for common themes. The results showed that the students had a variety of assumptions about their rotations. They were apprehensive about working with challenging patients, performing procedures for which they had minimal experience, and working too slowly. In contrast, they looked forward to improving their clinical and patient management skills and knowledge. Other assumptions involved the site (e.g., the equipment/facility would be outdated; protocols/procedures would be similar to the dental school's). Upon reflection, students reported experiences that both fulfilled and challenged their assumptions. Some continued to feel apprehensive about treating certain patient populations, while others found it easier than anticipated. Students were able to treat multiple patients per day, which led to increased speed and patient management skills. However, some reported challenges with time management. Similarly, students were surprised to discover some clinics were new/updated although some had limited instruments and materials. Based on this study's findings about students' recalled assumptions and reflective experiences, educators should consider assessing and addressing their students' assumptions prior to beginning community-based dental education experiences.
Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard
2018-05-01
In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazard assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups or a composite time-to-first-event endpoint and several components are considered, the proportional hazard assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazard assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte-Carlo simulations and by a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards, and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.
Managing simulation-based training: A framework for optimizing learning, cost, and time
NASA Astrophysics Data System (ADS)
Richmond, Noah Joseph
This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation-based training (SBT) and reality-based training (RBT) as resources. Simulation is shown to be an effective means for implementing activity substitution as a way to reduce risk. The risk profiles of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used, and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints in cost and time, and also subject to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are next provided. Finally, a simulation strategy including an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.
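For context, the classical transfer effectiveness ratio (which this study extends by modeling TER as a function of simulator quality and activity requirements) compares the real-training time saved to the simulator time invested; a minimal sketch with invented numbers:

```python
def transfer_effectiveness_ratio(control_hours, experimental_hours, simulator_hours):
    """Classical TER: real-training hours saved per simulator hour invested.
    control_hours      : real training needed with no simulator use
    experimental_hours : real training needed after simulator pre-training
    simulator_hours    : simulator time invested
    """
    return (control_hours - experimental_hours) / simulator_hours

# Invented numbers for illustration only.
ter = transfer_effectiveness_ratio(control_hours=20.0,
                                   experimental_hours=14.0,
                                   simulator_hours=10.0)
print(f"TER = {ter:.2f} real hours saved per simulator hour")
```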
Shielding of medical imaging X-ray facilities: a simple and practical method.
Bibbo, Giovanni
2017-12-01
The most widely accepted method for shielding design of X-ray facilities is that contained in the National Council on Radiation Protection and Measurements Report 147 whereby the computation of the barrier thickness for primary, secondary and leakage radiations is based on the knowledge of the distances from the radiation sources, the assumptions of the clinical workload, and usage and occupancy of adjacent areas. The shielding methodology used in this report is complex. With this methodology, the shielding designers need to make assumptions regarding the use of the X-ray room and the adjoining areas. Different shielding designers may make different assumptions resulting in different shielding requirements for a particular X-ray room. A more simple and practical method is to base the shielding design on the shielding principle used to shield X-ray tube housing to limit the leakage radiation from the X-ray tube. In this case, the shielding requirements of the X-ray room would depend only on the maximum radiation output of the X-ray equipment regardless of workload, usage or occupancy of the adjacent areas of the room. This shielding methodology, which has been used in South Australia since 1985, has proven to be practical and, to my knowledge, has not led to excess shielding of X-ray installations.
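A rough numerical sketch of sizing a barrier from a transmission target is shown below. The Archer-equation fitting parameters and all input values are placeholders (assumptions for illustration only) and would have to be replaced by published values for the actual beam quality and shielding material; the simplified method described above would instead set the unshielded air kerma from the equipment's maximum output rather than from workload assumptions.

```python
import math

def required_transmission(design_limit_mgy_per_wk, unshielded_kerma_mgy_per_wk_at_1m, distance_m):
    """Barrier transmission factor B needed to meet a weekly design limit at a given distance."""
    return design_limit_mgy_per_wk * distance_m**2 / unshielded_kerma_mgy_per_wk_at_1m

def archer_thickness(B, alpha, beta, gamma):
    """Invert the Archer transmission model B(x) = [(1 + b/a) e^{a g x} - b/a]^(-1/g)
    for the barrier thickness x (same length units as 1/alpha)."""
    return (1.0 / (alpha * gamma)) * math.log((B ** (-gamma) + beta / alpha) /
                                              (1.0 + beta / alpha))

# Placeholder inputs -- NOT values to use for a real shielding design.
B = required_transmission(design_limit_mgy_per_wk=0.02,
                          unshielded_kerma_mgy_per_wk_at_1m=10.0,
                          distance_m=3.0)
x_mm_lead = archer_thickness(B, alpha=2.5, beta=15.0, gamma=0.9)  # placeholder fit parameters
print(f"Required transmission B = {B:.3f}, lead thickness ~ {x_mm_lead:.2f} mm (illustrative only)")
```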
Asteroid mass estimation with Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
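The core of such a sampler is a random-walk Metropolis step over the mass and orbital-element parameters. The sketch below shows only the generic mechanics, with a stand-in log-posterior in place of the orbit-propagation and astrometry likelihood the real problem requires; step sizes and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    """Stand-in for the real target: log-likelihood of the observed astrometry
    given perturber mass and orbital elements, plus priors. Here just a toy Gaussian."""
    return -0.5 * np.sum((theta / np.array([1.0, 0.5, 0.5])) ** 2)

def metropolis(theta0, step_sizes, n_steps=50_000):
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step_sizes * rng.standard_normal(theta.size)
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance test
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain

chain = metropolis(theta0=[0.0, 0.0, 0.0], step_sizes=np.array([0.5, 0.3, 0.3]))
burn = chain[10_000:]                     # discard burn-in
print("posterior means:", burn.mean(axis=0))
print("posterior std devs (no Gaussian shape assumed):", burn.std(axis=0))
```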
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
Is the Conduct of War a Business?
2010-01-01
speculators succumb to the hysteria as asset prices increase. Periodic bouts of irrational exuberance (a term coined in 1996 by Alan Greenspan) are...heart of sound business management. Economic theory is based on the assumption that all actors are rational. Nevertheless, irrationality plays a...collective irrational outcomes or so-called bubbles, as was the case in the U.S. housing collapse. In business activity, the relationship between a
Complete spacelike hypersurfaces in orthogonally splitted spacetimes
NASA Astrophysics Data System (ADS)
Colombo, Giulio; Rigoli, Marco
2017-10-01
We provide some "half-space theorems" for spacelike complete non-compact hypersurfaces into orthogonally splitted spacetimes. In particular we generalize some recent work of Rubio and Salamanca on maximal spacelike compact hypersurfaces. Besides compactness, we also relax some of their curvature assumptions and even consider the case of nonconstant mean curvature bounded from above. The analytic tools used in various arguments are based on some forms of the weak maximum principle.
NASA Technical Reports Server (NTRS)
Abdrashitov, G.
1943-01-01
An approximate theory of buffeting is here presented, based on the assumption of harmonic disturbing forces. Two cases of buffeting are considered: namely, for a tail angle of attack greater and less than the stalling angle, respectively. On the basis of the tests conducted and the results of foreign investigators, a general analysis is given of the nature of the forced vibrations, the possible load limits on the tail, and the methods of elimination of buffeting.
Reasoning with Incomplete and Uncertain Information
1991-08-01
are rationally compatible (just as is the case in the fundamental computational mechanisms of truth maintenance systems). The logics we construct will...complete, precise, and unvarying. This fundamental assumption is a principal source of the limitation of many diagnostic systems to single fault diagnoses...
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the ratio between the scaled variograms and the point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
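Directional variograms of grey values along horizontal, vertical and diagonal sampling lines can be computed directly from the image; the sketch below shows that step only (not the authors' estimator of the orientation distribution), using a synthetic anisotropic test image.

```python
import numpy as np

def directional_variogram(image, lag, direction):
    """Empirical variogram of grey values at a given pixel lag along one of
    four sampling-line directions."""
    z = image.astype(float)
    if direction == "horizontal":
        a, b = z[:, :-lag], z[:, lag:]
    elif direction == "vertical":
        a, b = z[:-lag, :], z[lag:, :]
    elif direction == "diag_down":
        a, b = z[:-lag, :-lag], z[lag:, lag:]
    elif direction == "diag_up":
        a, b = z[lag:, :-lag], z[:-lag, lag:]
    else:
        raise ValueError(direction)
    return 0.5 * np.mean((a - b) ** 2)

# Synthetic anisotropic test image: grey values vary slowly along rows,
# so the horizontal variogram at lag 1 is much smaller than the vertical one.
rng = np.random.default_rng(2)
img = np.cumsum(rng.normal(size=(256, 256)), axis=1)

for d in ("horizontal", "vertical", "diag_down", "diag_up"):
    print(f"{d:10s}: {directional_variogram(img, lag=1, direction=d):.3f}")
```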
"They just know": the epistemological politics of "evidence-based" non-formal education.
Archibald, Thomas
2015-02-01
Community education and outreach programs should be evidence-based. This dictum seems at once warranted, welcome, and slightly platitudinous. However, the "evidence-based" movement's more narrow definition of evidence--privileging randomized controlled trials as the "gold standard"--has fomented much debate. Such debate, though insightful, often lacks grounding in actual practice. To address that lack, the purpose of the study presented in this paper was to examine what actually happens, in practice, when people support the implementation of evidence-based programs (EBPs) or engage in related efforts to make non-formal education more "evidence-based." Focusing on three cases--two adolescent sexual health projects (one in the United States and one in Kenya) and one more general youth development organization--I used qualitative methods to address the questions: (1) How is evidence-based program and evidence-based practice work actually practiced? (2) What perspectives and assumptions about what non-formal education is are manifested through that work? and (3) What conflicts and tensions emerge through that work related to those perspectives and assumptions? Informed by theoretical perspectives on the intersection of science, expertise, and democracy, I conclude that the current dominant approach to making non-formal education more evidence-based by way of EBPs is seriously flawed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bursik, J. W.; Hall, R. M.
1980-01-01
The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, which are conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the entire set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent for the values of heat capacity and latent heat. An approximation of the exact, real-gas expression is also developed using a constant, two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high pressure case.
Finite element techniques in computational time series analysis of turbulent flows
NASA Astrophysics Data System (ADS)
Horenko, I.
2009-04-01
In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have, in general, infeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filter, MVAR, ARCH/GARCH, etc.) impose some strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of the data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example and the results will be compared with the ones obtained by standard approaches. The importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.
Li, Jinyu; Rossetti, Giulia; Dreyer, Jens; Raugei, Simone; Ippoliti, Emiliano; Lüscher, Bernhard; Carloni, Paolo
2014-01-01
Protein electrospray ionization (ESI) mass spectrometry (MS)-based techniques are widely used to provide insight into structural proteomics under the assumption that non-covalent protein complexes being transferred into the gas phase preserve basically the same intermolecular interactions as in solution. Here we investigate the applicability of this assumption by extending our previous structural prediction protocol for single proteins in ESI-MS to protein complexes. We apply our protocol to the human insulin dimer (hIns2) as a test case. Our calculations reproduce the main charge and the collision cross section (CCS) measured in ESI-MS experiments. Molecular dynamics simulations for 0.075 ms show that the complex maximizes intermolecular non-bonded interactions relative to the structure in water, without affecting the cross section. The overall gas-phase structure of hIns2 does exhibit differences with the one in aqueous solution, not inferable from a comparison with calculated CCS. Hence, care should be exerted when interpreting ESI-MS proteomics data based solely on NMR and/or X-ray structural information. PMID:25210764
Efficient Fair Exchange from Identity-Based Signature
NASA Astrophysics Data System (ADS)
Yum, Dae Hyun; Lee, Pil Joong
A fair exchange scheme is a protocol by which two parties Alice and Bob exchange items or services without allowing either party to gain an advantage by quitting prematurely or otherwise misbehaving. To this end, modern cryptographic solutions use a semi-trusted arbitrator who is involved only in cases where one party attempts to cheat or simply crashes. We call such a fair exchange scheme optimistic. When no registration is required between the signer and the arbitrator, we say that the fair exchange scheme is setup free. To date, a setup-free optimistic fair exchange scheme under the standard RSA assumption was only possible from the generic construction of [12], which uses ring signatures. In this paper, we introduce a new setup-free optimistic fair exchange scheme under the standard RSA assumption. Our scheme uses the GQ identity-based signature and is more efficient than [12]. The construction can also be generalized by using various identity-based signature schemes. Our main technique is to allow each user to choose his (or her) own “random” public key in the identity-based signature scheme.
In Search of a Pony: Sources, Methods, Outcomes, and Motivated Reasoning.
Stone, Marc B
2018-05-01
It is highly desirable to be able to evaluate the effect of policy interventions. Such evaluations should have expected outcomes based upon sound theory and be carefully planned, objectively evaluated and prospectively executed. In many cases, however, assessments originate with investigators' poorly substantiated beliefs about the effects of a policy. Instead of designing studies that test falsifiable hypotheses, these investigators adopt methods and data sources that serve as little more than descriptions of these beliefs in the guise of analysis. Interrupted time series analysis is one of the most popular forms of analysis used to present these beliefs. It is intuitively appealing but, in most cases, it is based upon false analogies, fallacious assumptions and analytical errors.
Models in palaeontological functional analysis
Anderson, Philip S. L.; Bright, Jen A.; Gill, Pamela G.; Palmer, Colin; Rayfield, Emily J.
2012-01-01
Models are a principal tool of modern science. By definition, and in practice, models are not literal representations of reality but provide simplifications or substitutes of the events, scenarios or behaviours that are being studied or predicted. All models make assumptions, and palaeontological models in particular require additional assumptions to study unobservable events in deep time. In the case of functional analysis, the degree of missing data associated with reconstructing musculoskeletal anatomy and neuronal control in extinct organisms has, in the eyes of some scientists, rendered detailed functional analysis of fossils intractable. Such a prognosis may indeed be realized if palaeontologists attempt to recreate elaborate biomechanical models based on missing data and loosely justified assumptions. Yet multiple enabling methodologies and techniques now exist: tools for bracketing boundaries of reality; more rigorous consideration of soft tissues and missing data and methods drawing on physical principles that all organisms must adhere to. As with many aspects of science, the utility of such biomechanical models depends on the questions they seek to address, and the accuracy and validity of the models themselves. PMID:21865242
ASP-G: an ASP-based method for finding attractors in genetic regulatory networks
Mushthofa, Mushthofa; Torres, Gustavo; Van de Peer, Yves; Marchal, Kathleen; De Cock, Martine
2014-01-01
Motivation: Boolean network models are suitable to simulate GRNs in the absence of detailed kinetic information. However, reducing the biological reality implies making assumptions on how genes interact (interaction rules) and how their state is updated during the simulation (update scheme). The exact choice of the assumptions largely determines the outcome of the simulations. In most cases, however, the biologically correct assumptions are unknown. An ideal simulation thus implies testing different rules and schemes to determine those that best capture an observed biological phenomenon. This is not trivial because most current methods to simulate Boolean network models of GRNs and to compute their attractors impose specific assumptions that cannot be easily altered, as they are built into the system. Results: To allow for a more flexible simulation framework, we developed ASP-G. We show the correctness of ASP-G in simulating Boolean network models and obtaining attractors under different assumptions by successfully recapitulating the detection of attractors of previously published studies. We also provide an example of how performing simulation of network models under different settings help determine the assumptions under which a certain conclusion holds. The main added value of ASP-G is in its modularity and declarativity, making it more flexible and less error-prone than traditional approaches. The declarative nature of ASP-G comes at the expense of being slower than the more dedicated systems but still achieves a good efficiency with respect to computational time. Availability and implementation: The source code of ASP-G is available at http://bioinformatics.intec.ugent.be/kmarchal/Supplementary_Information_Musthofa_2014/asp-g.zip. Contact: Kathleen.Marchal@UGent.be or Martine.DeCock@UGent.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25028722
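As a point of comparison for what ASP-G computes, attractors of a small Boolean network under a synchronous update scheme can be enumerated by brute force; the three-gene network and its interaction rules below are invented for illustration and have nothing to do with the published case studies.

```python
from itertools import product

# Invented three-gene network; each rule maps the current state to a gene's next value.
rules = {
    "A": lambda s: s["C"] and not s["B"],
    "B": lambda s: s["A"],
    "C": lambda s: not s["B"],
}
genes = list(rules)

def step(state):
    """Synchronous update scheme: all genes are updated simultaneously."""
    current = dict(zip(genes, state))
    return tuple(rules[g](current) for g in genes)

def attractors():
    found = set()
    for start in product([False, True], repeat=len(genes)):
        seen, state = {}, start
        while state not in seen:          # follow the trajectory until a state repeats
            seen[state] = len(seen)
            state = step(state)
        cycle_start = seen[state]
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        found.add(cycle)                  # the repeating tail is the attractor
    return found

for att in attractors():
    print("attractor:", [dict(zip(genes, s)) for s in att])
```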
Granqvist, Pehr; Sroufe, L Alan; Dozier, Mary; Hesse, Erik; Steele, Miriam; van Ijzendoorn, Marinus; Solomon, Judith; Schuengel, Carlo; Fearon, Pasco; Bakermans-Kranenburg, Marian; Steele, Howard; Cassidy, Jude; Carlson, Elizabeth; Madigan, Sheri; Jacobvitz, Deborah; Foster, Sarah; Behrens, Kazuko; Rifkin-Graboi, Anne; Gribneau, Naomi; Spangler, Gottfried; Ward, Mary J; True, Mary; Spieker, Susan; Reijman, Sophie; Reisz, Samantha; Tharner, Anne; Nkara, Frances; Goldwyn, Ruth; Sroufe, June; Pederson, David; Pederson, Deanne; Weigand, Robert; Siegel, Daniel; Dazzi, Nino; Bernard, Kristin; Fonagy, Peter; Waters, Everett; Toth, Sheree; Cicchetti, Dante; Zeanah, Charles H; Lyons-Ruth, Karlen; Main, Mary; Duschinsky, Robbie
2017-12-01
Disorganized/Disoriented (D) attachment has seen widespread interest from policy makers, practitioners, and clinicians in recent years. However, some of this interest seems to have been based on some false assumptions that (1) attachment measures can be used as definitive assessments of the individual in forensic/child protection settings and that disorganized attachment (2) reliably indicates child maltreatment, (3) is a strong predictor of pathology, and (4) represents a fixed or static "trait" of the child, impervious to development or help. This paper summarizes the evidence showing that these four assumptions are false and misleading. The paper reviews what is known about disorganized infant attachment and clarifies the implications of the classification for clinical and welfare practice with children. In particular, the difference between disorganized attachment and attachment disorder is examined, and a strong case is made for the value of attachment theory for supportive work with families and for the development and evaluation of evidence-based caregiving interventions.
Performance enhancement of various real-time image processing techniques via speculative execution
NASA Astrophysics Data System (ADS)
Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.
1996-03-01
In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
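The control-flow speculation described above can be mimicked at a coarse grain with a thread pool: start the branch that the prediction favors before its guard condition has been evaluated, and discard (roll back) the speculative result if the assumption turns out to be wrong. The functions below are made-up placeholders, not the paper's image-processing kernels.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def cheap_guard(tile):
    """Guard condition, e.g. 'is this tile mostly low-intensity background?'"""
    return float(np.mean(np.abs(tile))) < 0.1

def heavy_branch(tile):
    """Expensive path speculatively assumed to be needed (placeholder work)."""
    return float(np.sqrt(np.abs(tile)).sum())

def light_branch(tile):
    return float(tile.sum())

def process_tile(tile, pool):
    # Speculate: the prediction says the heavy branch will be taken, so it is
    # started before the guard condition has been evaluated.
    speculative = pool.submit(heavy_branch, tile)
    guard = cheap_guard(tile)             # evaluated concurrently with the speculation
    if guard:
        return speculative.result()       # assumption held: keep the speculative work
    speculative.cancel()                  # assumption failed: roll back / discard it
    return light_branch(tile)

rng = np.random.default_rng(3)
with ThreadPoolExecutor(max_workers=2) as pool:
    tiles = [rng.normal(scale=0.05, size=(64, 64)), rng.normal(scale=1.0, size=(64, 64))]
    print([process_tile(t, pool) for t in tiles])
```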
AIC and the challenge of complexity: A case study from ecology.
Moll, Remington J; Steel, Daniel; Montgomery, Robert A
2016-12-01
Philosophers and scientists alike have suggested that Akaike's Information Criterion (AIC), and other similar model selection methods, show that predictive accuracy justifies a preference for simplicity in model selection. This epistemic justification of simplicity is limited by an assumption of AIC which requires that the same probability distribution must generate both the data used to fit the model and the data about which predictions are made. This limitation has been previously noted but appears to often go unnoticed by philosophers and scientists, and it has not been analyzed in relation to complexity. If predictions are about future observations, we argue that this assumption is unlikely to hold for models of complex phenomena. That in turn creates a practical limitation for simplicity's AIC-based justification, because scientists modeling such phenomena are often interested in predicting the future. We support our argument with an ecological case study concerning the reintroduction of wolves into Yellowstone National Park, U.S.A. We suggest that AIC might still lend epistemic support for simplicity by leading to better explanations of complex phenomena. Copyright © 2016 Elsevier Ltd. All rights reserved.
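For concreteness, AIC-based comparison of a simple and a more complex model fitted to the same data takes only a few lines; the data here are simulated, and the point about the criterion's assumption is that both the fitting data and the prediction target are taken to come from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
x = np.linspace(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(scale=2.0, size=n)   # simulated data from a linear truth

def aic_polynomial(x, y, degree):
    """Gaussian AIC for a least-squares polynomial fit:
    AIC = n*ln(RSS/n) + 2k, with k counting coefficients plus the error variance."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = degree + 2
    return n * np.log(rss / n) + 2 * k

for degree in (1, 2, 5):
    print(f"degree {degree}: AIC = {aic_polynomial(x, y, degree):.1f}")
```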
Walmsley, Christopher W.; McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
Identity-Based Verifiably Encrypted Signatures without Random Oracles
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wu, Qianhong; Qin, Bo
Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.
Burden of suicide in Poland in 2012: how could it be measured and how big is it?
Orlewska, Katarzyna; Orlewska, Ewa
2018-04-01
The aim of our study was to estimate the health-related and economic burden of suicide in Poland in 2012 and to demonstrate the effects of using different assumptions on the disease burden estimation. Years of life lost (YLL) were calculated by multiplying the number of deaths by the remaining life expectancy. Local expected YLL (LEYLL) and standard expected YLL (SEYLL) were computed using Polish life expectancy tables and WHO standards, respectively. In the base case analysis LEYLL and SEYLL were computed with 3.5 and 0% discount rates, respectively, and no age-weighting. Premature mortality costs were calculated using a human capital approach, with discounting at 5%, and are reported in Polish zloty (PLN) (1 euro = 4.3 PLN). The impact of applying different assumptions on base-case estimates was tested in sensitivity analyses. The total LEYLLs and SEYLLs due to suicide were 109,338 and 279,425, respectively, with 88% attributable to male deaths. The cost of male premature mortality (2,808,854,532 PLN) was substantially higher than for females (177,852,804 PLN). Discounting and age-weighting have a large effect on the base case estimates of LEYLLs. The greatest impact on the estimates of suicide-related premature mortality costs was due to the value of the discount rate. Our findings provide quantitative evidence on the burden of suicide. In our opinion each of the demonstrated methods brings something valuable to the evaluation of the impact of suicide on a given population, but LEYLLs and premature mortality costs estimated according to national guidelines have the potential to be useful for local public health policymakers.
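The YLL computation with discounting (and no age-weighting) reduces to a single formula per death; the sketch below uses the continuous-discounting form common in burden-of-disease work, with illustrative inputs rather than the study's Polish life-table values.

```python
import math

def yll(deaths, life_expectancy, discount_rate=0.0):
    """Years of life lost for `deaths` deaths at a given remaining life expectancy.
    With discount rate r > 0, the undiscounted deaths * L becomes
    deaths * (1 - exp(-r * L)) / r (continuous discounting, no age-weighting)."""
    if discount_rate == 0.0:
        return deaths * life_expectancy
    return deaths * (1.0 - math.exp(-discount_rate * life_expectancy)) / discount_rate

# Illustrative inputs only (not Polish life-table or registry values).
deaths, remaining_le = 4000, 35.0
print(f"undiscounted YLL: {yll(deaths, remaining_le):.0f}")
print(f"YLL at 3.5% discount: {yll(deaths, remaining_le, 0.035):.0f}")
```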
Fuel Cycle Analysis Framework Base Cases for the IAEA/INPRO GAINS Collaborative Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brent Dixon
Thirteen countries participated in the Collaborative Project GAINS “Global Architecture of Innovative Nuclear Energy Systems Based on Thermal and Fast Reactors Including a Closed Fuel Cycle”, which was the primary activity within the IAEA/INPRO Program Area B: “Global Vision on Sustainable Nuclear Energy” for the last three years. The overall objective of GAINS was to develop a standard framework for assessing future nuclear energy systems taking into account sustainable development, and to validate results through sample analyses. This paper details the eight scenarios that constitute the GAINS framework base cases for analysis of the transition to future innovative nuclear energy systems. The framework base cases provide a reference for users of the framework to start from in developing and assessing their own alternate systems. Each base case is described along with performance results against the GAINS sustainability evaluation metrics. The eight cases include four using a moderate growth projection and four using a high growth projection for global nuclear electricity generation through 2100. The cases are divided into two sets, addressing homogeneous and heterogeneous scenarios developed by GAINS to model global fuel cycle strategies. The heterogeneous world scenario considers three separate nuclear groups based on their fuel cycle strategies, with non-synergistic and synergistic cases. The framework base case analysis results show the impact of these different fuel cycle strategies while providing references for future users of the GAINS framework. A large number of scenario alterations are possible and can be used to assess different strategies, different technologies, and different assumptions about possible futures of nuclear power. Results can be compared to the framework base cases to assess where these alternate cases perform differently versus the sustainability indicators.
NASA Astrophysics Data System (ADS)
Sisk-Hilton, Stephanie Lee
This study examines the two way relationship between an inquiry-based professional development model and teacher enactors. The two year study follows a group of teachers enacting the emergent Supporting Knowledge Integration for Inquiry Practice (SKIIP) professional development model. This study seeks to: (a) identify activity structures in the model that interact with teachers' underlying assumptions regarding professional development and inquiry learning; (b) explain key decision points during implementation in terms of these underlying assumptions; and (c) examine the impact of key activity structures on individual teachers' stated belief structures regarding inquiry learning. Linn's knowledge integration framework facilitates description and analysis of teacher development. Three sets of tensions emerge as themes that describe and constrain participants' interaction with and learning through the model. These are: learning from the group vs. learning on one's own; choosing and evaluating evidence based on impressions vs. specific criteria; and acquiring new knowledge vs. maintaining feelings of autonomy and efficacy. In each of these tensions, existing group goals and operating assumptions initially fell at one end of the tension, while the professional development goals and forms fell at the other. Changes to the model occurred as participants reacted to and negotiated these points of tension. As the group engaged in and modified the SKIIP model, they had repeated opportunities to articulate goals and to make connections between goals and model activity structures. Over time, decisions to modify the model took into consideration an increasingly complex set of underlying assumptions and goals. Teachers identified and sought to balance these tensions. This led to more complex and nuanced decision making, which reflected growing capacity to consider multiple goals in choosing activity structures to enact. The study identifies key activity structures that scaffolded this process for teachers, and which ultimately promoted knowledge integration at both the group and individual levels. This study is an "extreme case" which examines implementation of the SKIIP model under very favorable conditions. Lessons learned regarding appropriate levels of model responsiveness, likely areas of conflict between model form and teacher underlying assumptions, and activity structures that scaffold knowledge integration provide a starting point for future, larger scale implementation.
Missing data and multiple imputation in clinical epidemiological research
Pedersen, Alma B; Mikkelsen, Ellen M; Cronin-Fenton, Deirdre; Kristensen, Nickolaj R; Pham, Tra My; Pedersen, Lars; Petersen, Irene
2017-01-01
Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data. PMID:28352203
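As a hedged sketch of the multiple-imputation idea summarized above (not the authors' code), the example below uses scikit-learn's IterativeImputer with posterior sampling to generate several completed datasets under a MAR-type assumption, refits the same regression on each, and averages the coefficients in the spirit of Rubin's rules; the data and variable names are invented.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Invented data: exposure, confounder, outcome; exposure partly missing depending on the confounder (MAR-like)
n = 500
confounder = rng.normal(size=n)
exposure = 0.5 * confounder + rng.normal(size=n)
outcome = 1.0 + 0.8 * exposure + 0.3 * confounder + rng.normal(size=n)
exposure[rng.random(n) < 0.3 * (confounder > 0)] = np.nan  # missingness depends only on observed data

X = np.column_stack([exposure, confounder, outcome])

m = 10  # number of imputed datasets
coefs = []
for i in range(m):
    # sample_posterior=True adds between-imputation variability, as in proper multiple imputation
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    Xi = imputer.fit_transform(X)
    model = LinearRegression().fit(Xi[:, [0, 1]], Xi[:, 2])
    coefs.append(model.coef_[0])

print("pooled exposure coefficient:", np.mean(coefs))
```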
Problems in the Definition, Interpretation, and Evaluation of Genetic Heterogeneity
Whittemore, Alice S.; Halpern, Jerry
2001-01-01
Suppose that we wish to classify families with multiple cases of disease into one of three categories: those that segregate mutations of a gene of interest, those which segregate mutations of other genes, and those whose disease is due to nonhereditary factors or chance. Among families in the first two categories (the hereditary families), we wish to estimate the proportion, p, of families that segregate mutations of the gene of interest. Although this proportion is a commonly accepted concept, it is well defined only with an unambiguous definition of “family.” Even then, extraneous factors such as family sizes and structures can cause p to vary across different populations and, within a population, to be estimated differently by different studies. Restrictive assumptions about the disease are needed, in order to avoid this undesirable variation. The assumptions require that mutations of all disease-causing genes (i) have no effect on family size, (ii) have very low frequencies, and (iii) have penetrances that satisfy certain constraints. Despite the unverifiability of these assumptions, linkage studies often invoke them to estimate p, using the admixture likelihood introduced by Smith and discussed by Ott. We argue against this common practice, because (1) it also requires the stronger assumption of equal penetrances for all etiologically relevant genes; (2) even if all assumptions are met, estimates of p are sensitive to misspecification of the unknown phenocopy rate; (3) even if all the necessary assumptions are met and the phenocopy rate is correctly specified, estimates of p that are obtained by linkage programs such as HOMOG and GENEHUNTER are based on the wrong likelihood and therefore are biased in the presence of phenocopies. We show how to correct these estimates; but, nevertheless, we do not recommend the use of parametric heterogeneity models in linkage analysis, even merely as a tool for increasing the statistical power to detect linkage. This is because the assumptions required by these models cannot be verified, and their violation could actually decrease power. Instead, we suggest that estimation of p be postponed until the relevant genes have been identified. Then their frequencies and penetrances can be estimated on the basis of population-based samples and can be used to obtain more-robust estimates of p for specific populations. PMID:11170893
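To make the admixture likelihood referred to above concrete, here is a minimal sketch of Smith's mixture form, in which each family contributes p·L(linked) + (1−p)·L(unlinked) and p is estimated by maximizing the summed log-likelihood over a grid; the per-family likelihoods are invented numbers, and this is not the HOMOG or GENEHUNTER implementation.

```python
import numpy as np

# Invented per-family likelihoods under the "linked" and "unlinked" hypotheses
L_linked = np.array([0.9, 0.7, 0.05, 0.8, 0.02, 0.6])
L_unlinked = np.array([0.1, 0.2, 0.5, 0.15, 0.6, 0.3])

def admixture_loglik(p):
    # Smith's admixture likelihood: each family is linked with probability p
    return np.sum(np.log(p * L_linked + (1.0 - p) * L_unlinked))

grid = np.linspace(0.001, 0.999, 999)
p_hat = grid[np.argmax([admixture_loglik(p) for p in grid])]
print("estimated proportion of linked families p:", round(p_hat, 3))
```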
Of mental models, assumptions and heuristics: The case of acids and acid strength
NASA Astrophysics Data System (ADS)
McClary, Lakeisha Michelle
This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data was analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another model. An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.
Cameron E. Naficy; Eric G. Keeling; Peter Landres; Paul F. Hessburg; Thomas T. Veblen; Anna Sala
2016-01-01
Changes in the climate and in key ecological processes are prompting increased debate about ecological restoration and other interventions in wilderness. The prospect of intervention in wilderness raises legal, scientific, and values-based questions about the appropriateness of possible actions. In this article, we focus on the role of science to elucidate the...
Reinharz, S; Mester, R
1978-01-01
The action assumptions which characterize and differentiate cultures affect the creation and functioning of their institutions. Using this analytic framework, the development of a community mental health center in Israel reflects a culture which contains both pioneering and bureaucratic action assumptions. The effects of these assumptions on staff interventions in community problems are traced. Finally, various dimensions of the emerging definition of community mental health practice in Israel are discussed and their problematic features identified.
Monte Carlo simulations within avalanche rescue
NASA Astrophysics Data System (ADS)
Reiweger, Ingrid; Genswein, Manuel; Schweizer, Jürg
2016-04-01
Refining concepts for avalanche rescue involves calculating suitable settings for rescue strategies such as an adequate probing depth for probe line searches or an optimal time for performing resuscitation for a recovered avalanche victim in case of additional burials. In the latter case, treatment decisions have to be made in the context of triage. However, given the low number of incidents it is rarely possible to derive quantitative criteria based on historical statistics in the context of evidence-based medicine. For these rare, but complex rescue scenarios, most of the associated concepts, theories, and processes involve a number of unknown "random" parameters which have to be estimated in order to calculate anything quantitatively. An obvious approach for incorporating a number of random variables and their distributions into a calculation is to perform a Monte Carlo (MC) simulation. We here present Monte Carlo simulations for calculating the most suitable probing depth for probe line searches depending on search area and an optimal resuscitation time in case of multiple avalanche burials. The MC approach reveals, e.g., new optimized values for the duration of resuscitation that differ from previous, mainly case-based assumptions.
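A minimal Monte Carlo sketch in the spirit of the probing-depth question described above; the burial-depth distribution and the 95% coverage target are invented assumptions, not the parameters used by the authors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed (invented) burial-depth distribution in metres
burial_depth = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def coverage(probe_depth):
    """Fraction of simulated burials reachable with the given probing depth."""
    return np.mean(burial_depth <= probe_depth)

# Smallest probing depth that reaches, say, 95% of the simulated victims
candidate_depths = np.arange(0.5, 4.01, 0.05)
probe_depth_95 = next(d for d in candidate_depths if coverage(d) >= 0.95)
print(f"probing depth covering 95% of simulated burials: {probe_depth_95:.2f} m")
```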
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...
5 CFR 841.405 - Economic assumptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Economic assumptions. 841.405 Section 841... (CONTINUED) FEDERAL EMPLOYEES RETIREMENT SYSTEM-GENERAL ADMINISTRATION Government Costs § 841.405 Economic assumptions. The determinations of the normal cost percentage will be based on the economic assumptions...
Adopting Internet Standards for Orbital Use
NASA Technical Reports Server (NTRS)
Wood, Lloyd; Ivancic, William; da Silva Curiel, Alex; Jackson, Chris; Stewart, Dave; Shell, Dave; Hodgson, Dave
2005-01-01
After a year of testing and demonstrating a Cisco mobile access router intended for terrestrial use onboard the low-Earth-orbiting UK-DMC satellite as part of a larger merged ground/space IP-based internetwork, we reflect on and discuss the benefits and drawbacks of integration and standards reuse for small satellite missions. Benefits include ease of operation and the ability to leverage existing systems and infrastructure designed for general use, as well as reuse of existing, known, and well-understood security and operational models. Drawbacks include cases where integration work was needed to bridge the gaps in assumptions between different systems, and where performance considerations outweighed the benefits of reuse of pre-existing file transfer protocols. We find similarities with the terrestrial IP networks whose technologies we have adopted and also some significant differences in operational models and assumptions that must be considered.
An Azulene-Based Discovery Experiment: Challenging Students to Watch for the "False Assumption"
ERIC Educational Resources Information Center
Garner, Charles M.
2005-01-01
A discovery-based experiment is developed depending on a "false assumption" that the students mistakenly assume they know the structure of a reaction product and are forced to reconcile observations that are inconsistent with this assumption. This experiment involves the chemistry of azulenes, an interesting class of intensely colored aromatic…
Cost-effectiveness in fall prevention for older women.
Hektoen, Liv F; Aas, Eline; Lurås, Hilde
2009-08-01
The aim of this study was to estimate the cost-effectiveness of implementing an exercise-based fall prevention programme for home-dwelling women in the ≥ 80-year age group in Norway. The impact of the home-based individual exercise programme on the number of falls is based on a New Zealand study. On the basis of the cost estimates and the estimated reduction in the number of falls obtained with the chosen programme, we calculated the incremental costs and the incremental effect of the exercise programme as compared with no prevention. The calculation of the average healthcare cost of falling was based on assumptions regarding the distribution of fall injuries reported in the literature, four constructed representative case histories, assumptions regarding healthcare provision associated with the treatment of the specified cases, and estimated unit costs from Norwegian cost data. We calculated the average healthcare costs per fall for the first year. We found that the reduction in healthcare costs per individual for treating fall-related injuries was 1.85 times higher than the cost of implementing a fall prevention programme. The reduction in healthcare costs more than offset the cost of the prevention programme for women aged ≥ 80 years living at home, which indicates that health authorities should increase their focus on prevention. The main intention of this article is to stipulate costs connected to falls among the elderly in a transparent way and visualize the whole cost picture. Cost-effectiveness analysis is a health policy tool that makes politicians and other makers of health policy conscious of this complexity.
Misleading Theoretical Assumptions in Hypertext/Hypermedia Research.
ERIC Educational Resources Information Center
Tergan, Sigmar-Olaf
1997-01-01
Reviews basic theoretical assumptions of research on learning with hypertext/hypermedia. Focuses on whether the results of research on hypertext/hypermedia-based learning support these assumptions. Results of empirical studies and theoretical analysis reveal that many research approaches have been misled by inappropriate theoretical assumptions on…
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than those based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and also the optimal parameter values were slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
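As a hedged sketch of the AR(2) residual treatment described above (not the authors' code), the function below evaluates the conditional Gaussian log-likelihood of a residual series under a second-order autoregressive error model; in an inverse-modelling setting this term would replace the usual independent-error likelihood inside the MCMC sampler.

```python
import numpy as np

def ar2_loglik(residuals, phi1, phi2, sigma):
    """Conditional log-likelihood of residuals under an AR(2) error model:
    r_t = phi1*r_{t-1} + phi2*r_{t-2} + e_t,  e_t ~ N(0, sigma^2)."""
    r = np.asarray(residuals, dtype=float)
    innovations = r[2:] - phi1 * r[1:-1] - phi2 * r[:-2]
    n = innovations.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.sum(innovations**2) / sigma**2)

# Invented, artificially autocorrelated residual series for illustration
rng = np.random.default_rng(1)
res = np.convolve(rng.normal(size=300), np.ones(5) / 5.0, mode="same")
print(ar2_loglik(res, phi1=0.6, phi2=-0.1, sigma=res.std()))
```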
Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry
Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.
2015-01-01
Objective Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
Cooperation, psychological game theory, and limitations of rationality in social interaction.
Colman, Andrew M
2003-04-01
Rational choice theory enjoys unprecedented popularity and influence in the behavioral and social sciences, but it generates intractable problems when applied to socially interactive decisions. In individual decisions, instrumental rationality is defined in terms of expected utility maximization. This becomes problematic in interactive decisions, when individuals have only partial control over the outcomes, because expected utility maximization is undefined in the absence of assumptions about how the other participants will behave. Game theory therefore incorporates not only rationality but also common knowledge assumptions, enabling players to anticipate their co-players' strategies. Under these assumptions, disparate anomalies emerge. Instrumental rationality, conventionally interpreted, fails to explain intuitively obvious features of human interaction, yields predictions starkly at variance with experimental findings, and breaks down completely in certain cases. In particular, focal point selection in pure coordination games is inexplicable, though it is easily achieved in practice; the intuitively compelling payoff-dominance principle lacks rational justification; rationality in social dilemmas is self-defeating; a key solution concept for cooperative coalition games is frequently inapplicable; and rational choice in certain sequential games generates contradictions. In experiments, human players behave more cooperatively and receive higher payoffs than strict rationality would permit. Orthodox conceptions of rationality are evidently internally deficient and inadequate for explaining human interaction. Psychological game theory, based on nonstandard assumptions, is required to solve these problems, and some suggestions along these lines have already been put forward.
Effects of rotational symmetry breaking in polymer-coated nanopores
NASA Astrophysics Data System (ADS)
Osmanović, D.; Kerr-Winter, M.; Eccleston, R. C.; Hoogenboom, B. W.; Ford, I. J.
2015-01-01
The statistical theory of polymers tethered around the inner surface of a cylindrical channel has traditionally employed the assumption that the equilibrium density of the polymers is independent of the azimuthal coordinate. However, simulations have shown that this rotational symmetry can be broken when there are attractive interactions between the polymers. We investigate the phases that emerge in these circumstances, and we quantify the effect of the symmetry assumption on the phase behavior of the system. In the absence of this assumption, one can observe large differences in the equilibrium densities between the rotationally symmetric case and the non-rotationally symmetric case. A simple analytical model is developed that illustrates the driving thermodynamic forces responsible for this symmetry breaking. Our results have implications for the current understanding of the behavior of polymers in cylindrical nanopores.
Assumptions of Asian American Similarity: The Case of Filipino and Chinese American Students
ERIC Educational Resources Information Center
Agbayani-Siewert, Pauline
2004-01-01
The conventional research model of clustering ethnic groups into four broad categories risks perpetuating a pedagogy of stereotypes in social work policies and practice methods. Using an elaborated research model, this study tested the assumption of cultural similarity of Filipino and Chinese American college students by examining attitudes,…
Formalization and Analysis of Reasoning by Assumption
ERIC Educational Resources Information Center
Bosse, Tibor; Jonker, Catholijn M.; Treur, Jan
2006-01-01
This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically…
Samuels, W.B.
1982-01-01
An oilspill risk analysis was conducted for the South Atlantic (proposed sale 78) Outer Continental Shelf (OCS) lease area. The analysis considered the probability of spill occurrences based on historical trends; likely movement of oil slicks based on a climatological model; and locations of environmental resources which could be vulnerable to spilled oil. The times between spill occurrence and contact with resources were estimated to aid analysts in estimating slick characteristics. Critical assumptions made for this particular analysis were: (1) that oil exists in the lease area, (2) that either 0.228 billion (mean case) or 1.14 billion (high case) barrels of oil will be found and produced from tracts sold in sale 78, and (3) that all the oil will be found either in the northern or the southern portion of the lease area. On the basis of these resource estimates, it was estimated that 1 to 5 oilspills of 1,000 barrels or greater will occur over the 25 to 30-year production life of the proposed sale 78 tracts. The results also depend upon the routes and methods chosen to transport oil from OCS platforms to shore. Given the above assumptions, the estimated probability that one or more oilspills of 1,000 barrels or larger will occur and contact land after being at sea less than 30 days is less than 15 percent for all cases considered; for spills 10,000 barrels or larger, the probability is less than 10 percent. These probabilities also reflect the following assumptions: oilspills remain intact for up to 30 days, do not weather, and are not cleaned up. It is noteworthy that over 80 percent of the risk of oilspill occurrence from proposed sale 78 is due to transportation rather than production of oil. In addition, the risks of oilspill occurrence from proposed sale 78 (mean resource estimate) are less than one-tenth of the risks of existing tanker transportation of crude oil imports and refined products in the South Atlantic area.
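Spill-occurrence estimates of this kind are commonly built from a per-volume spill rate combined with a Poisson assumption; the sketch below is only a hedged illustration of that arithmetic with an invented rate, not the parameters or method of the sale 78 analysis.

```python
import math

# Invented illustrative rate: spills of >= 1,000 barrels per billion barrels of oil handled
spill_rate_per_bbbl = 1.3

for volume_bbbl, label in [(0.228, "mean case"), (1.14, "high case")]:
    lam = spill_rate_per_bbbl * volume_bbbl       # expected number of spills over the production life
    p_one_or_more = 1.0 - math.exp(-lam)          # Poisson P(N >= 1)
    print(f"{label}: expected spills = {lam:.2f}, P(>=1 spill) = {p_one_or_more:.2f}")
```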
NASA Technical Reports Server (NTRS)
Casper, Paul W.; Bent, Rodney B.
1991-01-01
The algorithm used in previous-generation time-of-arrival lightning mapping systems was based on the assumption that the earth is a perfect sphere. These systems yield highly-accurate lightning locations, which is their major strength. However, extensive analysis of tower strike data has revealed occasionally significant (one to two kilometer) systematic offset errors which are not explained by the usual error sources. It was determined that these systematic errors reduce dramatically (in some cases) when the oblate shape of the earth is taken into account. The oblate spheroid correction algorithm and a case example are presented.
New class of control laws for robotic manipulators. I - Nonadaptive case. II - Adaptive case
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1988-01-01
A new class of exponentially stabilizing control laws for joint level control of robot arms is discussed. Closed-loop exponential stability has been demonstrated for both the set point and tracking control problems by a slight modification of the energy Lyapunov function and the use of a lemma which handles third-order terms in the Lyapunov function derivatives. In the second part, these control laws are adapted in a simple fashion to achieve asymptotically stable adaptive control. The analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and uses a parameterization based on physical (time-invariant) quantities.
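The control laws themselves are specified in the paper; as a hedged stand-in, the sketch below simulates a single-link arm under a plain PD-plus-gravity-compensation set-point controller to illustrate the kind of joint-level convergence being analysed. The dynamics, gains, and parameters are invented, and this is not the authors' control law.

```python
import math

# Single-link arm dynamics: I*qdd + b*qd + m*g*l*sin(q) = tau
I, b, m, g, l = 1.0, 0.1, 1.0, 9.81, 0.5
Kp, Kd = 25.0, 10.0
q_des = 1.0                      # desired joint angle (rad)

q, qd, dt = 0.0, 0.0, 1e-3
for _ in range(int(5.0 / dt)):   # simulate 5 seconds with explicit Euler steps
    tau = Kp * (q_des - q) - Kd * qd + m * g * l * math.sin(q)  # PD + gravity compensation
    qdd = (tau - b * qd - m * g * l * math.sin(q)) / I
    qd += qdd * dt
    q += qd * dt

print(f"joint angle after 5 s: {q:.4f} rad (target {q_des} rad)")
```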
Using foresight methods to anticipate future threats: the case of disease management.
Ma, Sai; Seid, Michael
2006-01-01
We describe a unique foresight framework for health care managers to use in longer-term planning. This framework uses scenario-building to envision plausible alternate futures of the U.S. health care system and links those broad futures to business-model-specific "load-bearing" assumptions. Because the framework we describe simultaneously addresses very broad and very specific issues, it can be easily applied to a broad range of health care issues by using the broad framework and business-specific assumptions for the particular case at hand. We illustrate this method using the case of disease management, pointing out that although the industry continues to grow rapidly, its future also contains great uncertainties.
Buildings Lean Maintenance Implementation Model
NASA Astrophysics Data System (ADS)
Abreu, Antonio; Calado, João; Requeijo, José
2016-11-01
Nowadays, companies in global markets have to achieve high levels of performance and competitiveness to stay "alive". Under this assumption, building maintenance cannot be done in a casual and improvised way because of the costs involved. Starting with a discussion of lean management and building maintenance, this paper introduces a model to support the Lean Building Maintenance (LBM) approach. Finally, based on a real case study from a Portuguese company, the benefits, challenges and difficulties are presented and discussed.
DISENTANGLING THE ICL WITH THE CHEFs: ABELL 2744 AS A CASE STUDY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiménez-Teja, Y.; Dupke, R., E-mail: yojite@iaa.es
Measurements of the intracluster light (ICL) are still prone to methodological ambiguities, and there are multiple techniques in the literature to address them, mostly based on the binding energy, the local density distribution, or the surface brightness. A common issue with these methods is the a priori assumption of a number of hypotheses on either the ICL morphology, its surface brightness level, or some properties of the brightest cluster galaxy (BCG). The discrepancy in the results is high, and numerical simulations just place a boundary on the ICL fraction in present-day galaxy clusters in the range 10%–50%. We developed a new algorithm based on the Chebyshev–Fourier functions to estimate the ICL fraction without relying on any a priori assumption about the physical or geometrical characteristics of the ICL. We are able to not only disentangle the ICL from the galactic luminosity but mark out the limits of the BCG from the ICL in a natural way. We test our technique with the recently released data of the cluster Abell 2744, observed by the Frontier Fields program. The complexity of this multiple merging cluster system and the formidable depth of these images make it a challenging test case to prove the efficiency of our algorithm. We found a final ICL fraction of 19.17 ± 2.87%, which is very consistent with numerical simulations.
The impact of changing dental needs on cost savings from fluoridation.
Campain, A C; Mariño, R J; Wright, F A C; Harrison, D; Bailey, D L; Morgan, M V
2010-03-01
Although community water fluoridation has been one of the cornerstone strategies for the prevention and control of dental caries, questions are still raised regarding its cost-effectiveness. This study assessed the impact of changing dental needs on the cost savings from community water fluoridation in Australia. Net costs were estimated as Costs(programme) minus Costs(averted caries). Averted costs were estimated as the product of the caries increment in the non-fluoridated community, the effectiveness of fluoridation and the cost of a carious surface. Modelling considered four age-cohorts: 6-20, 21-45, 46-65 and 66+ years, and three time points: 1970s, 1980s, and 1990s. Cost of a carious surface was estimated by conventional and complex methods. Real discount rates (4%, 7% (base) and 10%) were utilized. With base-case assumptions, the average annual cost savings/person, using Australian dollars at the 2005 level, ranged from $56.41 (1970s) to $17.75 (1990s) (conventional method) and from $249.45 (1970s) to $69.86 (1990s) (complex method). Under worst-case assumptions fluoridation remained cost-effective, with cost savings ranging from $24.15 (1970s) to $3.87 (1990s) (conventional method) and $107.85 (1970s) and $24.53 (1990s) (complex method). For the 66+ years cohort (1990s) fluoridation did not show a cost saving, but costs/person were marginal. Community water fluoridation remains a cost-effective preventive measure in Australia.
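A hedged sketch of the net-cost arithmetic described above: averted costs are the product of caries increment, fluoridation effectiveness, and cost per carious surface, discounted and set against programme costs. All inputs below are invented placeholders, not the Australian estimates.

```python
def net_cost_per_person(annual_programme_cost, caries_increment, effectiveness,
                        cost_per_surface, discount_rate, years):
    """Net cost = discounted programme costs - discounted averted treatment costs (per person).
    A negative result means a net saving."""
    annual_averted = caries_increment * effectiveness * cost_per_surface
    discount_sum = sum(1.0 / (1 + discount_rate) ** t for t in range(1, years + 1))
    return (annual_programme_cost - annual_averted) * discount_sum

# Invented placeholder inputs, not the study's estimates
print(net_cost_per_person(annual_programme_cost=1.0, caries_increment=0.8,
                          effectiveness=0.35, cost_per_surface=150.0,
                          discount_rate=0.07, years=10))
```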
NASA Astrophysics Data System (ADS)
Efimov, Denis; Schiffer, Johannes; Ortega, Romeo
2016-05-01
Motivated by the problem of phase-locking in droop-controlled inverter-based microgrids with delays, the recently developed theory of input-to-state stability (ISS) for multistable systems is extended to the case of multistable systems with delayed dynamics. Sufficient conditions for ISS of delayed systems are presented using Lyapunov-Razumikhin functions. It is shown that ISS multistable systems are robust with respect to delays in a feedback. The derived theory is applied to two examples. First, the ISS property is established for the model of a nonlinear pendulum and delay-dependent robustness conditions are derived. Second, it is shown that, under certain assumptions, the problem of phase-locking analysis in droop-controlled inverter-based microgrids with delays can be reduced to the stability investigation of the nonlinear pendulum. For this case, corresponding delay-dependent conditions for asymptotic phase-locking are given.
Evaluation of alternative future energy scenarios for Brazil using an energy mix model
NASA Astrophysics Data System (ADS)
Coelho, Maysa Joppert
The purpose of this study is to model and assess the performance and the emissions impacts of electric energy technologies in Brazil, based on selected economic scenarios, for a time frame of 40 years, taking the year of 1995 as a base year. A Base scenario has been developed, for each of three economic development projections, based upon a sectoral analysis. Data regarding the characteristics of over 300 end-use technologies and 400 energy conversion technologies have been collected. The stand-alone MARKAL technology-based energy-mix model, first developed at Brookhaven National Laboratory, was applied to a base case study and five alternative case studies, for each economic scenario. The alternative case studies are: (1) minimum increase in the thermoelectric contribution to the power production system of 20 percent after 2010; (2) extreme values for crude oil price; (3) minimum increase in the renewable technologies contribution to the power production system of 20 percent after 2010; (4) uncertainty in the cost of future renewable conversion technologies; and (5) the model is forced to use the natural gas plants committed to be built in the country. Results such as the distribution of fuel used for power generation, electricity demand across economy sectors, total CO2 emissions from burning fossil fuels for power generation, shadow price (marginal cost) of technologies, and others, are evaluated and compared to the Base scenarios previously established. Among the key findings regarding the Brazilian energy system, it may be inferred that: (1) diesel technologies are estimated to be the most cost-effective thermal technology in the country; (2) wind technology is estimated to be the most cost-effective technology to be used when a minimum share of renewables is imposed on the system; and (3) hydroelectric technologies present the highest cost/benefit relation among all conversion technologies considered. These results are subject to the limitations of key input assumptions and key assumptions of the modeling framework, and are used as the basis for recommendations regarding energy development priorities for Brazil.
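MARKAL itself is a large technology-rich optimization model; as a hedged toy analogue of the energy-mix idea, the sketch below uses scipy.optimize.linprog to choose a least-cost generation mix subject to a demand constraint and a CO2 cap. All technology names, costs, emission factors, and capacity limits are invented.

```python
from scipy.optimize import linprog

# Invented toy data: costs per TWh generated, emissions in MtCO2/TWh, capacities in TWh
techs  = ["hydro", "gas", "wind"]
cost   = [30.0, 60.0, 55.0]                    # objective coefficients (minimize total cost)
A_ub   = [[0.0, 0.40, 0.0]]                    # total CO2 emissions...
b_ub   = [40.0]                                # ...must not exceed the cap (MtCO2)
A_eq   = [[1.0, 1.0, 1.0]]                     # total generation...
b_eq   = [450.0]                               # ...must equal demand (TWh)
bounds = [(0, 350.0), (0, 200.0), (0, 120.0)]  # per-technology capacity limits

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
for name, twh in zip(techs, res.x):
    print(f"{name}: {twh:.0f} TWh")
print("total cost (arbitrary units):", res.fun)
```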
The effect of induced abortion on the incidence of Down's syndrome in Hawaii.
Smith, R G; Gardner, R W; Steinhoff, P; Chung, C S; Palmore, J A
1980-01-01
There was a decrease in the recorded number of cases and in the incidence rate of Down's syndrome in Hawaii between 1963-1969 and 1971-1977. Independent of all other factors, induced abortion accounted for 43 percent of the decline in the number of cases, based on the assumption that a substantial number of clandestine abortions were being performed in Hawaii before the 1970 legalization of abortion. However, if we assume that very few illegal abortions were performed prior to 1970, there would have been an actual 3.5 percent increase in the number of cases of Down's syndrome in the absence of legal abortion. Declining pregnancy rates and decreasing age-specific incidence rates of Down's syndrome also contributed to the drop in the number of cases between 1963-1969 and 1971-1977.
2013-01-01
Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of error can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is coming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduces mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. Considering the application to the data from the clinical study, the MTU-PF shows a similar performance with respect to the quality of estimated parameters compared with the standard particle filter, but besides that, the MTU algorithm proves to be less prone to degeneracy than the standard particle filter. PMID:23331521
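A minimal bootstrap particle filter sketch for a scalar decay model, with a per-particle jitter on the nominal measurement times to mimic the measurement-time-uncertainty idea described above; this is an illustrative reconstruction under stated assumptions, not the authors' MTU-PF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated truth: exponential decay observed with noise at nominally known times
k_true, x0, obs_sigma, time_sigma = 0.3, 10.0, 0.4, 0.2
t_nominal = np.arange(1.0, 8.0)
t_actual = t_nominal + rng.normal(0.0, time_sigma, size=t_nominal.size)  # true sampling times differ
y = x0 * np.exp(-k_true * t_actual) + rng.normal(0.0, obs_sigma, size=t_nominal.size)

# Bootstrap particle filter over the unknown decay rate k, with per-particle jittered
# measurement times representing measurement-time uncertainty (MTU)
n_particles = 5000
k_particles = rng.uniform(0.05, 1.0, size=n_particles)

for t_nom, y_obs in zip(t_nominal, y):
    t_jitter = t_nom + rng.normal(0.0, time_sigma, size=n_particles)   # sample possible true times
    y_pred = x0 * np.exp(-k_particles * t_jitter)
    weights = np.exp(-0.5 * ((y_obs - y_pred) / obs_sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)          # resample
    k_particles = k_particles[idx] + rng.normal(0.0, 0.01, size=n_particles)  # jitter against degeneracy

print(f"true k = {k_true}, posterior mean k = {k_particles.mean():.3f}")
```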
Biostatistics Series Module 6: Correlation and Linear Regression
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ) may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous. PMID:27904175
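A hedged sketch of the quantities discussed above using SciPy on invented data: Pearson's r, Spearman's rho, r², and the least-squares regression line y = a + bx.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(50, 10, size=40)            # invented paired measurements
y = 0.6 * x + rng.normal(0, 8, size=40)

r, p_r = stats.pearsonr(x, y)              # linear association (assumes normality)
rho, p_rho = stats.spearmanr(x, y)         # rank correlation (no normality assumption)
reg = stats.linregress(x, y)               # least-squares line y = a + b*x

print(f"Pearson r = {r:.2f} (P = {p_r:.3g}), r^2 = {r**2:.2f}")
print(f"Spearman rho = {rho:.2f} (P = {p_rho:.3g})")
print(f"regression: y = {reg.intercept:.2f} + {reg.slope:.2f}*x")
```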
Acoustic Analogy and Alternative Theories for Jet Noise Prediction
NASA Technical Reports Server (NTRS)
Morris, Philip J.; Farassat, F.
2002-01-01
Several methods for the prediction of jet noise are described. All but one of the noise prediction schemes are based on Lighthill's or Lilley's acoustic analogy, whereas the other is the jet noise generation model recently proposed by Tam and Auriault. In all of the approaches, some assumptions must be made concerning the statistical properties of the turbulent sources. In each case the characteristic scales of the turbulence are obtained from a solution of the Reynolds-averaged Navier-Stokes equation using a k-epsilon turbulence model. It is shown that, for the same level of empiricism, Tam and Auriault's model yields better agreement with experimental noise measurements than the acoustic analogy. It is then shown that this result is not because of some fundamental flaw in the acoustic analogy approach, but instead is associated with the assumptions made in the approximation of the turbulent source statistics. If consistent assumptions are made, both the acoustic analogy and Tam and Auriault's model yield identical noise predictions. In conclusion, a proposal is presented for an acoustic analogy that provides a clearer identification of the equivalent source mechanisms, as is a discussion of noise prediction issues that remain to be resolved.
The Acoustic Analogy and Alternative Theories for Jet Noise Prediction
NASA Technical Reports Server (NTRS)
Morris, Philip J.; Farassat, F.
2002-01-01
This paper describes several methods for the prediction of jet noise. All but one of the noise prediction schemes are based on Lighthill's or Lilley's acoustic analogy while the other is the jet noise generation model recently proposed by Tam and Auriault. In all the approaches some assumptions must be made concerning the statistical properties of the turbulent sources. In each case the characteristic scales of the turbulence are obtained from a solution of the Reynolds-averaged Navier Stokes equation using a k-epsilon turbulence model. It is shown that, for the same level of empiricism, Tam and Auriault's model yields better agreement with experimental noise measurements than the acoustic analogy. It is then shown that this result is not because of some fundamental flaw in the acoustic analogy approach but, instead, is associated with the assumptions made in the approximation of the turbulent source statistics. If consistent assumptions are made, both the acoustic analogy and Tam and Auriault's model yield identical noise predictions. The paper concludes with a proposal for an acoustic analogy that provides a clearer identification of the equivalent source mechanisms and a discussion of noise prediction issues that remain to be resolved.
Chaugule, Shraddha; Graham, Claudia
2017-11-01
The aim was to evaluate the cost-effectiveness of real-time continuous glucose monitoring (CGM) compared to self-monitoring of blood glucose (SMBG) alone in people with type 1 diabetes (T1DM) using multiple daily injections (MDI) from the Canadian societal perspective. The IMS CORE Diabetes Model (v.9.0) was used to assess the long-term (50 years) cost-effectiveness of real-time CGM (G5 Mobile CGM System; Dexcom, Inc., San Diego, CA) compared with SMBG alone for a cohort of adults with poorly-controlled T1DM. Treatment effects and baseline characteristics of patients were derived from the DIAMOND randomized controlled clinical trial; all other assumptions and costs were sourced from published research. The analysis assumed that the accuracy and clinical effectiveness of the G5 Mobile CGM are the same as those of the G4 Platinum CGM used in the DIAMOND randomized clinical trial. Base case assumptions included (a) baseline HbA1c of 8.6%, (b) change in HbA1c of -1.0% for CGM users vs -0.4% for SMBG users, and (c) disutilities of -0.0142 for non-severe hypoglycemic events (NSHEs) and severe hypoglycemic events (SHEs) not requiring medical intervention, and -0.047 for SHEs requiring medical resources. Treatment costs and outcomes were discounted at 1.5% per year. The incremental cost-effectiveness ratio for the base case G5 Mobile CGM vs SMBG was $33,789 CAD/quality-adjusted life-year (QALY). Sensitivity analyses showed that base case results were most sensitive to changes in percentage reduction in hypoglycemic events and disutilities associated with hypoglycemic events. The base case results were minimally impacted by changes in baseline HbA1c level, incorporation of indirect costs, changes in the discount rate, and baseline utility of patients. The results of this analysis demonstrate that G5 Mobile CGM is cost-effective within the population of adults with T1DM using MDI, assuming a Canadian willingness-to-pay threshold of $50,000 CAD per QALY.
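The headline figure reduces to an incremental cost-effectiveness ratio compared against a willingness-to-pay threshold; the sketch below illustrates that arithmetic with invented per-patient cost and QALY totals, not the CORE Diabetes Model outputs.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented lifetime totals per patient (CAD, QALYs) - not the study's results
ratio = icer(cost_new=150_000, qaly_new=13.2, cost_old=120_000, qaly_old=12.4)
willingness_to_pay = 50_000  # CAD per QALY
print(f"ICER = {ratio:,.0f} CAD/QALY; cost-effective at WTP 50,000: {ratio <= willingness_to_pay}")
```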
Improving AACSB Assurance of Learning with Importance-Performance and Learning Growth: A Case Study
ERIC Educational Resources Information Center
Harvey, James W.; McCrohan, Kevin F.
2017-01-01
Two fallacious assumptions can mislead assurance of learning (AoL) loop closing. Association to Advance Collegiate Schools of Business guidance states that learning goals should reflect the outcomes most valued by the program, but evidence shows that schools assign equal priorities to the skills selected. The second false assumption is that…
ERIC Educational Resources Information Center
Jimenez, Laura M.; Meyer, Carla K.
2016-01-01
Graphic novels in the K-12 classroom are most often used to motivate marginalized readers because of the lower text load and assumption of easy reading. This assumption has thus far been unexplored by reading research. This qualitative multiple-case study utilized think-aloud protocols in a new attention-mapping activity to better understand how…
ERIC Educational Resources Information Center
Sant, Edda; Hanley, Chris
2018-01-01
Teacher education in England now requires that student teachers follow practices that do not undermine "fundamental British values" where these practices are assessed against a set of ethics and behaviour standards. This paper examines the political assumptions underlying pedagogical interpretations about the education of national…
ERIC Educational Resources Information Center
Mellone, Maria
2011-01-01
Assumptions about the construction and the transmission of knowledge and about the nature of mathematics always underlie any teaching practice, even if often unconsciously. I examine the conjecture that theoretical tools suitably chosen can help the teacher to make such assumptions explicit and to support the teacher's reflection on his/her…
7 CFR 1980.476 - Transfer and assumptions.
Code of Federal Regulations, 2010 CFR
2010-01-01
...-354 449-30 to recover its pro rata share of the actual loss at that time. In completing Form FmHA or... the lender on liquidations and property management. A. The State Director may approve all transfer and... Director will notify the Finance Office of all approved transfer and assumption cases on Form FmHA or its...
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
Spreading dynamics on complex networks: a general stochastic approach.
Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J
2014-12-01
Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
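For contrast with the motif-based approach described above, a brute-force node-level SIS simulation of the kind such frameworks aim to approximate at lower cost might look like the following sketch. The contact network, infection and recovery probabilities are hypothetical, and this is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p_edge = 200, 0.05
A = np.triu(rng.random((n, n)) < p_edge, k=1)
A = (A | A.T).astype(int)                  # undirected Erdos-Renyi contact network

beta, mu = 0.06, 0.2                       # per-contact infection and recovery probabilities
infected = rng.random(n) < 0.05            # initial seeds
for t in range(200):
    pressure = A @ infected.astype(int)    # number of infectious neighbours per node
    p_inf = 1.0 - (1.0 - beta) ** pressure # probability of at least one successful contact
    new_inf = (~infected) & (rng.random(n) < p_inf)
    recovered = infected & (rng.random(n) < mu)
    infected = (infected | new_inf) & ~recovered
print("endemic prevalence ≈", infected.mean())
```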
Determination of contact angle from the maximum height of enlarged drops on solid surfaces
NASA Astrophysics Data System (ADS)
Behroozi, F.
2012-04-01
Measurement of the liquid/solid contact angle provides useful information on the wetting properties of fluids. In 1870, the German physicist Georg Hermann Quincke (1834-1924) published the functional relation between the maximum height of an enlarged drop and its contact angle. Quincke's relation offered an alternative to the direct measurement of contact angle, which in practice suffers from several experimental uncertainties. In this paper, we review Quincke's original derivation and show that it is based on a hidden assumption. We then present a new derivation that exposes this assumption and clarifies the conditions under which Quincke's relation is valid. To explore Quincke's relation experimentally, we measure the maximum height of enlarged water drops on several substrates and calculate the contact angle in each case. Our results are in good agreement with contact angles measured directly from droplet images.
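For reference, the form of Quincke's relation usually quoted in the wetting literature links the maximum (puddle) height of a large drop to the contact angle through the capillary length; the paper's own derivation may use different conventions:

```latex
h_{\max} = 2\,\ell_c \sin\!\left(\frac{\theta}{2}\right),
\qquad
\ell_c = \sqrt{\frac{\gamma}{\rho g}},
\qquad\text{equivalently}\qquad
h_{\max}^{2} = \frac{2\gamma}{\rho g}\bigl(1 - \cos\theta\bigr),
```

where γ is the surface tension, ρ the liquid density, g the gravitational acceleration, and θ the contact angle.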
Smoothing of the bivariate LOD score for non-normal quantitative traits.
Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John
2005-12-30
Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem with univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtotic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.
Multidimensional stock network analysis: An Escoufier's RV coefficient approach
NASA Astrophysics Data System (ADS)
Lee, Gan Siew; Djauhari, Maman A.
2013-09-01
The current practice of stock network analysis is based on the assumption that the time series of closing stock prices can represent the behaviour of each stock. This assumption leads to the use of the minimal spanning tree (MST) and the sub-dominant ultrametric (SDU) as indispensable tools to filter the economic information contained in the network. Recently, there have been attempts in which researchers represent a stock not only as a univariate time series of closing price but as a bivariate time series of closing price and volume. In this case, they developed the so-called multidimensional MST to filter the important economic information. However, in this paper, we show that their approach is applicable only to that bivariate representation. This leads us to introduce a new methodology to construct an MST in which each stock is represented by a multivariate time series. An example from the Malaysian stock exchange is presented and discussed to illustrate the advantages of the method.
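A minimal sketch of the kind of construction described, assuming Escoufier's RV coefficient as the similarity between two multivariate (e.g. price and volume) series and a Mantegna-style distance d = sqrt(2(1 − RV)) before MST filtering. The data, the distance choice and the two-variable representation are illustrative assumptions, not necessarily the paper's exact recipe.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def rv_coefficient(X, Y):
    """Escoufier's RV between two column-centred multivariate series (time x vars)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Wx, Wy = X @ X.T, Y @ Y.T
    return np.trace(Wx @ Wy) / np.sqrt(np.trace(Wx @ Wx) * np.trace(Wy @ Wy))

# Hypothetical data: each "stock" is a (time x 2) series, e.g. price and volume.
rng = np.random.default_rng(0)
stocks = [rng.standard_normal((100, 2)) for _ in range(4)]
n = len(stocks)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        rv = rv_coefficient(stocks[i], stocks[j])
        dist[i, j] = np.sqrt(2.0 * (1.0 - rv))  # similarity turned into a distance
mst = minimum_spanning_tree(dist)               # keeps the n-1 strongest links
print(mst.toarray())
```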
Consequences of synergy between environmental carcinogens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berenbaum, M.C.
1985-12-01
As it is generally impossible to determine dose-response relationships for carcinogens at the low concentrations in which they occur in the environment, risk-benefit considerations are by consensus based on the linear, no-threshold model, on the assumption that this represents the worst case. However, this assumption does not take into account the possibility of synergistic interactions between carcinogens. It is shown here that, as a result of such interactions, the dose-response curve for added risk due to any individual carcinogen will generally be steeper at lower doses than at higher doses, and consequently the risk at low environmental levels will be higher than would be expected from a linear response. Moreover, this excess risk at low doses is shown to increase as the general level of environmental carcinogens rises and, independently of this effect, it may also increase with the number of carcinogens present.
A Managerial Approach to Compensation
ERIC Educational Resources Information Center
Wolfe, Arthur V.
1975-01-01
The article examines the major external forces constraining equitable employee compensation, sets forth the classical employee compensation assumptions, suggests somewhat more realistic employee compensation assumptions, and proposes guidelines based on analysis of these external constraints and assumptions. (Author)
The lack of selection bias in a snowball sampled case-control study on drug abuse.
Lopes, C S; Rodrigues, L C; Sichieri, R
1996-12-01
Friend controls in matched case-control studies can be a potential source of bias based on the assumption that friends are more likely to share exposure factors. This study evaluates the role of selection bias in a case-control study that used the snowball sampling method based on friendship for the selection of cases and controls. The cases selected for the study were drug abusers located in the community. Exposure was defined by the presence of at least one psychiatric diagnosis. Psychiatric and drug abuse/dependence diagnoses were made according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) criteria. Cases and controls were matched on sex, age and friendship. The measurement of selection bias was made through the comparison of the proportion of exposed controls selected by exposed cases (p1) with the proportion of exposed controls selected by unexposed cases (p2). If p1 = p2, then selection bias should not occur. The observed distribution of the 185 matched pairs having at least one psychiatric disorder showed a p1 value of 0.52 and a p2 value of 0.51, indicating no selection bias in this study. Our findings support the idea that the use of friend controls can produce a valid basis for a case-control study.
On the joint bimodality of temperature and moisture near stratocumulus cloud tops
NASA Technical Reports Server (NTRS)
Randall, D. A.
1983-01-01
The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm-dry population and a cool-moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. In case these two internal covariances vanish, the system of equations can be solved analytically.
From behavioural analyses to models of collective motion in fish schools
Lopez, Ugo; Gautrais, Jacques; Couzin, Iain D.; Theraulaz, Guy
2012-01-01
Fish schooling is a phenomenon of long-lasting interest in ethology and ecology, widely spread across taxa and ecological contexts, and has attracted much interest from statistical physics and theoretical biology as a case of self-organized behaviour. One topic of intense interest is the search for the specific behavioural mechanisms at play at the individual level from which the school properties emerge. This is fundamental for understanding how selective pressure acting at the individual level promotes adaptive properties of schools and for trying to disambiguate functional properties from non-adaptive epiphenomena. Decades of studies on collective motion by means of individual-based modelling have allowed a qualitative understanding of the self-organization processes leading to collective properties at the school level, and provided an insight into the behavioural mechanisms that result in coordinated motion. Here, we emphasize a set of paradigmatic modelling assumptions whose validity remains unclear, both from a behavioural point of view and in terms of quantitative agreement between model outcome and empirical data. We advocate for a specific and biologically oriented re-examination of these assumptions through experiment-based behavioural analysis and modelling. PMID:24312723
Dissecting disease entities out of the broad spectrum of bipolar-disorders.
Levine, Joseph; Toker, Lilach; Agam, Galila
2018-01-01
The etiopathology of bipolar disorders has yet to be unraveled, and new avenues should be pursued. One such avenue may be based on the assumption that the broad bipolar spectrum includes, among others, an array of rare medical disease entities. Towards this aim, we propose a dissecting approach based on a search for rare medical diseases with known etiopathology which also exhibit bipolar disorder symptomatology. We further suggest that the etiopathologic mechanisms underlying such rare medical diseases may also underlie a rare variant of bipolar disorder. Such an assumption may be further reinforced if both the rare medical disease and its bipolar clinical phenotype demonstrate a] a similar mode of inheritance (i.e., autosomal dominant); b] brain involvement; and c] data implicating that the etiopathological mechanisms underlying the rare diseases affect biological processes reported to be associated with bipolar disorders and their treatment. We exemplify our suggested approach by a rare case of autosomal dominant leucodystrophy, a disease entity exhibiting nuclear lamin B1 pathology and also presenting bipolar symptomatology. Copyright © 2017 Elsevier B.V. All rights reserved.
On a viable first-order formulation of relativistic viscous fluids and its applications to cosmology
NASA Astrophysics Data System (ADS)
Disconzi, Marcelo M.; Kephart, Thomas W.; Scherrer, Robert J.
We consider a first-order formulation of relativistic fluids with bulk viscosity based on a stress-energy tensor introduced by Lichnerowicz. Choosing a barotropic equation-of-state, we show that this theory satisfies basic physical requirements and, under the further assumption of vanishing vorticity, that the equations of motion are causal, both in the case of a fixed background and when the equations are coupled to Einstein's equations. Furthermore, Lichnerowicz's proposal does not fit into the general framework of first-order theories studied by Hiscock and Lindblom, and hence their instability results do not apply. These conclusions apply to the full-fledged nonlinear theory, without any equilibrium or near equilibrium assumptions. Similarities and differences between the approach explored here and other theories of relativistic viscosity, including the Mueller-Israel-Stewart formulation, are addressed. Cosmological models based on the Lichnerowicz stress-energy tensor are studied. As the topic of (relativistic) viscous fluids is also of interest outside the general relativity and cosmology communities, such as, for instance, in applications involving heavy-ion collisions, we make our presentation largely self-contained.
An improved rainfall disaggregation technique for GCMs
NASA Astrophysics Data System (ADS)
Onof, C.; Mackay, N. G.; Oh, L.; Wheater, H. S.
1998-08-01
Meteorological models represent rainfall as a mean value for a grid square so that when the latter is large, a disaggregation scheme is required to represent the spatial variability of rainfall. In general circulation models (GCMs) this is based on an assumption of exponentiality of rainfall intensities and a fixed value of areal rainfall coverage, dependent on rainfall type. This paper examines these two assumptions on the basis of U.K. and U.S. radar data. Firstly, the coverage of an area is strongly dependent on its size, and this dependence exhibits a scaling law over a range of sizes. Secondly, the coverage is, of course, dependent on the resolution at which it is measured, although this dependence is weak at high resolutions. Thirdly, the time series of rainfall coverages has a long-tailed autocorrelation function which is comparable to that of the mean areal rainfalls. It is therefore possible to reproduce much of the temporal dependence of coverages by using a regression of the log of the mean rainfall on the log of the coverage. The exponential assumption is satisfactory in many cases but not able to reproduce some of the long-tailed dependence of some intensity distributions. Gamma and lognormal distributions provide a better fit in these cases, but they have their shortcomings and require a second parameter. An improved disaggregation scheme for GCMs is proposed which incorporates the previous findings to allow the coverage to be obtained for any area and any mean rainfall intensity. The parameters required are given and some of their seasonal behavior is analyzed.
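The two checks described, the intensity-distribution assumption and the coverage/mean-rainfall relationship, can be illustrated with a short sketch on synthetic data. The distributions and regression below are placeholders for the radar-based analysis, not the paper's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
intensity = rng.gamma(shape=0.7, scale=3.0, size=1000)  # wet-pixel intensities (mm/h)

# Exponential vs gamma fit to the wet-pixel intensity distribution
rate = 1.0 / intensity.mean()                           # exponential MLE
shape, loc, scale = stats.gamma.fit(intensity, floc=0.0)
print("exponential rate:", rate, " gamma shape:", shape)

# Regression of log mean areal rainfall on log coverage (both hypothetical)
coverage = rng.uniform(0.05, 0.9, 200)
mean_rain = 2.0 * coverage ** 1.3 * rng.lognormal(0, 0.2, 200)
slope, intercept, r, p, se = stats.linregress(np.log(coverage), np.log(mean_rain))
print("log-log slope:", slope)
```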
Song, Fujian; Loke, Yoon K; Walsh, Tanya; Glenny, Anne-Marie; Eastwood, Alison J; Altman, Douglas G
2009-04-03
To investigate basic assumptions and other methodological problems in the application of indirect comparison in systematic reviews of competing healthcare interventions. Survey of published systematic reviews. Inclusion criteria: systematic reviews published between 2000 and 2007 in which an indirect approach had been explicitly used. Identified reviews were assessed for comprehensiveness of the literature search, method for indirect comparison, and whether assumptions about similarity and consistency were explicitly mentioned. The survey included 88 review reports. In 13 reviews, indirect comparison was informal. Results from different trials were naively compared without using a common control in six reviews. Adjusted indirect comparison was usually done using classic frequentist methods (n=49) or more complex methods (n=18). The key assumption of trial similarity was explicitly mentioned in only 40 of the 88 reviews. The consistency assumption was not explicit in most cases where direct and indirect evidence were compared or combined (18/30). Evidence from head to head comparison trials was not systematically searched for or not included in nine cases. Identified methodological problems were an unclear understanding of underlying assumptions, inappropriate search and selection of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. Adequate understanding of basic assumptions underlying indirect and mixed treatment comparison is crucial to resolve these methodological problems.
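The "classic frequentist" adjusted indirect comparison referred to here is typically the Bucher method, in which treatments A and B are contrasted through a common comparator C. A minimal sketch with hypothetical log odds ratios:

```python
import math

def bucher_indirect(d_AC, se_AC, d_BC, se_BC):
    # Adjusted indirect comparison of A vs B via a common comparator C,
    # on the log scale (e.g. log odds ratios); variances simply add.
    d_AB = d_AC - d_BC
    se_AB = math.sqrt(se_AC**2 + se_BC**2)
    ci = (d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB)
    return d_AB, se_AB, ci

# Hypothetical trial summaries: A vs C and B vs C log odds ratios with standard errors.
print(bucher_indirect(-0.40, 0.15, -0.10, 0.20))
```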
NASA Technical Reports Server (NTRS)
Mielonen, T.; Levy, R. C.; Aaltonen, V.; Komppula, M.; de Leeuw, G.; Huttunen, J.; Lihavainen, H.; Kolmonen, P.; Lehtinen, K. E. J.; Arola, A.
2011-01-01
Aerosol Optical Depth (AOD) and Angstrom exponent (AE) values derived with the MODIS retrieval algorithm over land (Collection 5) are compared with ground based sun photometer measurements at eleven sites spanning the globe. Although, in general, total AOD compares well at these sites (R2 values generally over 0.8), there are cases (from 2 to 67% of the measurements depending on the site) where MODIS clearly retrieves the wrong spectral dependence, and hence, an unrealistic AE value. Some of these poor AE retrievals are due to the aerosol signal being too small (total AOD<0.3) but in other cases the AOD should have been high enough to derive accurate AE. However, in these cases, MODIS indicates AE values close to 0.6 and zero fine model weighting (FMW), i.e., the dust model provides the best fit to the MODIS-observed reflectance. Yet, according to evidence from the collocated sun photometer measurements and back-trajectory analyses, there should be no dust present. This indicates that the assumptions about aerosol model and surface properties made by the MODIS algorithm may have been incorrect. Here we focus on problems related to the parameterization of the land-surface optical properties in the algorithm, in particular the relationship between the surface reflectance at 660 and 2130 nm.
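For orientation, the Angstrom exponent is computed from AOD at two wavelengths. The sketch below uses the standard two-wavelength definition with an assumed 470/660 nm pair, which may differ from the bands used in the cited comparison.

```python
import numpy as np

def angstrom_exponent(aod1, aod2, wl1=470.0, wl2=660.0):
    # AE from AOD at two wavelengths (nm); larger AE indicates finer particles.
    return -np.log(aod1 / aod2) / np.log(wl1 / wl2)

# Hypothetical sun-photometer style values
print(angstrom_exponent(0.45, 0.30))   # fine-mode dominated, AE around 1.2
print(angstrom_exponent(0.32, 0.30))   # coarse/dust-like, AE near zero
```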
Langholz, Bryan; Thomas, Duncan C.; Stovall, Marilyn; Smith, Susan A.; Boice, John D.; Shore, Roy E.; Bernstein, Leslie; Lynch, Charles F.; Zhang, Xinbo; Bernstein, Jonine L.
2009-01-01
Summary Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that just use cases with precise location of tumor information and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer. PMID:18647297
1982-01-01
have become highly sensitized to the potential long-term health and environmental effects of the so-called "toxic and hazardous chemicals," which...their assumption of the Defense Disposal mission. The PCB collection and disposal exercise will be on-going for several years. PCB disposal is by...imposed with respect to the disposal of materials at a time when their effects were largely unknown (and in many cases still are). Moreover, the
2012-09-01
this case, there is a price premium relative to globally least-cost purchases if such capabilities exist elsewhere and are being employed at a level of...operational sovereignty and security, and the technology areas where MOD would rely on international defence cooperation or open global technology...planning assumptions (i.e. future budgets) • What was required for retention in the UK industrial base • Overview of the global defence market
Robust stability of second-order systems
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1993-01-01
A feedback linearization technique is used in conjunction with passivity concepts to design robust controllers for space robots. It is assumed that bounded modeling uncertainties exist in the inertia matrix and the vector representing the coriolis, centripetal, and friction forces. Under these assumptions, the controller guarantees asymptotic tracking of the joint variables. A Lagrangian approach is used to develop a dynamic model for space robots. Closed-loop simulation results are illustrated for a simple case of a single link planar manipulator with freely floating base.
Smith, Leah M; Lévesque, Linda E; Kaufman, Jay S; Strumpf, Erin C
2017-06-01
The regression discontinuity design (RDD) is a quasi-experimental approach used to avoid confounding bias in the assessment of new policies and interventions. It is applied specifically in situations where individuals are assigned to a policy/intervention based on whether they are above or below a pre-specified cut-off on a continuously measured variable, such as birth date, income or weight. The strength of the design is that, provided individuals do not manipulate the value of this variable, assignment to the policy/intervention is considered as good as random for individuals close to the cut-off. Despite its popularity in fields like economics, the RDD remains relatively unknown in epidemiology where its application could be tremendously useful. In this paper, we provide a practical introduction to the RDD for health researchers, describe four empirically testable assumptions of the design and offer strategies that can be used to assess whether these assumptions are met in a given study. For illustrative purposes, we implement these strategies to assess whether the RDD is appropriate for a study of the impact of human papillomavirus vaccination on cervical dysplasia. We found that, whereas the assumptions of the RDD were generally satisfied in our study context, birth timing had the potential to confound our effect estimate in an unexpected way and therefore needed to be taken into account in the analysis. Our findings underscore the importance of assessing the validity of the assumptions of this design, testing them when possible and making adjustments as necessary to support valid causal inference. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association
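A minimal simulated sketch of the design, assuming a sharp cutoff and a local linear fit with separate slopes on each side of it; the data, bandwidth and functional form are illustrative and unrelated to the HPV vaccination study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(-1, 1, n)          # assignment variable, cutoff at 0
treated = (x >= 0).astype(float)   # e.g. born on or after an eligibility date
y = 0.5 * x + 0.3 * treated + rng.normal(0, 0.2, n)  # true jump at cutoff = 0.3

h = 0.2                            # bandwidth around the cutoff
m = np.abs(x) <= h
# Local linear regression with separate slopes on each side of the cutoff
X = np.column_stack([np.ones(m.sum()), treated[m], x[m], x[m] * treated[m]])
beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
print(f"estimated discontinuity at cutoff: {beta[1]:.3f}")
```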
Levi, Miriam; Ariani, Filippo; Baldasseroni, Alberto
2011-01-01
To introduce the concept of DALYs (Disability Adjusted Life Years), in order to calculate the burden of occupational injuries and to compare the disability weights methodology applied by the National Institute for Insurance against Accidents at Work (INAIL) to occupational injuries, with respect to the methodology adopted by the World Health Organization in the Global Burden of Disease Study (GBD), in order to facilitate, on a regional-national basis, the future application of estimates of Burden of Disease due to this phenomenon, based on data available from the NHS. In the first part of the present study, a comparison between the theoretical GBD methodology, based on Disability Weights, and the INAIL methodology based on Gradi di inabilità (Degree of Disability) (GI) described in the table of impairments is made, using data on occupational injuries occurred in Tuscany from 2001 to 2008. Given the different criteria adopted by WHO and INAIL for the classification of injuries sequelae, in the second part, two equations described in the literature have been applied in order to correct systematic biases. In the INAIL dataset, all types of injuries, though often small in scale, have cases with permanent consequences, some of them serious. This contrasts with the assumptions of the WHO, that, apart from the cases of amputation, reduces the possibility of lifelong disabilities to a few very serious categories. In the case of femur and skull fractures, the proportion of lifelong cases is considered by WHO similar to the proportion that in the INAIL dataset is achieved after narrowing the threshold of permanent damage to cases with GI ≥ 33. In the case of amputations and spinal cord injuries, for which the WHO assumes a priori that all cases have lifelong consequences, on the contrary, the greater similarity between the assumptions and the empirically observable reality is obtained after extending the threshold of permanent damage to all cases with even minimal sequelae. The comparison between the WHO DW and INAIL GI, possible only in relation to injuries resulting in permanent damage, shows that in case of injuries of greater severity, INAIL GI are generally lower than the WHO DW. In the case of less serious injuries, INAIL gives instead higher values. The length of temporary disabilities recorded by INAIL is systematically higher than that estimated by WHO. These initial comparisons between the WHO methodology and the cases evaluation performed by INAIL show that the Italian system, based on the gathering of all relevant aspects related to each case, has the potential to utilize and synthesize a greater amount of information. However, wide limits of uncertainty still remain and further empirical findings are needed in order to compare the two systems in terms of precise determination of the DW, the length of disabilities and variations of mortality related to injuries.
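For readers unfamiliar with the metric, the undiscounted, non-age-weighted DALY formulation usually quoted is given below; earlier GBD rounds additionally applied discounting and age weighting, and the INAIL-based calculation described above substitutes its own degrees of disability for the WHO disability weights.

```latex
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD},
\qquad
\mathrm{YLL} = N \times L,
\qquad
\mathrm{YLD} = I \times \mathrm{DW} \times L_d ,
```

where N is the number of deaths, L the standard life expectancy at the age of death, I the number of incident cases, DW the disability weight (between 0 and 1), and L_d the average duration of the disability.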
Brisson, Marc; Laprise, Jean-François; Chesson, Harrell W; Drolet, Mélanie; Malagón, Talía; Boily, Marie-Claude; Markowitz, Lauri E
2016-01-01
Randomized clinical trials have shown the 9-valent human papillomavirus (HPV) vaccine to be highly effective against types 31/33/45/52/58 compared with the 4-valent. Evidence on the added health and economic benefit of the 9-valent is required for policy decisions. We compare population-level effectiveness and cost-effectiveness of 9- and 4-valent HPV vaccination in the United States. We used a multitype individual-based transmission-dynamic model of HPV infection and disease (anogenital warts and cervical, anogenital, and oropharyngeal cancers), 3% discount rate, and societal perspective. The model was calibrated to sexual behavior and epidemiologic data from the United States. In our base-case, we assumed 95% vaccine-type efficacy, lifelong protection, and a cost/dose of $145 and $158 for the 4- and 9-valent vaccine, respectively. Predictions are presented using the mean (80% uncertainty interval [UI] = 10th-90th percentiles) of simulations. Under base-case assumptions, the 4-valent gender-neutral vaccination program is estimated to cost $5500 (80% UI = 2400-9400) and $7300 (80% UI = 4300-11 000)/quality-adjusted life-year (QALY) gained with and without cross-protection, respectively. Switching to a 9-valent gender-neutral program is estimated to be cost-saving irrespective of cross-protection assumptions. Finally, the incremental cost/QALY gained of switching to a 9-valent gender-neutral program (vs 9-valent girls/4-valent boys) is estimated to be $140 200 (80% UI = 4200->1 million) and $31 100 (80% UI = 2100->1 million) with and without cross-protection, respectively. Results are robust to assumptions about HPV natural history, screening methods, duration of protection, and healthcare costs. Switching to a 9-valent gender-neutral HPV vaccination program is likely to be cost-saving if the additional cost/dose of the 9-valent is less than $13. Giving females the 9-valent vaccine provides the majority of benefits of a gender-neutral strategy. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
The Cost of Penicillin Allergy Evaluation.
Blumenthal, Kimberly G; Li, Yu; Banerji, Aleena; Yun, Brian J; Long, Aidan A; Walensky, Rochelle P
2017-09-22
Unverified penicillin allergy leads to adverse downstream clinical and economic sequelae. Penicillin allergy evaluation can be used to identify true, IgE-mediated allergy. To estimate the cost of penicillin allergy evaluation using time-driven activity-based costing (TDABC). We implemented TDABC throughout the care pathway for 30 outpatients presenting for penicillin allergy evaluation. The base-case evaluation included penicillin skin testing and a 1-step amoxicillin drug challenge, performed by an allergist. We varied assumptions about the provider type, clinical setting, procedure type, and personnel timing. The base-case penicillin allergy evaluation costs $220 in 2016 US dollars: $98 for personnel, $119 for consumables, and $3 for space. In sensitivity analyses, lower cost estimates were achieved when only a drug challenge was performed (ie, no skin test, $84) and a nurse practitioner provider was used ($170). Adjusting for the probability of anaphylaxis did not result in a changed estimate ($220); although other analyses led to modest changes in the TDABC estimate ($214-$246), higher estimates were identified with changing to a low-demand practice setting ($268), a 50% increase in personnel times ($269), and including clinician documentation time ($288). In a least/most costly scenario analyses, the lowest TDABC estimate was $40 and the highest was $537. Using TDABC, penicillin allergy evaluation costs $220; even with varied assumptions adjusting for operational challenges, clinical setting, and expanded testing, penicillin allergy evaluation still costs only about $540. This modest investment may be offset for patients treated with costly alternative antibiotics that also may result in adverse consequences. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
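A minimal sketch of the TDABC arithmetic, in which each resource contributes its time multiplied by a capacity cost rate; the roles, minutes and per-minute rates below are assumptions for illustration, not the study's measured inputs.

```python
# Minimal time-driven activity-based costing sketch (illustrative numbers only).
personnel = [
    ("allergist", 30, 2.50),   # (role, minutes spent, capacity cost rate in $/min)
    ("nurse", 20, 0.90),
]
consumables = 119.0            # e.g. skin-test reagents, amoxicillin, supplies
space = 3.0                    # allocated cost of the clinic room

personnel_cost = sum(minutes * rate for _, minutes, rate in personnel)
total = personnel_cost + consumables + space
print(f"TDABC estimate: ${total:.0f}")
```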
Gender differences in legal outcomes of filicide in Austria and Finland.
Amon, S; Putkonen, H; Weizmann-Henelius, G; Fernandez Arias, P; Klier, C M
2018-06-01
Female offenders of filicide have been found to receive more lenient legal handling than male offenders. We aimed to determine whether such gender differences exist in the legal outcomes of filicide cases. This was a binational register-based study covering all filicide offenders in Austria and Finland in 1995-2005. We examined the legal outcomes of the crimes of all living offenders (64 mothers and 26 fathers). Mothers received a conviction of murder and life imprisonment less often than fathers. Within psychotic and personality-disordered offenders, infanticides, and offenders convicted for life, gender differences were less evident. Even though there seem to be some gender differences within the legal outcomes of filicide, ruling seemed more consistent than expected within distinct subgroups of offenders. Gender-based assumptions should not hinder equal and just handling of filicide cases.
Simulation of the hybrid and steady state advanced operating modes in ITER
NASA Astrophysics Data System (ADS)
Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.
2007-09-01
Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics modelling. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
NASA Astrophysics Data System (ADS)
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
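A minimal sketch of the comparison under a Gaussian distributional assumption, using the closed-form CRPS of a normal distribution and a log-link for the scale; the synthetic data and linear predictors are illustrative, not the study's setup.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_gaussian(params, y, x):
    mu = params[0] + params[1] * x             # linear predictor for the mean
    sigma = np.exp(params[2] + params[3] * x)  # log-link keeps the scale positive
    z = (y - mu) / sigma
    # Closed-form CRPS of a normal distribution evaluated at observation y
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

def neg_log_lik(params, y, x):
    mu = params[0] + params[1] * x
    sigma = np.exp(params[2] + params[3] * x)
    return -norm.logpdf(y, mu, sigma).mean()

rng = np.random.default_rng(2)
x = rng.normal(size=2000)                                  # e.g. an ensemble-mean forecast
y = 1.0 + 0.8 * x + rng.normal(0, np.exp(0.2 + 0.1 * x))   # Gaussian "observations"
start = np.zeros(4)
print(minimize(crps_gaussian, start, args=(y, x)).x)       # minimum CRPS coefficients
print(minimize(neg_log_lik, start, args=(y, x)).x)         # maximum likelihood coefficients
```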
Interpretive Research Aiming at Theory Building: Adopting and Adapting the Case Study Design
ERIC Educational Resources Information Center
Diaz Andrade, Antonio
2009-01-01
Although the advantages of case study design are widely recognised, its original positivist underlying assumptions may mislead interpretive researchers aiming at theory building. The paper discusses the limitations of the case study design for theory building and explains how grounded theory systemic process adds to the case study design. The…
NASA Technical Reports Server (NTRS)
Lan, C. Edward
1985-01-01
A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combination. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables and program job control set-up are described. Three test cases are presented as guides for potential users of the code.
Notes on SAW Tag Interrogation Techniques
NASA Technical Reports Server (NTRS)
Barton, Richard J.
2010-01-01
We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant α ≥ 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable ρ with a known probability distribution f_ρ, and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.
Edwardson, Nicholas; Bolin, Jane N; McClellan, David A; Nash, Philip P; Helduser, Janet W
2016-04-01
Demand for a wide array of colorectal cancer screening strategies continues to outpace supply. One strategy to reduce this deficit is to dramatically increase the number of primary care physicians who are trained and supportive of performing office-based colonoscopies or flexible sigmoidoscopies. This study evaluates the clinical and economic implications of training primary care physicians via family medicine residency programs to offer colorectal cancer screening services as an in-office procedure. Using previously established clinical and economic assumptions from existing literature and budget data from a local grant (2013), incremental cost-effectiveness ratios are calculated that incorporate the costs of a proposed national training program and subsequent improvements in patient compliance. Sensitivity analyses are also conducted. Baseline assumptions suggest that the intervention would produce 2394 newly trained residents who could perform 71,820 additional colonoscopies or 119,700 additional flexible sigmoidoscopies after ten years. Despite high costs associated with the national training program, incremental cost-effectiveness ratios remain well below standard willingness-to-pay thresholds under base case assumptions. Interestingly, the status quo hierarchy of preferred screening strategies is disrupted by the proposed intervention. A national overhaul of family medicine residency programs offering training for colorectal cancer screening yields satisfactory incremental cost-effectiveness ratios. However, the model places high expectations on primary care physicians to improve current compliance levels in the US. Copyright © 2016 Elsevier Inc. All rights reserved.
A simple model for the cloud adjacency effect and the apparent bluing of aerosols near clouds
NASA Astrophysics Data System (ADS)
Marshak, Alexander; Wen, Guoyong; Coakley, James A.; Remer, Lorraine A.; Loeb, Norman G.; Cahalan, Robert F.
2008-07-01
In determining aerosol-cloud interactions, the properties of aerosols must be characterized in the vicinity of clouds. Numerous studies based on satellite observations have reported that aerosol optical depths increase with increasing cloud cover. Part of the increase comes from the humidification and consequent growth of aerosol particles in the moist cloud environment, but part comes from 3-D cloud-radiative transfer effects on the retrieved aerosol properties. Often, discerning whether the observed increases in aerosol optical depths are artifacts or real proves difficult. The paper only addresses the cloud-clear sky radiative transfer interaction part. It provides a simple model that quantifies the enhanced illumination of cloud-free columns in the vicinity of clouds that are used in the aerosol retrievals. This model is based on the assumption that the enhancement in the cloud-free column radiance comes from enhanced Rayleigh scattering that results from the presence of the nearby clouds. This assumption leads to a larger increase of AOT for shorter wavelengths, or to a "bluing" of aerosols near clouds. The assumption that contribution from molecular scattering dominates over aerosol scattering and surface reflection is justified for the case of shorter wavelengths, dark surfaces, and an aerosol layer below the cloud tops. The enhancement in Rayleigh scattering is estimated using a stochastic cloud model to obtain the radiative flux reflected by broken clouds and comparing this flux with that obtained with the molecules in the atmosphere causing extinction, but no scattering.
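The "bluing" follows from the wavelength dependence of Rayleigh scattering: if the cloud-induced radiance enhancement is attributed to extra molecular scattering, the spurious AOT increment grows toward shorter wavelengths. The two wavelengths below are typical visible retrieval bands chosen for illustration, not necessarily those used in the paper:

```latex
\tau_R(\lambda) \propto \lambda^{-4}
\quad\Longrightarrow\quad
\frac{\Delta\tau(0.47\,\mu\mathrm{m})}{\Delta\tau(0.66\,\mu\mathrm{m})}
\approx \left(\frac{0.66}{0.47}\right)^{4} \approx 3.9 .
```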
NASA Astrophysics Data System (ADS)
Zhou, Yongbo; Sun, Xuejin; Mielonen, Tero; Li, Haoran; Zhang, Riwei; Li, Yan; Zhang, Chuanliang
2018-01-01
For inhomogeneous cirrus clouds, cloud optical thickness (COT) and effective diameter (De) provided by the Moderate Resolution Imaging Spectrometer (MODIS) Collection 6 cloud products are associated with errors due to the single habit assumption (SHA), independent pixel assumption (IPA), photon absorption effect (PAE), and plane-parallel assumption (PPA). SHA means that every cirrus cloud is assumed to have the same shape habit of ice crystals. IPA errors are caused by three-dimensional (3D) radiative effects. PPA and PAE errors are caused by cloud inhomogeneity. We proposed a method to single out these different errors. These errors were examined using the Spherical Harmonics Discrete Ordinate Method simulations done for the MODIS 0.86 μm and 2.13 μm bands. Four midlatitude and tropical cirrus cases were studied. For the COT retrieval, the impacts of SHA and IPA were especially large for optically thick cirrus cases. SHA errors in COT varied distinctly with scattering angles. For the De retrieval, SHA decreased De under most circumstances. PAE decreased De for optically thick cirrus cases. For the COT and De retrievals, the dominant error source was SHA for overhead sun, whereas for oblique sun it could be any of SHA, IPA, and PAE, varying with cirrus cases and sun-satellite viewing geometries. On the domain average, the SHA errors in COT (De) were within −16.1% to 42.6% (−38.7% to 2.0%), whereas the 3-D radiative effects- and cloud inhomogeneity-induced errors in COT (De) were within −5.6% to 19.6% (−2.9% to 8.0%) and −2.6% to 0% (−3.7% to 9.8%), respectively.
Clark, James E; Osborne, Jason W; Gallagher, Peter; Watson, Stuart
2016-07-01
Neuroendocrine data are typically positively skewed and rarely conform to the expectations of a Gaussian distribution. This can be a problem when attempting to analyse results within the framework of the general linear model, which relies on assumptions that residuals in the data are normally distributed. One frequently used method for handling violations of this assumption is to transform variables to bring residuals into closer alignment with assumptions (as residuals are not directly manipulated). This is often attempted through ad hoc traditional transformations such as square root, log and inverse. However, Box and Cox (Box & Cox) observed that these are all special cases of power transformations and proposed a more flexible method of transformation for researchers to optimise alignment with assumptions. The goal of this paper is to demonstrate the benefits of the infinitely flexible Box-Cox transformation on neuroendocrine data using syntax in spss. When applied to positively skewed data typical of neuroendocrine data, the majority (~2/3) of cases were brought into strict alignment with a Gaussian distribution (i.e. a non-significant Shapiro-Wilk test). Those unable to meet this challenge showed substantial improvement in distributional properties. The biggest challenge was distributions with a high ratio of kurtosis to skewness. We discuss how these cases might be handled, and we highlight some of the broader issues associated with transformation. Copyright © 2016 John Wiley & Sons, Ltd.
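The paper provides spss syntax; an equivalent sketch in Python (via SciPy), with simulated positively skewed data standing in for hormone concentrations, might look as follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cortisol = rng.lognormal(mean=2.0, sigma=0.6, size=200)  # positively skewed, strictly > 0

# Box-Cox requires strictly positive data; lambda is chosen by maximum likelihood.
transformed, lam = stats.boxcox(cortisol)
print(f"estimated lambda: {lam:.2f}")
print("Shapiro-Wilk p before:", stats.shapiro(cortisol).pvalue)
print("Shapiro-Wilk p after: ", stats.shapiro(transformed).pvalue)
```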
A critique of the historical-fire-regime concept in conservation.
Freeman, Johanna; Kobziar, Leda; Rose, Elizabeth White; Cropper, Wendell
2017-10-01
Prescribed fire is widely accepted as a conservation tool because fire is essential to the maintenance of native biodiversity in many terrestrial communities. Approaches to this land-management technique vary greatly among continents, and sharing knowledge internationally can inform application of prescribed fire worldwide. In North America, decisions about how and when to apply prescribed fire are typically based on the historical-fire-regime concept (HFRC), which holds that replicating the pattern of fires ignited by lightning or preindustrial humans best promotes native species in fire-prone regions. The HFRC rests on 3 assumptions: it is possible to infer historical fire regimes accurately; fire-suppressed communities are ecologically degraded; and reinstating historical fire regimes is the best course of action despite the global shift toward novel abiotic and biotic conditions. We examined the underpinnings of these assumptions by conducting a literature review on the use of historical fire regimes to inform the application of prescribed fire. We found that the practice of inferring historical fire regimes for entire regions or ecosystems often entails substantial uncertainty and can yield equivocal results; ecological outcomes of fire suppression are complex and may not equate to degradation, depending on the ecosystem and context; and habitat fragmentation, invasive species, and other modern factors can interact with fire to produce novel and in some cases negative ecological outcomes. It is therefore unlikely that all 3 assumptions will be fully upheld for any landscape in which prescribed fire is being applied. Although the HFRC is a valuable starting point, it should not be viewed as the sole basis for developing prescribed fire programs. Rather, fire prescriptions should also account for other specific, measurable ecological parameters on a case-by-case basis. To best achieve conservation goals, researchers should seek to understand contemporary fire-biota interactions across trophic levels, functional groups, spatial and temporal scales, and management contexts. © 2017 Society for Conservation Biology.
How does the legal system respond when children with learning difficulties are victimized?
Cederborg, Ann-Christin; Lamb, Michael E
2006-05-01
To understand how the Swedish legal system perceives and handles mentally handicapped children who may have been victimized. Twenty-two judicial districts in Sweden provided complete files on 39 District Court cases (including the Appeals Court files on 17 of these cases) involving children with learning difficulties or other handicaps as alleged victims of abuse, threat and neglect. The children (25 girls and 14 boys) averaged 11.8 years of age when first allegedly victimized. Sexual abuse was the most frequently alleged crime (33 cases). Court transcripts, court files and expert assessments of the alleged victims' handicaps and their possible consequences were examined to elucidate the ways in which courts evaluated the credibility of the alleged victims. The children's reports of their victimization were expected to have the characteristics emphasized by proponents of Statement Reality Analysis (SRA) and Criterion Based Content Analysis (CBCA) in order to be deemed credible. Expert reports were seldom available or adequate. Because many reports were poorly written or prepared by experts who lacked the necessary skills, courts were left to rely on their own assumptions and knowledge when evaluating children's capacities and credibility. Children with learning difficulties or other handicaps were expected to provide the same sort of reports as other children. To minimize the risk that judgments may be based on inaccurate assumptions courts need to require more thorough assessments of children's limitations and their implications. Assessments by competent mental health professionals could inform and strengthen legal decision-making. A standardized procedure that included psycho-diagnostic instruments would allow courts to understand better the abilities, capacities, and behavior of specific handicapped children.
Curran, Desmond; de Ridder, Marc; Van Effelterre, Thierry
2016-11-01
Hepatitis A vaccination stimulates memory cells to produce an anamnestic response. In this study, we used a mathematical model to examine how long-term immune memory might convey additional protection against clinical/icteric infections. Dynamic and decision models were used to estimate the expected number of cases, and the costs and quality-adjusted life-years (QALYs), respectively. Several scenarios were explored by assuming: (1) varying duration of vaccine-induced immune memory, (2) and/or varying levels of vaccine-induced immune memory protection (IMP), (3) and/or varying levels of infectiousness in vaccinated individuals with IMP. The base case analysis assumed a time horizon of 25 y (2012 - 2036), with additional analyses over 50 and 75 y. The analyses were conducted in the Mexican public health system perspective. In the base case that assumed no vaccine-induced IMP, the 2-dose hepatitis A vaccination strategy was cost-effective compared with the 1-dose strategy over the 3 time horizons. However, it was not cost-effective if we assumed additional IMP durations of at least 10 y in the 25-y horizon. In the 50- and 75-y horizons, the 2-dose strategy was always cost-effective, except when 100% reduction in the probability of icteric Infections, 75% reduction in infectiousness, and mean durations of IMP of at least 50 y were assumed. This analysis indicates that routine vaccination of toddlers against hepatitis A virus would be cost-effective in Mexico using a single-dose vaccination strategy. However, the cost-effectiveness of a second dose depends on the assumptions of additional protection by IMP and the time horizon over which the analysis is performed.
Curran, Desmond; de Ridder, Marc; Van Effelterre, Thierry
2016-01-01
Hepatitis A vaccination stimulates memory cells to produce an anamnestic response. In this study, we used a mathematical model to examine how long-term immune memory might convey additional protection against clinical/icteric infections. Dynamic and decision models were used to estimate the expected number of cases, and the costs and quality-adjusted life-years (QALYs), respectively. Several scenarios were explored by assuming: (1) varying durations of vaccine-induced immune memory, (2) varying levels of vaccine-induced immune memory protection (IMP), and/or (3) varying levels of infectiousness in vaccinated individuals with IMP. The base case analysis assumed a time horizon of 25 y (2012–2036), with additional analyses over 50 and 75 y. The analyses were conducted from the perspective of the Mexican public health system. In the base case that assumed no vaccine-induced IMP, the 2-dose hepatitis A vaccination strategy was cost-effective compared with the 1-dose strategy over the 3 time horizons. However, it was not cost-effective if we assumed additional IMP durations of at least 10 y in the 25-y horizon. In the 50- and 75-y horizons, the 2-dose strategy was always cost-effective, except when a 100% reduction in the probability of icteric infections, a 75% reduction in infectiousness, and mean durations of IMP of at least 50 y were assumed. This analysis indicates that routine vaccination of toddlers against hepatitis A virus would be cost-effective in Mexico using a single-dose vaccination strategy. However, the cost-effectiveness of a second dose depends on the assumptions of additional protection by IMP and the time horizon over which the analysis is performed. PMID:27428611
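As a rough orientation to the decision-analytic comparison described in this abstract, the sketch below computes an incremental cost-effectiveness ratio (ICER) for a 2-dose versus 1-dose strategy. All costs, QALY totals and the willingness-to-pay threshold are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER) comparison
# between a 1-dose and a 2-dose hepatitis A vaccination strategy.
# All numbers below are hypothetical placeholders, not results from the study.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per QALY gained of the new strategy vs. the old one."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical discounted totals per cohort over a 25-year horizon.
one_dose = {"cost": 52.0e6, "qalys": 410_000.0}
two_dose = {"cost": 68.0e6, "qalys": 410_450.0}

ratio = icer(two_dose["cost"], two_dose["qalys"], one_dose["cost"], one_dose["qalys"])
threshold = 100_000.0  # hypothetical willingness-to-pay per QALY gained

print(f"ICER of 2-dose vs 1-dose: {ratio:,.0f} per QALY gained")
print("2-dose considered cost-effective" if ratio <= threshold else "2-dose not cost-effective")
```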
Comparison of smallpox outbreak control strategies using a spatial metapopulation model.
Hall, I M; Egan, J R; Barrass, I; Gani, R; Leach, S
2007-10-01
To determine the potential benefits of regionally targeted mass vaccination as an adjunct to other smallpox control strategies, we employed a spatial metapopulation patch model based on the administrative districts of Great Britain. We counted deaths due to smallpox and to vaccination to identify strategies that minimized total deaths. Results confirm that case isolation, and the tracing, vaccination and observation of case contacts, can be optimal for control but only for optimistic assumptions concerning, for example, the basic reproduction number for smallpox (R0=3) and smaller numbers of index cases (approximately 10). For a wider range of scenarios, including larger numbers of index cases and higher reproduction numbers, the addition of mass vaccination targeted only to infected districts provided an appreciable benefit (5-80% fewer deaths depending on where the outbreak started, with a trigger value of 1-10 isolated symptomatic individuals within a district).
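The sketch below is a toy two-patch deterministic SIR caricature of the idea of switching on mass vaccination only in patches where infection has appeared. It is not the authors' district-level metapopulation model; the transmission, coupling, trigger and vaccination parameters are assumptions (beta and gamma chosen so that R0 = 3, as in the optimistic scenario mentioned above).

```python
import numpy as np

# Toy two-patch SIR sketch in which mass vaccination is switched on in a patch
# once its infective count passes a trigger. This is an illustrative caricature,
# not the authors' district-level metapopulation model; beta, gamma, the
# coupling strength, the trigger and the vaccination rate are all assumptions.

def total_cases(days=300, dt=0.1, beta=0.6, gamma=0.2, coupling=0.01,
                trigger=10.0, vacc_rate=0.05):
    N = np.array([1.0e6, 1.0e6])      # patch populations
    I = np.array([10.0, 0.0])         # index cases seeded in patch 0
    S = N - I
    vaccinating = np.array([False, False])
    cum_cases = I.sum()

    for _ in range(int(days / dt)):
        # force of infection: within-patch mixing plus weak cross-patch coupling
        lam = beta * (I / N + coupling * I[::-1] / N[::-1])
        new_inf = lam * S * dt
        new_rec = gamma * I * dt
        vaccinating = vaccinating | (I > trigger)
        new_vac = np.where(vaccinating, vacc_rate * S * dt, 0.0)
        S = S - new_inf - new_vac
        I = I + new_inf - new_rec
        cum_cases += new_inf.sum()
    return cum_cases

print(f"cumulative cases with targeted mass vaccination: {total_cases():,.0f}")
print(f"cumulative cases without it:                     {total_cases(vacc_rate=0.0):,.0f}")
```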
To BECCS or Not To BECCS: A Question of Method
NASA Astrophysics Data System (ADS)
DeCicco, J. M.
2017-12-01
Bioenergy with carbon capture and storage (BECCS) is seen as an important option in many climate stabilization scenarios. Limited demonstrations are underway, including a system that captures and sequesters the fermentation CO2 from ethanol production. However, its net CO2 emissions are uncertain for reasons related to both system characteristics and methodological issues. As for bioenergy in general, evaluations draw on both ecological and engineering methods. It is informative to apply different methods using available data for demonstration systems in comparison to related bioenergy systems. To do so, this paper examines a case study BECCS system and addresses questions regarding the utilization of terrestrial carbon, biomass sustainability and the implications for scalability. The analysis examines four systems, all utilizing the same land area, using two methods. The cases are: A) a crop system without either biofuel production or CCS; B) a biofuel production system without CCS; C) biofuel system with CCS, i.e., the BECCS case, and D) a crop system without biofuel production or CCS but with crop residue removal and conversion to a stable char. In cases A and D, the delivered fuel is fossil-based; in cases B and C the fuel is biomass-based. The first method is LCA, involving steady-flow modeling of systems over a defined lifecycle, following current practice as seen in the attributional LCA component of California's Low-Carbon Fuel Standard (LCFS). The second method involves spatially and temporally explicit analysis, reflecting the dynamics of carbon exchanges with the atmosphere. Although parameters are calibrated to the California LCFS LCA model, simplified spreadsheet modeling is used to maximize transparency while highlighting assumptions that most influence the results. The analysis reveals distinctly different pictures of net CO2 emissions for the cases examined, with the dynamic method painting a less optimistic picture of the BECCS system than the LCA method. Differences in results are traced to differing representations of terrestrial carbon exchanges and associated modeling assumptions. We conclude with suggestions for future work on project- and program-scale carbon accounting methods and the need for caution in advancing BECCS before such methods are better validated.
Efficiency at maximum power output of linear irreversible Carnot-like heat engines.
Wang, Yang; Tu, Z C
2012-01-01
The efficiency at maximum power output of linear irreversible Carnot-like heat engines is investigated based on the assumption that the rate of irreversible entropy production of the working substance in each "isothermal" process is a quadratic form of the heat exchange rate between the working substance and the reservoir. It is found that the maximum power output corresponds to minimizing the irreversible entropy production in two isothermal processes of the Carnot-like cycle, and that the efficiency at maximum power output has the form η_mP = η_C/(2 - γη_C), where η_C is the Carnot efficiency, while γ depends on the heat transfer coefficients between the working substance and two reservoirs. The value of η_mP is bounded between η_- ≡ η_C/2 and η_+ ≡ η_C/(2 - η_C). These results are consistent with those obtained by Chen and Yan [J. Chem. Phys. 90, 3740 (1989)] based on the endoreversible assumption, those obtained by Esposito et al. [Phys. Rev. Lett. 105, 150603 (2010)] based on the low-dissipation assumption, and those obtained by Schmiedl and Seifert [Europhys. Lett. 81, 20003 (2008)] for stochastic heat engines which in fact also satisfy the low-dissipation assumption. Additionally, we find that the endoreversible assumption happens to hold for Carnot-like heat engines operating at the maximum power output based on our fundamental assumption, and that the Carnot-like heat engines that we focused on do not strictly satisfy the low-dissipation assumption, which implies that the low-dissipation assumption or our fundamental assumption is a sufficient but non-necessary condition for the validity of η_mP = η_C/(2 - γη_C) as well as the existence of two bounds, η_- ≡ η_C/2 and η_+ ≡ η_C/(2 - η_C). © 2012 American Physical Society
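The closed-form result quoted above is straightforward to evaluate; the following sketch computes η_mP = η_C/(2 - γη_C) for an assumed pair of reservoir temperatures and a few values of γ, and checks that the result stays within the stated bounds.

```python
# Evaluate the efficiency at maximum power, eta_mP = eta_C / (2 - gamma * eta_C),
# and its bounds eta_C/2 <= eta_mP <= eta_C/(2 - eta_C). The temperatures and the
# gamma values swept below are illustrative choices, not values from the paper.

def eta_carnot(t_cold, t_hot):
    return 1.0 - t_cold / t_hot

def eta_max_power(eta_c, gamma):
    # gamma, assumed here to lie in [0, 1], encodes the heat-transfer coefficients
    return eta_c / (2.0 - gamma * eta_c)

eta_c = eta_carnot(t_cold=300.0, t_hot=500.0)
lower, upper = eta_c / 2.0, eta_c / (2.0 - eta_c)

for gamma in (0.0, 0.5, 1.0):
    e = eta_max_power(eta_c, gamma)
    assert lower - 1e-12 <= e <= upper + 1e-12
    print(f"gamma = {gamma:.1f}: eta_mP = {e:.4f}  (bounds {lower:.4f} .. {upper:.4f})")
```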
Is adding maternal vaccination to prevent whooping cough cost-effective in Australia?
Van Bellinghen, Laure-Anne; Dimitroff, Alex; Haberl, Michael; Li, Xiao; Manton, Andrew; Moeremans, Karen; Demarteau, Nadia
2018-05-17
Pertussis or whooping cough, a highly infectious respiratory infection, causes significant morbidity and mortality in infants. In adolescents and adults, pertussis presents with atypical symptoms often resulting in under-diagnosis and under-reporting, increasing the risk of transmission to more vulnerable groups. Maternal vaccination against pertussis protects mothers and newborns. This evaluation assessed the cost-effectiveness of adding maternal dTpa (reduced antigen diphtheria, Tetanus, acellular pertussis) vaccination to the 2016 nationally-funded pertussis program (DTPa [Diphtheria, Tetanus, acellular Pertussis] at 2, 4, 6, 18 months, 4 years and dTpa at 12-13 years) in Australia. A static cross-sectional population model was developed using a one-year period at steady-state. The model considered the total Australian population, stratified by age. Vaccine effectiveness against pertussis infection was assumed to be 92% in mothers and 91% in newborns, based on observational and case-control studies. The model included conservative assumptions around unreported cases. With 70% coverage, adding maternal vaccination to the existing pertussis program would prevent 8,847 pertussis cases, 422 outpatient cases, 146 hospitalizations and 0.54 deaths per year at the population level. With a 5% discount rate, 138.5 quality-adjusted-life-years (QALYs) would be gained at an extra cost of AUS$ 4.44 million and an incremental cost-effectiveness ratio of AUS$ 32,065 per QALY gained. Sensitivity and scenario analyses demonstrated that outcomes were most sensitive to assumptions around vaccine effectiveness, duration of protection in mothers, and disutility of unreported cases. In conclusion, dTpa vaccination in the third trimester of pregnancy is likely to be cost-effective from a healthcare payer perspective in Australia.
Financial analysis of technology acquisition using fractionated lasers as a model.
Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R
2010-08-01
Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
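A minimal sketch of the kind of return-on-investment arithmetic described above follows; the lease payments, procedure price, volumes and per-procedure costs are hypothetical placeholders rather than figures from the study.

```python
# Back-of-the-envelope ROI sketch for a leased laser, in the spirit of the
# framework described above. All dollar figures and volumes are hypothetical
# placeholders, not data from the study.

def annual_roi(price_per_procedure, procedures_per_year,
               annual_lease, annual_service, labor_per_procedure,
               disposables_per_procedure):
    revenue = price_per_procedure * procedures_per_year
    variable = (labor_per_procedure + disposables_per_procedure) * procedures_per_year
    total_cost = annual_lease + annual_service + variable
    return (revenue - total_cost) / total_cost

roi_5yr = annual_roi(price_per_procedure=1000, procedures_per_year=150,
                     annual_lease=24_000, annual_service=10_000,
                     labor_per_procedure=80, disposables_per_procedure=120)
roi_3yr = annual_roi(price_per_procedure=1000, procedures_per_year=150,
                     annual_lease=38_000, annual_service=10_000,
                     labor_per_procedure=80, disposables_per_procedure=120)
print(f"ROI, 5-year lease: {roi_5yr:.1%}   ROI, 3-year lease: {roi_3yr:.1%}")
```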
Cost-Effectiveness Analysis of a National Newborn Screening Program for Biotinidase Deficiency.
Vallejo-Torres, Laura; Castilla, Iván; Couce, María L; Pérez-Cerdá, Celia; Martín-Hernández, Elena; Pineda, Mercé; Campistol, Jaume; Arrospide, Arantzazu; Morris, Stephen; Serrano-Aguilar, Pedro
2015-08-01
There are conflicting views as to whether testing for biotinidase deficiency (BD) ought to be incorporated into universal newborn screening (NBS) programs. The aim of this study was to evaluate the cost-effectiveness of adding BD to the panel of conditions currently screened under the national NBS program in Spain. We used information from the regional NBS program for BD that has been in place in the Spanish region of Galicia since 1987. These data, along with other sources, were used to develop a cost-effectiveness decision model that compared lifetime costs and health outcomes of a national birth cohort of newborns with and without an early detection program. The analysis took the perspective of the Spanish National Health Service. Effectiveness was measured in terms of quality-adjusted life years (QALYs). We undertook extensive sensitivity analyses around the main model assumptions, including a probabilistic sensitivity analysis. In the base case analysis, NBS for BD led to higher QALYs and higher health care costs, with an estimated incremental cost per QALY gained of $24,677. Lower costs per QALY gained were found when conservative assumptions were relaxed, yielding cost savings in some scenarios. The probability that BD screening was cost-effective was estimated to be >70% in the base case at a standard threshold value. This study indicates that NBS for BD is likely to be a cost-effective use of resources. Copyright © 2015 by the American Academy of Pediatrics.
Song, Minsun; Wheeler, William; Caporaso, Neil E; Landi, Maria Teresa; Chatterjee, Nilanjan
2018-03-01
Genome-wide association studies (GWAS) are now routinely imputed for untyped single nucleotide polymorphisms (SNPs) based on various powerful statistical algorithms for imputation trained on reference datasets. The use of predicted allele counts for imputed SNPs as the dosage variable is known to produce a valid score test for genetic association. In this paper, we investigate how to best handle imputed SNPs in various modern complex tests for genetic associations incorporating gene-environment interactions. We focus on case-control association studies where inference for an underlying logistic regression model can be performed using alternative methods that rely to varying degrees on an assumption of gene-environment independence in the underlying population. As increasingly large-scale GWAS are being performed through consortia efforts where it is preferable to share only summary-level information across studies, we also describe simple mechanisms for implementing score tests based on standard meta-analysis of "one-step" maximum-likelihood estimates across studies. Applications of the methods in simulation studies and a dataset from a GWAS of lung cancer illustrate the ability of the proposed methods to maintain type-I error rates for the underlying testing procedures. For analysis of imputed SNPs, similar to typed SNPs, the retrospective methods can lead to considerable efficiency gains for modeling of gene-environment interactions under the assumption of gene-environment independence. Methods are made available for public use through the CGEN R software package. © 2017 WILEY PERIODICALS, INC.
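For orientation only, the sketch below fits a simple prospective logistic regression with an imputed-dosage main effect and a gene-environment interaction term. It illustrates the convention of using expected allele counts as the dosage variable, but it does not implement the retrospective or meta-analytic score-test machinery, nor the CGEN package, described in the abstract; the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

# Minimal prospective logistic-regression sketch of a gene-environment interaction
# test using an imputed SNP dosage (expected allele count in [0, 2]) as the genetic
# variable. Generic illustration only: not the retrospective/empirical-Bayes methods
# or the CGEN package described above, and the data are simulated.

rng = np.random.default_rng(1)
n = 5000
dosage = rng.uniform(0.0, 2.0, n)          # stand-in for imputed expected allele counts
env = rng.binomial(1, 0.4, n)              # binary environmental exposure
lin = -1.0 + 0.2 * dosage + 0.3 * env + 0.25 * dosage * env
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

X = sm.add_constant(np.column_stack([dosage, env, dosage * env]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary(xname=["const", "dosage", "env", "dosage_x_env"]))
```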
An at-site flood estimation method in the context of nonstationarity I. A simulation study
NASA Astrophysics Data System (ADS)
Gado, Tamer A.; Nguyen, Van-Thanh-Van
2016-04-01
The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. Here, the new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter, this is called the LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence in both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is to avoid the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters for small data samples.
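The sketch below illustrates the detrend-then-fit idea behind the LM-NS approach on synthetic annual peaks. Two simplifications are assumed: the data are synthetic, and scipy's maximum-likelihood GEV fit is used as a stand-in for the L-moments estimation used in the paper.

```python
import numpy as np
from scipy import stats

# Sketch of the detrend-then-fit idea: remove a linear trend in the mean of the
# annual peak series and fit a GEV to the residual "stationary" series.
# Assumptions: synthetic data, and scipy's ML-based GEV fit as a stand-in for
# the L-moments estimation used in the paper.

rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
true_trend = 2.0 * (years - years[0])                      # rising mean flood peak
peaks = true_trend + stats.genextreme.rvs(c=-0.1, loc=300, scale=80,
                                          size=years.size, random_state=rng)

# 1) estimate and remove the linear trend in the mean
slope, intercept = np.polyfit(years, peaks, deg=1)
detrended = peaks - (slope * years + intercept)

# 2) fit a GEV to the detrended series
shape, loc, scale = stats.genextreme.fit(detrended)

# 3) a nonstationary quantile estimate for a target year: add the trend back
T = 100.0                                                   # return period (years)
q_detrended = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
target_year = 2030
print(f"estimated {T:.0f}-year flood in {target_year}: "
      f"{q_detrended + slope * target_year + intercept:,.0f}")
```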
Rhetoric versus reality: The role of research in deconstructing concepts of caring.
Freshwater, Dawn; Cahill, Jane; Esterhuizen, Philip; Muncey, Tessa; Smith, Helen
2017-10-01
Our aim was to employ a critical analytic lens to explicate the role of nursing research in supporting the notion of caring realities. To do this, we used case exemplars to illustrate the infusion of such discourses. The first exemplar examines the fundamental concept of caring: using Florence Nightingale's Notes on Nursing, the case study surfaces caring as originally grounded in ritualized practice and subsequently describes its transmutation, via competing discourses, to a more holistic concept. It is argued that in the many and varied attempts to define the dynamic concept of care, caring has now become, paradoxically, a more fragmented concept despite attempts to render it more holistic and inclusive. In the second exemplar, one of the authors draws on her personal experience of the gap between theory and practice, so pronounced that it pushed the author to revisit the concept of evidence-based practice and nursing education. In our third and final exemplar, we refer to the absence of knowledge and practice generated through natural enquiry and curiosity, an absence which has led to the production of corporate-led rhetoric. Drawing together the central arguments of the three exemplars, we reflect on the influential role of nursing research in enabling the deconstruction of taken-for-granted assumptions such as caring, evidence-based practice and empowerment; assumptions which have been generated by discourses riddled with confusion and alienation from the reality of practice and the natural spirit of professional enquiry. © 2017 John Wiley & Sons Ltd.
Effect of non-Poisson samples on turbulence spectra from laser velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sree, D.; Kjelgaard, S.O.; Sellers, W.L. III
1994-12-01
Spectral estimations from LV data are typically based on the assumption of a Poisson sampling process. It is demonstrated here that the sampling distribution must be considered before spectral estimates are used to infer turbulence scales. A non-Poisson sampling process can occur if there is nonhomogeneous distribution of particles in the flow. Based on the study of a simulated first-order spectrum, it has been shown that a non-Poisson sampling process causes the estimated spectrum to deviate from the true spectrum. Also, in this case the prefiltering techniques do not improve the spectral estimates at higher frequencies. 4 refs.
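To make the irregular-sampling issue concrete, the sketch below estimates a spectrum from unevenly spaced samples with a Lomb-Scargle periodogram. It is a generic tool for irregularly sampled signals, shown only to illustrate the role of sample timing; it is not the paper's LV spectral estimator, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import lombscargle

# Spectral estimation from irregularly sampled "velocity" data using a
# Lomb-Scargle periodogram. Generic illustration of uneven sampling only;
# not the estimator used in the paper, and the signal below is synthetic.

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 20.0, 2000))        # irregular sample arrival times, s
f0 = 2.5                                         # Hz, an embedded periodic component
u = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

freqs_hz = np.linspace(0.1, 10.0, 500)
pgram = lombscargle(t, u - u.mean(), 2 * np.pi * freqs_hz)   # expects angular freqs

print(f"periodogram peak at ~{freqs_hz[np.argmax(pgram)]:.2f} Hz (true {f0} Hz)")
```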
The Emperor's sham - wrong assumption that sham needling is sham.
Lundeberg, Thomas; Lund, Iréne; Näslund, Jan; Thomas, Moolamanil
2008-12-01
During the last five years a large number of randomised controlled clinical trials (RCTs) have been published on the efficacy of acupuncture in different conditions. In most of these studies verum is compared with sham acupuncture. In general both verum and sham have been found to be effective, and often with little reported difference in outcome. This has repeatedly led to the conclusion that acupuncture is no more effective than placebo treatment. However, this conclusion is based on the assumption that sham acupuncture is inert. Since sham acupuncture evidently is merely another form of acupuncture from the physiological perspective, the assumption that sham is sham is incorrect and conclusions based on this assumption are therefore invalid. Clinical guidelines based on such conclusions may therefore exclude suffering patients from valuable treatments.
NASA Astrophysics Data System (ADS)
van der Sluijs, Jeroen P.; Arjan Wardekker, J.
2015-04-01
In order to enable anticipation and proactive adaptation, local decision makers increasingly seek detailed foresight about regional and local impacts of climate change. To this end, the Netherlands Models and Data-Centre implemented a pilot chain of sequentially linked models to project local climate impacts on hydrology, agriculture and nature under different national climate scenarios for a small region in the east of the Netherlands named Baakse Beek. The chain of models sequentially linked in that pilot includes a (future) weather generator and models of respectively subsurface hydrogeology, ground water stocks and flows, soil chemistry, vegetation development, crop yield and nature quality. These models typically have mismatching time step sizes and grid cell sizes. The linking of these models unavoidably involves the making of model assumptions that can hardly be validated, such as those needed to bridge the mismatches in spatial and temporal scales. Here we present and apply a method for the systematic critical appraisal of model assumptions that seeks to identify and characterize the weakest assumptions in a model chain. The critical appraisal of assumptions presented in this paper has been carried out ex-post. For the case of the climate impact model chain for Baakse Beek, the three most problematic assumptions were found to be: land use and land management kept constant over time; model linking of (daily) ground water model output to the (yearly) vegetation model around the root zone; and aggregation of daily output of the soil hydrology model into yearly input of a so called ‘mineralization reduction factor’ (calculated from annual average soil pH and daily soil hydrology) in the soil chemistry model. Overall, the method for critical appraisal of model assumptions presented and tested in this paper yields a rich qualitative insight in model uncertainty and model quality. It promotes reflectivity and learning in the modelling community, and leads to well informed recommendations for model improvement.
Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin
2017-08-04
In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.
A new momentum integral method for approximating bed shear stress
NASA Astrophysics Data System (ADS)
Wengrove, M. E.; Foster, D. L.
2016-02-01
In nearshore environments, accurate estimation of bed stress is critical to estimate morphologic evolution, and benthic mass transfer fluxes. However, bed shear stress over mobile boundaries in wave environments is notoriously difficult to estimate due to the non-equilibrium boundary layer. Approximating the friction velocity with a traditional logarithmic velocity profile model is common, but an unsteady non-uniform flow field violates critical assumptions in equilibrium boundary layer theory. There have been several recent developments involving stress partitioning through an examination of the momentum transfer contributions that lead to improved estimates of the bed stress. For the case of single vertical profile observations, Mehdi et al. (2014) developed a full momentum integral-based method for steady-unidirectional flow that integrates the streamwise Navier-Stokes equation three times to an arbitrary position within the boundary layer. For the case of two-dimensional velocity observations, Rodriguez-Abudo and Foster (2014) were able to examine the momentum contributions from waves, turbulence and the bedform in a spatial and temporal averaging approach to the Navier-Stokes equations. In this effort, the above methods are combined to resolve the bed shear stress in both short and long wave dominated environments with a highly mobile bed. The confluence is an integral based approach for determining bed shear stress that makes no a-priori assumptions of boundary layer shape and uses just a single velocity profile time series for both the phase dependent case (under waves) and the unsteady case (under solitary waves). The developed method is applied to experimental observations obtained in a full scale laboratory investigation (Oregon State's Large Wave Flume) of the nearbed velocity field over a rippled sediment bed in oscillatory flow using both particle image velocimetry and a profiling acoustic Doppler velocimeter. This method is particularly relevant for small scale field observations and laboratory observations.
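As a point of reference for the equilibrium assumption criticized above, the sketch below fits the traditional logarithmic velocity profile to a synthetic near-bed profile to recover a friction velocity and a bed shear stress. This is the conventional baseline, not the momentum-integral method developed in the paper; the profile values are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

# The "traditional" approach referred to above: fit the logarithmic velocity
# profile u(z) = (u*/kappa) * ln(z/z0) to a measured profile to estimate the
# friction velocity u*, and hence the bed shear stress tau = rho * u*^2.
# The profile below is synthetic; this is the equilibrium-boundary-layer
# baseline, not the momentum-integral method developed in the paper.

KAPPA = 0.41          # von Karman constant
RHO = 1000.0          # water density, kg/m^3

def log_law(z, u_star, z0):
    return (u_star / KAPPA) * np.log(z / z0)

z = np.array([0.02, 0.04, 0.08, 0.15, 0.30, 0.60])   # heights above the bed, m
u = log_law(z, u_star=0.05, z0=0.001) \
    + 0.005 * np.random.default_rng(2).standard_normal(z.size)

(u_star, z0), _ = curve_fit(log_law, z, u, p0=[0.1, 0.01])
print(f"u* = {u_star:.3f} m/s,  z0 = {z0:.4f} m,  tau_bed = {RHO * u_star**2:.2f} Pa")
```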
Introduction to the Application of Web-Based Surveys.
ERIC Educational Resources Information Center
Timmerman, Annemarie
This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…
The importance of being equivalent: Newton's two models of one-body motion
NASA Astrophysics Data System (ADS)
Pourciau, Bruce
2004-05-01
As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.
Austin, Peter C.
2017-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
Analysis of messy data with heteroscedastic in mean models
NASA Astrophysics Data System (ADS)
Trianasari, Nurvita; Sumarni, Cucu
2016-02-01
In data analysis, we are often faced with data that do not meet some of the standard assumptions; such data are often called messy data. This problem arises because the data contain outliers that bias or inflate the error of estimation. To analyze messy data, there are three broad approaches: standard analysis, data transformation, and non-standard methods of analysis. Simulations were conducted to compare the performance of three testing procedures for comparing means when the model variance is not homogeneous. The simulated data for each scenario were generated 500 times. We then carried out the comparison of means using three methods: the Welch test, mixed models, and the Welch-r test. Data generation was done with R version 3.1.2. Based on the simulation results, all three methods can be used in both the normal (homoscedastic) case and the heteroscedastic case. The three methods work very well on balanced or unbalanced data when there is no violation of the homogeneity-of-variance assumption. For balanced data, the three methods still show excellent performance despite violation of the homogeneity-of-variance assumption, even when the degree of heterogeneity is high; this is shown by power levels above 90 percent, with the best results for the Welch method (98.4%) and the Welch-r method (97.8%). For unbalanced data, the Welch method performs very well in the case of moderate heterogeneity with positive pairing, with 98.2% power. The mixed-models method performs very well in the case of high heterogeneity with negative-negative pairing. The Welch-r method works very well in both cases. However, if the level of heterogeneity of variance is very high, the power of all methods decreases, especially for the mixed-models method. The methods that still work well enough (power above 50%) for balanced data are the Welch-r method (62.6%) and the Welch method (58.6%). If the data are unbalanced, the Welch-r method works well enough in the cases of highly heterogeneous positive-positive or negative-negative pairings, with power of 68.8% and 51%, respectively. The Welch method performs well enough only in the case of highly heterogeneous positive-positive pairing, with 64.8% power, while the mixed-models method performs well in the case of highly heterogeneous negative pairing, with 54.6% power. In general, when the variance is not homogeneous, the Welch method applied to the ranked data (Welch-r) performs better than the other methods.
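A minimal Monte Carlo sketch of the kind of power comparison described above is given below, contrasting Welch's unequal-variance t test with the pooled-variance t test for an unbalanced, heteroscedastic design. The sample sizes, variances and effect size are illustrative assumptions, and the Welch-r and mixed-model procedures are not reproduced.

```python
import numpy as np
from scipy.stats import ttest_ind

# Monte Carlo power sketch for comparing two group means under heteroscedasticity:
# Welch's test (equal_var=False) versus the classical pooled-variance t test.
# Sample sizes, variances and the effect size are illustrative assumptions.

rng = np.random.default_rng(7)
n_sim = 2000
n1, n2 = 10, 40                  # unbalanced groups
sd1, sd2 = 4.0, 1.0              # strongly heterogeneous variances
delta = 2.0                      # true mean difference

def power(equal_var):
    rejections = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd1, n1)
        b = rng.normal(delta, sd2, n2)
        if ttest_ind(a, b, equal_var=equal_var).pvalue < 0.05:
            rejections += 1
    return rejections / n_sim

print(f"power, Welch test:  {power(equal_var=False):.3f}")
print(f"power, pooled test: {power(equal_var=True):.3f}")
```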
ERIC Educational Resources Information Center
Cauce, Ana M.; And Others
Most of the research on the assessment of the intelligence of Latinos in the United States appears to be based on some possibly erroneous or at least dubious assumptions. Among these are the following: (1) the assumption of bilinguality; (2) the assumption of equal proficiency in the English language; (3) the assumption of the equivalence of…
Cole, Courtney E
2010-12-01
Narrative approaches to health communication research have often been characterized by assumptions of the therapeutic and ameliorative effect of narratives. In this article, I call these assumptions into question by critically engaging extant research in narrative health communication research in light of testimony by a participant in South Africa's Truth and Reconciliation Commission. Drawing on his personal narrative, numerous retellings of his story in public and academic discourse, and his responses to his story's appropriation, I demonstrate the importance of conducting narrative research and theorizing with an appreciation of its therapeutic potential, as well as its ability to harm.
Vrzheshch, P V
2015-01-01
A quantitative evaluation of the accuracy of the rapid equilibrium assumption in steady-state enzyme kinetics was obtained for an arbitrary mechanism of an enzyme-catalyzed reaction. This evaluation depends only on the structure and properties of the equilibrium segment, and does not depend on the structure and properties of the rest (the stationary part) of the kinetic scheme. The smaller the values of the edges leaving the equilibrium segment relative to the values of the edges within the equilibrium segment, the higher the accuracy with which intermediate concentrations and the reaction velocity are determined under the rapid equilibrium assumption.
Fuzzy α-minimum spanning tree problem: definition and solutions
NASA Astrophysics Data System (ADS)
Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan
2016-04-01
In this paper, the minimum spanning tree problem is investigated on the graph with fuzzy edge weights. The notion of fuzzy α-minimum spanning tree is presented based on the credibility measure, and then the solutions of the fuzzy α-minimum spanning tree problem are discussed under different assumptions. First, we respectively assume that all the edge weights are triangular fuzzy numbers and trapezoidal fuzzy numbers and prove that the fuzzy α-minimum spanning tree problem can be transformed to a classical problem on a crisp graph in these two cases, which can be solved by classical algorithms such as the Kruskal algorithm and the Prim algorithm in polynomial time. Subsequently, as for the case that the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided for illustrating the effectiveness of the proposed solutions.
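For the triangular-weight case, the sketch below reduces each fuzzy edge weight to a crisp value and then applies Kruskal's algorithm via networkx. The (a + 2b + c)/4 expected-value defuzzification is an assumed stand-in for the paper's credibility-based transformation, which is what actually justifies solving a crisp problem.

```python
import networkx as nx

# Sketch of the triangular-fuzzy-weights case: reduce each triangular fuzzy edge
# weight (a, b, c) to a single crisp value and run a classical MST algorithm
# (Kruskal, via networkx). The (a + 2b + c)/4 reduction used here is an assumed
# stand-in for the paper's credibility-based transformation.

def crisp(tri):
    a, b, c = tri
    return (a + 2.0 * b + c) / 4.0

fuzzy_edges = {
    ("A", "B"): (2, 3, 5), ("A", "C"): (1, 4, 6), ("B", "C"): (2, 2, 3),
    ("B", "D"): (4, 5, 9), ("C", "D"): (3, 4, 4), ("A", "D"): (6, 8, 10),
}

G = nx.Graph()
for (u, v), tri in fuzzy_edges.items():
    G.add_edge(u, v, weight=crisp(tri), fuzzy=tri)

mst = nx.minimum_spanning_tree(G, algorithm="kruskal")
print(sorted(mst.edges(data="fuzzy")))
```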
NASA Astrophysics Data System (ADS)
Breyer, Christian; Afanasyeva, Svetlana; Brakemeier, Dietmar; Engelhard, Manfred; Giuliano, Stefano; Puppe, Michael; Schenk, Heiko; Hirsch, Tobias; Moser, Massimo
2017-06-01
The main objective of this research is to present a solid foundation of capex projections for the major solar energy technologies until the year 2030 for further analyses. The experience curve approach has been chosen for this capex assessment, which requires a good understanding of the projected total global installed capacities of the major solar energy technologies and the respective learning rates. A literature survey has been conducted for CSP tower, CSP trough, PV and Li-ion battery. Based on the literature survey a base case has been defined for all technologies and low growth and high growth cases for further sensitivity analyses. All results are shown in detail in the paper and a comparison to the expectation of a potentially major investor in all of these technologies confirmed the derived capex projections in this paper.
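The experience-curve relation underlying such capex projections can be written as capex(C) = capex_0 * (C/C_0)^(log2(1 - LR)), where LR is the learning rate and C the cumulative installed capacity. The sketch below evaluates it for illustrative numbers, not the literature-survey values used in the paper.

```python
# Experience-curve (learning-curve) sketch: each doubling of cumulative installed
# capacity reduces capex by the learning rate LR, i.e.
#   capex(C) = capex_0 * (C / C_0) ** log2(1 - LR).
# The starting capex, capacities and learning rate below are illustrative
# assumptions, not values from the paper's literature survey.

from math import log2

def capex_projection(capex_0, cum_cap_0, cum_cap, learning_rate):
    b = log2(1.0 - learning_rate)          # experience exponent
    return capex_0 * (cum_cap / cum_cap_0) ** b

# e.g. PV-like placeholder numbers: 1000 EUR/kW today, 20% learning rate
for cum_cap in (250, 500, 1000, 2000):     # GW of cumulative installed capacity
    print(cum_cap, "GW ->",
          round(capex_projection(1000.0, 250.0, cum_cap, 0.20)), "EUR/kW")
```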
Chesson, Harrell W; Forhan, Sara E; Gottlieb, Sami L; Markowitz, Lauri E
2008-08-18
We estimated the health and economic benefits of preventing recurrent respiratory papillomatosis (RRP) through quadrivalent human papillomavirus (HPV) vaccination. We applied a simple mathematical model to estimate the averted costs and quality-adjusted life years (QALYs) saved by preventing RRP in children whose mothers had been vaccinated at age 12 years. Under base case assumptions, the prevention of RRP would avert an estimated USD 31 (range: USD 2-178) in medical costs (2006 US dollars) and save 0.00016 QALYs (range: 0.00001-0.00152) per 12-year-old girl vaccinated. Including the benefits of RRP reduced the estimated cost per QALY gained by HPV vaccination by roughly 14-21% in the base case and by <2% to >100% in the sensitivity analyses. More precise estimates of the incidence of RRP are needed, however, to quantify this impact more reliably.
CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES.
Sofer, Tamar; Cornelis, Marilyn C; Kraft, Peter; Tchetgen Tchetgen, Eric J
2017-04-01
Case-control studies are designed towards studying associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates, and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption, when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric.
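The sketch below shows only the standard IPW step that the proposed estimators build on: weighting each subject by the inverse probability of inclusion given case-control status (which requires the disease prevalence) and fitting a weighted regression for the secondary outcome. The data are synthetic, and the control-function augmentation itself is not implemented.

```python
import numpy as np
import statsmodels.api as sm

# Standard IPW estimation of a secondary-outcome regression in a case-control
# sample: weight each subject by the inverse probability of having been sampled
# given case/control status, using an assumed known disease prevalence.
# Synthetic data; the paper's control-function augmentation is not shown.

rng = np.random.default_rng(11)
prevalence = 0.05                      # assumed known population disease prevalence
n_cases, n_controls = 1000, 1000       # case-control sampling enriches cases

d = np.r_[np.ones(n_cases), np.zeros(n_controls)]          # disease status
x = rng.normal(0, 1, d.size) + 0.3 * d                     # risk factor (associated with D)
y = 1.0 + 0.5 * x + rng.normal(0, 1, d.size)               # secondary outcome

# sampling fractions: essentially all cases but only a small share of controls
# are sampled; IPW weights are proportional to the inverse of these fractions
w = np.where(d == 1, prevalence / n_cases, (1 - prevalence) / n_controls)

fit = sm.WLS(y, sm.add_constant(x), weights=w).fit()
print(fit.params)                       # intercept and slope for the population model
```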
Regularity Results for a Class of Functionals with Non-Standard Growth
NASA Astrophysics Data System (ADS)
Acerbi, Emilio; Mingione, Giuseppe
We consider the integral functional
ERIC Educational Resources Information Center
Nachlieli, Talli; Herbst, Patricio
2009-01-01
This article reports on an investigation of how teachers of geometry perceived an episode of instruction presented to them as a case of engaging students in proving. Confirming what was hypothesized, participants found it remarkable that a teacher would allow a student to make an assumption while proving. But they perceived this episode in various…
ERIC Educational Resources Information Center
Coombs, W. Timothy; Holladay, Sherry J.
2002-01-01
Explains a comprehensive, prescriptive, situational approach for responding to crises and protecting organizational reputation: the situational crisis communication theory (SCCT). Notes undergraduate students read two crisis case studies from a set of 13 cases and responded to questions following the case. Validates a key assumption in SCCT and…
Visual resources and the public: an empirical approach
Rachel Kaplan
1979-01-01
Visual resource management systems incorporate many assumptions about how people see the landscape. While these assumptions are not articulated, they nonetheless affect the decision process. Problems inherent in some of these assumptions are examined. Extensive research based on people's preference ratings of different settings provides insight into people's...
Developing animals flout prominent assumptions of ecological physiology.
Burggren, Warren W
2005-08-01
Every field of biology has its assumptions, but when they grow to be dogma, they can become constraining. This essay presents data-based challenges to several prominent assumptions of developmental physiologists. The ubiquity of allometry is such an assumption, yet animal development is characterized by rate changes that are counter to allometric predictions. Physiological complexity is assumed to increase with development, but examples are provided showing that complexity can be greatest at intermediate developmental stages. It is assumed that organs have functional equivalency in embryos and adults, yet embryonic structures can have quite different functions than inferred from adults. Another assumption challenged is the duality of neural control (typically sympathetic and parasympathetic), since one of these two regulatory mechanisms typically considerably precedes in development the appearance of the other. A final assumption challenged is the notion that divergent phylogeny creates divergent physiologies in embryos just as in adults, when in fact early in development disparate vertebrate taxa show great quantitative as well as qualitative similarity. Collectively, the inappropriateness of these prominent assumptions based on adult studies suggests that investigation of embryos, larvae and fetuses be conducted with appreciation for their potentially unique physiologies.
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
Liu, Wei; Ding, Jinhui
2018-04-01
The application of the principle of the intention-to-treat (ITT) to the analysis of clinical trials is challenged in the presence of missing outcome data. The consequences of stopping an assigned treatment in a withdrawn subject are unknown. It is difficult to make a single assumption about missing mechanisms for all clinical trials because there are complicated reactions in the human body to drugs due to the presence of complex biological networks, leading to data missing randomly or non-randomly. Currently there is no statistical method that can tell whether a difference between two treatments in the ITT population of a randomized clinical trial with missing data is significant at a pre-specified level. Making no assumptions about the missing mechanisms, we propose a generalized complete-case (GCC) analysis based on the data of completers. An evaluation of the impact of missing data on the ITT analysis reveals that a statistically significant GCC result implies a significant treatment effect in the ITT population at a pre-specified significance level unless, relative to the comparator, the test drug is poisonous to the non-completers as documented in their medical records. Applications of the GCC analysis are illustrated using literature data, and its properties and limits are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozluk, M.J.; Vijay, D.K.
Postulated catastrophic rupture of high-energy piping systems is the fundamental criterion used for the safety design basis of both light and heavy water nuclear generating stations. Historically, the criterion has been applied by assuming a nonmechanistic instantaneous double-ended guillotine rupture of the largest diameter pipes inside of containment. Nonmechanistic, meaning that the assumption of an instantaneous guillotine rupture has not been based on stresses in the pipe, failure mechanisms, toughness of the piping material, nor the dynamics of the ruptured pipe ends as they separate. This postulated instantaneous double-ended guillotine rupture of a pipe was a convenient simplifying assumption that resulted in a conservative accident scenario. This conservative accident scenario has now become entrenched as the design basis accident for: containment design, shutdown system design, emergency fuel cooling systems design, and to establish environmental qualification temperature and pressure conditions. The requirement to address dynamic effects associated with the postulated pipe rupture subsequently evolved. The dynamic effects include: potential missiles, pipe whipping, blowdown jets, and thermal-hydraulic transients. Recent advances in fracture mechanics research have demonstrated that certain pipes under specific conditions cannot crack in ways that result in an instantaneous guillotine rupture. Canadian utilities are now using mechanistic fracture mechanics and leak-before-break assessments on a case-by-case basis, in limited applications, to support licensing cases which seek exemption from the need to consider the various dynamic effects associated with postulated instantaneous catastrophic rupture of high-energy piping systems inside and outside of containment.
Cognitive neuropsychology and its vicissitudes: The fate of Caramazza's axioms.
Shallice, Tim
2015-01-01
Cognitive neuropsychology is characterized as the discipline in which one draws conclusions about the organization of the normal cognitive systems from the behaviour of brain-damaged individuals. In a series of papers, Caramazza, later in collaboration with McCloskey, put forward four assumptions as the bridge principles for making such inferences. Four potential pitfalls, one for each axiom, are discussed with respect to the use of single-case methods. Two of the pitfalls also apply to case series and group study procedures, and the other two are held to be indirectly testable or avoidable. Moreover, four other pitfalls are held to apply to case series or group study methods. It is held that inferences from single-case procedures may profitably be supported or rejected using case series/group study methods, but also that analogous support needs to be given in the other direction for functionally based case series or group studies. It is argued that at least six types of neuropsychological method are valuable for extrapolation to theories of the normal cognitive system but that the single- or multiple-case study remains a critical part of cognitive neuropsychology's methods.
Structural interactions in ionic liquids linked to higher-order Poisson-Boltzmann equations
NASA Astrophysics Data System (ADS)
Blossey, R.; Maggs, A. C.; Podgornik, R.
2017-06-01
We present a derivation of generalized Poisson-Boltzmann equations starting from classical theories of binary fluid mixtures, employing an approach based on the Legendre transform as recently applied to the case of local descriptions of the fluid free energy. Under specific symmetry assumptions, and in the linearized regime, the Poisson-Boltzmann equation reduces to a phenomenological equation introduced by Bazant et al. [Phys. Rev. Lett. 106, 046102 (2011)], 10.1103/PhysRevLett.106.046102, whereby the structuring near the surface is determined by bulk coefficients.
NASA Astrophysics Data System (ADS)
Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.
2007-11-01
In this chapter a potential problem with application of Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of VAR, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases when the stability condition is violated the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
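A direct way to verify the stability condition is to form the companion matrix of the estimated VAR coefficients and check that its spectral radius is below one. The sketch below does this for made-up coefficient matrices; in practice they would come from the VAR model fitted to the EEG channels.

```python
import numpy as np

# Stability check for an estimated VAR(p): stack the lag coefficient matrices
# into companion form and verify that all eigenvalues lie strictly inside the
# unit circle. The two coefficient matrices below are made-up examples.

def is_stable(coef_mats):
    """coef_mats: list of (k x k) lag coefficient matrices A_1, ..., A_p."""
    p, k = len(coef_mats), coef_mats[0].shape[0]
    companion = np.zeros((k * p, k * p))
    companion[:k, :] = np.hstack(coef_mats)
    companion[k:, :-k] = np.eye(k * (p - 1))
    radius = np.abs(np.linalg.eigvals(companion)).max()
    return radius < 1.0, radius

A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.6, 0.0], [0.2, 0.5]])     # sizeable lag-2 terms

stable, radius = is_stable([A1, A2])
print(f"spectral radius of companion matrix: {radius:.3f} -> stable: {stable}")
```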
A Framework for Designing Scaffolds That Improve Motivation and Cognition
Belland, Brian R.; Kim, ChanMin; Hannafin, Michael J.
2013-01-01
A problematic, yet common, assumption among educational researchers is that when teachers provide authentic, problem-based experiences, students will automatically be engaged. Evidence indicates that this is often not the case. In this article, we discuss (a) problems with ignoring motivation in the design of learning environments, (b) problem-based learning and scaffolding as one way to help, (c) how scaffolding has strayed from what was originally equal parts motivational and cognitive support, and (d) a conceptual framework for the design of scaffolds that can enhance motivation as well as cognitive outcomes. We propose guidelines for the design of computer-based scaffolds to promote motivation and engagement while students are solving authentic problems. Remaining questions and suggestions for future research are then discussed. PMID:24273351
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
NASA Astrophysics Data System (ADS)
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
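Purely as an illustration of the probabilistic-sampling layer of such a model, the sketch below draws asteroid diameter, density and entry velocity from assumed ranges and summarizes the resulting impact-energy distribution. The parameter ranges are placeholders, and none of the entry, breakup or damage physics of the ERA model is represented.

```python
import numpy as np

# Toy illustration of probabilistic sampling of asteroid properties propagated
# through a simple kinetic-energy calculation. Parameter ranges are placeholders;
# the ERA model's entry, breakup and damage physics are not represented here.

rng = np.random.default_rng(42)
n = 100_000

diameter = rng.uniform(20.0, 300.0, n)           # m
density = rng.uniform(1500.0, 3500.0, n)         # kg/m^3
velocity = rng.uniform(11_000.0, 30_000.0, n)    # m/s

mass = density * (np.pi / 6.0) * diameter**3     # spherical-body mass, kg
energy_mt = 0.5 * mass * velocity**2 / 4.184e15  # impact energy in megatons TNT

print(f"median impact energy: {np.median(energy_mt):.1f} Mt")
print(f"95th percentile:      {np.percentile(energy_mt, 95):.0f} Mt")
```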
Observing the tabarru' rate in a family takaful
NASA Astrophysics Data System (ADS)
Ismail, Hamizun bin
2013-04-01
Takaful system has a built-in mechanism to counter any over-pricing policies of the insurance companies because whatever may be the premium charged, the surplus would normally go back to the participants in proportion to their contributions. In contrast to a conventional insurance company, insurance surplus is not supposed to be a source of return for a takaful company. Any surplus that is a result of overpricing or over-charging is required to be returned back to takaful participants. Similarly, in case of under-pricing, policyholders may be asked to meet any deficit or negative difference between the policyholders' contribution and the actual claims, benefits and compensation. The objective of this study is to measure the efficacy of a family takaful contract through a simple actuarial model based on deterministic survival assumption. In addition, a linear tabarru' rate is introduced. The results show that the linear assumption on the tabarru' rate has an advantage over the flat rate as far as the risk of the prospective loss is concerned.
Source biases in midlatitude magnetotelluric transfer functions due to Pc3-4 geomagnetic pulsations
NASA Astrophysics Data System (ADS)
Murphy, Benjamin S.; Egbert, Gary D.
2018-01-01
The magnetotelluric (MT) method for imaging the electrical conductivity structure of the Earth is based on the assumption that source magnetic fields can be considered quasi-uniform, such that the spatial scale of the inducing source is much larger than the intrinsic length scale of the electromagnetic induction process (the skin depth). Here, we show using EarthScope MT data that short spatial scale source magnetic fields from geomagnetic pulsations (Pc's) can violate this fundamental assumption. Over resistive regions of the Earth, the skin depth can be comparable to the short meridional range of Pc3-4 disturbances that are generated by geomagnetic field-line resonances (FLRs). In such cases, Pc's can introduce narrow-band bias in MT transfer function estimates at FLR eigenperiods (~10-100 s). Although it appears unlikely that these biases will be a significant problem for data inversions, further study is necessary to understand the conditions under which they may distort inverse solutions.
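The skin-depth comparison at the heart of this argument is easy to quantify: for a uniform half-space, delta = sqrt(2*rho/(mu0*omega)), roughly 503*sqrt(rho*T) metres for resistivity rho in ohm-m and period T in seconds. The sketch below evaluates this in the Pc3-4 band for illustrative conductive and resistive cases; the specific resistivity values are assumptions, not estimates from the EarthScope data.

```python
import numpy as np

# Electromagnetic skin depth for a uniform half-space, delta = sqrt(2*rho/(mu0*omega)),
# equivalent to the familiar approximation delta ~ 503 * sqrt(rho * T) metres
# (rho in ohm-m, period T in s). The resistivities below are illustrative of
# conductive versus resistive crust, not values estimated from the data.

MU0 = 4e-7 * np.pi

def skin_depth_m(resistivity_ohm_m, period_s):
    omega = 2.0 * np.pi / period_s
    return np.sqrt(2.0 * resistivity_ohm_m / (MU0 * omega))

for rho in (10.0, 1000.0):               # ohm-m: conductive vs resistive Earth
    for T in (10.0, 100.0):              # s: Pc3-4 / FLR band
        print(f"rho = {rho:6.0f} ohm-m, T = {T:5.0f} s -> "
              f"skin depth ~ {skin_depth_m(rho, T) / 1000.0:6.0f} km")
```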
Revisiting the Cassandra syndrome; the changing climate of coral reef research
NASA Astrophysics Data System (ADS)
Maynard, J. A.; Baird, A. H.; Pratchett, M. S.
2008-12-01
Climate change will be with us for decades, even with significant reductions in emissions. Therefore, predictions made with respect to climate change impacts on coral reefs need to be highly defensible to ensure credibility over the timeframes this issue demands. If not, a Cassandra syndrome could be created whereby future, better-supported predictions of the fate of reefs are neither heard nor acted upon. Herein, popularising predictions based on essentially untested assumptions regarding reefs and their capacity to cope with future climate change is questioned. Some of these assumptions include that: all corals live close to their thermal limits, corals cannot adapt/acclimatize to rapid rates of change, physiological trade-offs resulting from ocean acidification will lead to reduced fecundity, and that climate-induced coral loss leads to widespread fisheries collapse. We argue that, while there is a place for popularising worst-case scenarios, the coral reef crisis has been effectively communicated and, though this communication should be sustained, efforts should now focus on addressing critical knowledge gaps.
Rassen, Jeremy A.; Brookhart, M. Alan; Glynn, Robert J.; Mittleman, Murray A.; Schneeweiss, Sebastian
2010-01-01
The gold standard of study design for treatment evaluation is widely acknowledged to be the randomized controlled trial (RCT). Trials allow for the estimation of causal effect by randomly assigning participants either to an intervention or comparison group; through the assumption of “exchangeability” between groups, comparing the outcomes will yield an estimate of causal effect. In the many cases where RCTs are impractical or unethical, instrumental variable (IV) analysis offers a nonexperimental alternative based on many of the same principles. IV analysis relies on finding a naturally varying phenomenon, related to treatment but not to outcome except through the effect of treatment itself, and then using this phenomenon as a proxy for the confounded treatment variable. This article demonstrates how IV analysis arises from an analogous but potentially impossible RCT design, and outlines the assumptions necessary for valid estimation. It gives examples of instruments used in clinical epidemiology and concludes with an outline on estimation of effects. PMID:19356901
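A minimal two-stage least squares sketch on simulated data, illustrating the logic described above: the instrument is related to treatment but affects the outcome only through treatment, so the ratio of the reduced-form slope to the first-stage slope recovers the causal effect that a naive regression misses. The data-generating values (true effect 2.0, confounding and instrument strengths) are assumptions chosen only for illustration.

```python
# Instrumental-variable (Wald/2SLS) estimate on simulated confounded data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                 # unmeasured confounder
z = rng.normal(size=n)                 # instrument (e.g., prescribing preference; assumed)
treatment = 0.8 * z + 1.0 * u + rng.normal(size=n)
outcome = 2.0 * treatment - 1.5 * u + rng.normal(size=n)   # true causal effect = 2.0

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

naive = ols_slope(treatment, outcome)          # biased by the confounder u
stage1 = ols_slope(z, treatment)               # first stage: treatment ~ instrument
iv = ols_slope(z, outcome) / stage1            # reduced form / first stage
print(f"naive OLS estimate: {naive:.2f}  (biased)")
print(f"IV (2SLS) estimate: {iv:.2f}  (close to the true 2.0)")
```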
Macklin, R
2003-01-01
Gillon is correct that the four principles provide a sound and useful way of analysing moral dilemmas. As he observes, the approach using these principles does not provide a unique solution to dilemmas. This can be illustrated by alternatives to Gillon's own analysis of the four case scenarios. In the first scenario, a different set of factual assumptions could yield a different conclusion about what is required by the principle of beneficence. In the second scenario, although Gillon's conclusion is correct, what is open to question is his claim that what society regards as the child's best interest determines what really is in the child's best interest. The third scenario shows how it may be reasonable for the principle of beneficence to take precedence over autonomy in certain circumstances, yet like the first scenario, the ethical conclusion relies on a set of empirical assumptions and predictions of what is likely to occur. The fourth scenario illustrates how one can draw different conclusions based on the importance given to the precautionary principle. PMID:14519836
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (that is, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
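For context, a sketch of the classical feasible-direction construction whose regularity requirement the paper relaxes: the flow x' = -(I - J^T (J J^T)^-1 J) grad f stays on the constraint surface, but the (J J^T)^-1 factor is exactly what becomes singular when constraint gradients are linearly dependent. This is not the paper's new projection matrix; the objective and constraint below are made-up examples.

```python
# Classical projected-gradient flow for equality-constrained minimisation.
import numpy as np

def f_grad(x):                       # f(x) = x0^2 + 2*x1^2 (assumed objective)
    return np.array([2 * x[0], 4 * x[1]])

def h(x):                            # equality constraint h(x) = x0 + x1 - 1 = 0 (assumed)
    return np.array([x[0] + x[1] - 1.0])

def h_jac(x):
    return np.array([[1.0, 1.0]])

x = np.array([0.7, 0.3])             # start on the constraint surface
dt = 0.01
for _ in range(2000):                # forward-Euler integration of the flow
    J = h_jac(x)
    P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)   # projector onto null(J)
    x = x - dt * P @ f_grad(x)

print("x* ~", x, " h(x*) ~", h(x))   # analytic minimiser on this line: (2/3, 1/3)
```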
Haberl, Helmut
2013-07-01
The notion that biomass combustion is carbon neutral vis-a-vis the atmosphere because carbon released during biomass combustion is absorbed during plant regrowth is inherent in the greenhouse gas accounting rules in many regulations and conventions. But this 'carbon neutrality' assumption of bioenergy is an oversimplification that can result in major flaws in emission accounting; it may even result in policies that increase, instead of reduce, overall greenhouse gas emissions. This commentary discusses the systemic feedbacks and ecosystem succession/land-use history issues ignored by the carbon neutrality assumption. Based on recent literature, three cases are elaborated which show that the C balance of bioenergy may range from highly beneficial to strongly detrimental, depending on the plants grown, the land used (including its land-use history) as well as the fossil energy replaced. The article concludes by proposing the concept of GHG cost curves of bioenergy as a means for optimizing the climate benefits of bioenergy policies.
Risk-Screening Environmental Indicators (RSEI)
EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to only be compared to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
Operating a terrestrial Internet router onboard and alongside a small satellite
NASA Astrophysics Data System (ADS)
Wood, L.; da Silva Curiel, A.; Ivancic, W.; Hodgson, D.; Shell, D.; Jackson, C.; Stewart, D.
2006-07-01
After twenty months of flying, testing and demonstrating a Cisco mobile access router, originally designed for terrestrial use, onboard the low-Earth-orbiting UK-DMC satellite as part of a larger merged ground/space IP-based internetwork, we use our experience to examine the benefits and drawbacks of integration and standards reuse for small satellite missions. Benefits include ease of operation and the ability to leverage existing systems and infrastructure designed for general use with a large set of latent capabilities to draw on when needed, as well as the familiarity that comes from reuse of existing, known, and well-understood security and operational models. Drawbacks include cases where integration work was needed to bridge the gaps in assumptions between different systems, and where performance considerations outweighed the benefits of reuse of pre-existing file transfer protocols. We find similarities with the terrestrial IP networks whose technologies have been taken to small satellites—and also some significant differences between the two in operational models and assumptions that must be borne in mind.
Phase I Design for Completely or Partially Ordered Treatment Schedules
Wages, Nolan A.; O’Quigley, John; Conaway, Mark R.
2013-01-01
The majority of methods for the design of Phase I trials in oncology are based upon a single course of therapy, yet in actual practice it may be the case that there is more than one treatment schedule for any given dose. Therefore, the probability of observing a dose-limiting toxicity (DLT) may depend upon both the total amount of the dose given, as well as the frequency with which it is administered. The objective of the study then becomes to find an acceptable combination of both dose and schedule. Past literature on designing these trials has entailed the assumption that toxicity increases monotonically with both dose and schedule. In this article, we relax this assumption for schedules and present a dose-schedule finding design that can be generalized to situations in which we know the ordering between all schedules and those in which we do not. We present simulation results that compare our method to other suggested dose-schedule finding methodology. PMID:24114957
NASA Astrophysics Data System (ADS)
Udomsungworagul, A.; Charnsethikul, P.
2018-03-01
This article introduces a methodology for solving large-scale two-phase linear programming, with a case of multiple-time-period animal diet problems under uncertainty in both the nutrient content of raw materials and finished-product demand. The model adds the assumptions that multiple product formulas may be manufactured in the same time period and that raw-material and finished-product inventory may be held. Dantzig-Wolfe decomposition, Benders decomposition and column generation techniques were combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.
An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.
Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L
2017-08-01
The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that are dependent on accurate estimation of numerous input variables. In many cases, however, there is no data available to determine input values and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporation of a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these research gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.
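A hedged sketch of the sensitivity issue the abstract raises: the benefit-cost ratio of an education programme depends strongly on the spillover factor, the duration of the behaviour change, and the assumed risk reduction. Apart from the 37.4% behaviour-change figure quoted in the abstract, every numeric input below (programme cost, illness cost, baseline risk, and the ranges scanned) is an illustrative assumption, not a value from the EFNEP analysis.

```python
# Scan a benefit-cost ratio over the three contested inputs named in the abstract.
from itertools import product

participants = 1_000
behaviour_change = 0.374          # share improving behaviour (reported in the abstract)
cost_per_participant = 300.0      # assumed programme cost
annual_illness_risk = 0.25        # assumed baseline yearly risk of foodborne illness
cost_per_illness = 1_500.0        # assumed societal cost per illness episode

def bcr(spillover, years_effective, risk_reduction):
    protected = participants * behaviour_change * spillover
    benefits = (protected * years_effective * annual_illness_risk
                * risk_reduction * cost_per_illness)
    costs = participants * cost_per_participant
    return benefits / costs

for spill, yrs, rr in product((1.0, 2.5), (1, 5), (0.1, 0.5)):
    print(f"spillover={spill:3.1f} years={yrs} risk_reduction={rr:.0%}"
          f" -> BCR={bcr(spill, yrs, rr):5.2f}")
```

Even with made-up inputs the ratio swings by more than an order of magnitude across these ranges, which is the sensitivity point the abstract makes.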
Rocca, Elena; Andersen, Fredrik
2017-08-14
Scientific risk evaluations are constructed by specific evidence, value judgements and biological background assumptions. The latter are the framework-setting suppositions we apply in order to understand some new phenomenon. That background assumptions co-determine choice of methodology, data interpretation, and choice of relevant evidence is an uncontroversial claim in modern basic science. Furthermore, it is commonly accepted that, unless explicated, disagreements in background assumptions can lead to misunderstanding as well as miscommunication. Here, we extend the discussion on background assumptions from basic science to the debate over genetically modified (GM) plants risk assessment. In this realm, while the different political, social and economic values are often mentioned, the identity and role of background assumptions at play are rarely examined. We use an example from the debate over risk assessment of stacked genetically modified plants (GM stacks), obtained by applying conventional breeding techniques to GM plants. There are two main regulatory practices of GM stacks: (i) regulate as conventional hybrids and (ii) regulate as new GM plants. We analyzed eight papers representative of these positions and found that, in all cases, additional premises are needed to reach the stated conclusions. We suggest that these premises play the role of biological background assumptions and argue that the most effective way toward a unified framework for risk analysis and regulation of GM stacks is by explicating and examining the biological background assumptions of each position. Once explicated, it is possible to either evaluate which background assumptions best reflect contemporary biological knowledge, or to apply Douglas' 'inductive risk' argument.
ERIC Educational Resources Information Center
Toma, J. Douglas
This paper examines whether the social science-based typology of Yvonne Lincoln and Egon Guba (1994), in which social science scholars are divided into positivist, postpositivist, critical, and constructivist paradigms based on ontological, epistemological, and methodological assumptions in the discipline, can be adapted to the academic discipline…
Early Retirement Is Not the Cat's Meow. The Endpaper.
ERIC Educational Resources Information Center
Ferguson, Wayne S.
1982-01-01
Early retirement plans are perceived as being beneficial to school staff and financially advantageous to schools. Four out of the five assumptions on which these perceptions are based are incorrect. The one correct assumption is that early retirement will make affirmative action programs move ahead more rapidly. The incorrect assumptions are: (1)…
ERIC Educational Resources Information Center
Baskas, Richard S.
2011-01-01
The purpose of this study is to examine Knowles' theory of andragogy and his six assumptions of how adults learn while providing evidence to support two of his assumptions based on the theory of andragogy. As no single theory explains how adults learn, it can best be assumed that adults learn through the accumulation of formal and informal…
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
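An illustrative toy comparison in the spirit of the study (not its actual fishery game or models): a model-based constant-escapement rule derived from a logistic model versus an ad-hoc fixed-harvest-fraction heuristic. The growth rate, carrying capacity, noise level and the 60%-per-year heuristic are all assumptions chosen only to show how an overaggressive rule of thumb can collapse the stock.

```python
# Toy logistic fishery: model-based escapement rule vs. a fixed-fraction heuristic.
import numpy as np

r, K, years = 0.8, 1000.0, 50          # assumed growth rate, carrying capacity, horizon
rng = np.random.default_rng(1)

def run(policy):
    n, total_catch = 0.6 * K, 0.0
    for _ in range(years):
        catch = min(policy(n), n)
        n -= catch
        n += r * n * (1 - n / K) + rng.normal(0, 10)   # logistic growth + noise
        n = max(n, 0.0)
        total_catch += catch
    return total_catch

msy_escapement = K / 2                                  # model-based: harvest down to K/2
model_policy = lambda n: max(n - msy_escapement, 0.0)
heuristic_policy = lambda n: 0.6 * n                    # "take 60% every year" rule of thumb

print("model-based total catch:", round(run(model_policy)))
print("heuristic   total catch:", round(run(heuristic_policy)))
```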
Franco, Jennifer; Levidow, Les; Fig, David; Goldfarb, Lucia; Hönicke, Mireille; Mendonça, Maria Luisa
2010-01-01
The biofuel project is an agro-industrial development and politically contested policy process where governments increasingly become global actors. European Union (EU) biofuels policy rests upon arguments about societal benefits of three main kinds - namely, environmental protection (especially greenhouse gas savings), energy security and rural development, especially in the global South. Each argument involves optimistic assumptions about what the putative benefits mean and how they can be fulfilled. After examining those assumptions, we compare them with experiences in three countries - Germany, Brazil and Mozambique - which have various links to each other and to the EU through biofuels. In those case studies, there are fundamental contradictions between EU policy assumptions and practices in the real world, involving frictional encounters among biofuel promoters as well as with people adversely affected. Such contradictions may intensify with the future rise of biofuels and so warrant systematic attention.
Modeling of Heat Transfer in Rooms in the Modelica "Buildings" Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Zuo, Wangda; Nouidui, Thierry Stephane
This paper describes the implementation of the room heat transfer model in the free open-source Modelica "Buildings" library. The model can be used as a single room or to compose a multizone building model. We discuss how the model is decomposed into submodels for the individual heat transfer phenomena. We also discuss the main physical assumptions. The room model can be parameterized to use different modeling assumptions, leading to linear or non-linear differential algebraic systems of equations. We present numerical experiments that show how these assumptions affect computing time and accuracy for selected cases of the ANSI/ASHRAE Standard 140-2007 envelope validation tests.
On the time-homogeneous Ornstein-Uhlenbeck process in the foreign exchange rates
NASA Astrophysics Data System (ADS)
da Fonseca, Regina C. B.; Matsushita, Raul Y.; de Castro, Márcio T.; Figueiredo, Annibal
2015-10-01
Since Gaussianity and stationarity assumptions cannot be fulfilled by financial data, the time-homogeneous Ornstein-Uhlenbeck (THOU) process was introduced as a candidate model to describe time series of financial returns [1]. It is an Ornstein-Uhlenbeck (OU) process in which these assumptions are replaced by linearity and time-homogeneity. We employ the OU and THOU processes to analyze daily foreign exchange rates against the US dollar. We confirm that the OU process does not fit the data, while in most cases the patterns of the first four cumulants of the data can be described by the THOU process. However, there are some exceptions in which the data do not follow the linearity or time-homogeneity assumptions.
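A minimal sketch of the Gaussian, stationary Ornstein-Uhlenbeck baseline that the abstract reports does not fit FX returns: simulate dX = -theta*(X - mu)*dt + sigma*dW with an exact AR(1) discretisation and inspect the first four cumulants, which for this baseline imply near-zero skewness and excess kurtosis. Parameter values are assumed for illustration; the THOU generalisation itself is not implemented here.

```python
# Simulate an OU process and summarise the cumulant-related moments of its increments.
import numpy as np
from scipy import stats

theta, mu, sigma = 2.0, 0.0, 0.5     # mean-reversion speed, long-run mean, volatility (assumed)
dt, n_steps = 1.0 / 252, 100_000     # daily steps (assumed), long sample
rng = np.random.default_rng(42)

x = np.empty(n_steps)
x[0] = mu
a = np.exp(-theta * dt)                                  # exact AR(1) discretisation
s = sigma * np.sqrt((1 - a**2) / (2 * theta))
for t in range(1, n_steps):
    x[t] = mu + a * (x[t - 1] - mu) + s * rng.normal()

returns = np.diff(x)
print("mean     :", returns.mean())
print("variance :", returns.var())
print("skewness :", stats.skew(returns))       # ~0 for the Gaussian OU baseline
print("kurtosis :", stats.kurtosis(returns))   # ~0 excess kurtosis, unlike fat-tailed FX data
```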
Ecological resilience in lakes and the conjunction fallacy.
Spears, Bryan M; Futter, Martyn N; Jeppesen, Erik; Huser, Brian J; Ives, Stephen; Davidson, Thomas A; Adrian, Rita; Angeler, David G; Burthe, Sarah J; Carvalho, Laurence; Daunt, Francis; Gsell, Alena S; Hessen, Dag O; Janssen, Annette B G; Mackay, Eleanor B; May, Linda; Moorhouse, Heather; Olsen, Saara; Søndergaard, Martin; Woods, Helen; Thackeray, Stephen J
2017-11-01
There is a pressing need to apply stability and resilience theory to environmental management to restore degraded ecosystems effectively and to mitigate the effects of impending environmental change. Lakes represent excellent model case studies in this respect and have been used widely to demonstrate theories of ecological stability and resilience that are needed to underpin preventative management approaches. However, we argue that this approach is not yet fully developed because the pursuit of empirical evidence to underpin such theoretically grounded management continues in the absence of an objective probability framework. This has blurred the lines between intuitive logic (based on the elementary principles of probability) and extensional logic (based on assumption and belief) in this field.
Yaesoubi, Reza; Trotter, Caroline; Colijn, Caroline; Yaesoubi, Maziar; Colombini, Anaïs; Resch, Stephen; Kristiansen, Paul A; LaForce, F Marc; Cohen, Ted
2018-01-01
The introduction of a conjugate vaccine for serogroup A Neisseria meningitidis has dramatically reduced disease in the African meningitis belt. In this context, important questions remain about the performance of different vaccine policies that target remaining serogroups. Here, we estimate the health impact and cost associated with several alternative vaccination policies in Burkina Faso. We developed and calibrated a mathematical model of meningococcal transmission to project the disability-adjusted life years (DALYs) averted and costs associated with the current Base policy (serogroup A conjugate vaccination at 9 months, as part of the Expanded Program on Immunization [EPI], plus district-specific reactive vaccination campaigns using polyvalent meningococcal polysaccharide [PMP] vaccine in response to outbreaks) and three alternative policies: (1) Base Prime: novel polyvalent meningococcal conjugate (PMC) vaccine replaces the serogroup A conjugate in EPI and is also used in reactive campaigns; (2) Prevention 1: PMC used in EPI and in a nationwide catch-up campaign for 1-18-year-olds; and (3) Prevention 2: Prevention 1, except the nationwide campaign includes individuals up to 29 years old. Over a 30-year simulation period, Prevention 2 would avert 78% of the meningococcal cases (95% prediction interval: 63%-90%) expected under the Base policy if serogroup A is not replaced by remaining serogroups after elimination, and would avert 87% (77%-93%) of meningococcal cases if complete strain replacement occurs. Compared to the Base policy and at the PMC vaccine price of US$4 per dose, strategies that use PMC vaccine (i.e., Base Prime and Preventions 1 and 2) are expected to be cost saving if strain replacement occurs, and would cost US$51 (-US$236, US$490), US$188 (-US$97, US$626), and US$246 (-US$53, US$703) per DALY averted, respectively, if strain replacement does not occur. An important potential limitation of our study is the simplifying assumption that all circulating meningococcal serogroups can be aggregated into a single group; while this assumption is critical for model tractability, it would compromise the insights derived from our model if the effectiveness of the vaccine differs markedly between serogroups or if there are complex between-serogroup interactions that influence the frequency and magnitude of future meningitis epidemics. Our results suggest that a vaccination strategy that includes a catch-up nationwide immunization campaign in young adults with a PMC vaccine and the addition of this new vaccine into EPI is cost-effective and would avert a substantial portion of meningococcal cases expected under the current World Health Organization-recommended strategy of reactive vaccination. This analysis is limited to Burkina Faso and assumes that polyvalent vaccines offer equal protection against all meningococcal serogroups; further studies are needed to evaluate the robustness of this assumption and applicability for other countries in the meningitis belt.
On the combinatorics of sparsification.
Huang, Fenix Wd; Reidys, Christian M
2012-10-22
We study the sparsification of dynamic programming-based folding algorithms for RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key for quantifying sparsification is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows, by means of probabilities of irreducible sub-structures, to obtain the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in the case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of the mfe-folding of RNA secondary structures with a simplified loop-based energy model, our results imply that sparsification provides a significant, constant improvement of 91% (theory), to be compared to a 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings however reveal that the O(n) improvement is not correct. The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1) of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n²). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally we observe that the effect of sparsification is sensitive to the employed energy model.
State Politics and Education: An Examination of Selected Multiple-State Case Studies.
ERIC Educational Resources Information Center
Burlingame, Martin; Geske, Terry G.
1979-01-01
Reviews the multiple-state case study literature, highlights some findings, discusses several methodological issues, and concludes with suggestions for possible research agendas. Urges students and researchers to be more actively critical of the assumptions and findings of these studies. (Author/IRT)
Allen, Timothy Craig; Stafford, Mehary; Liang, Bryan A
2014-04-01
This study examines whether the assumptions that pathologists understand the medical malpractice negligence rule and have a clear single standard of care are reasonable. Two hundred eighty-one Texas academic pathologists and trainees were presented 10 actual pathology malpractice cases from publicly available sources, representing the tort system's signal. Of the respondents, 55.52% were trainees, and 44.48% were pathology faculty. Only in two cases did more than 50% of respondents correctly identify the behavior of pathologists as defined by legal outcomes. In only half of the cases did more than 50% of pathologists concur with the jury verdict. This study provides further evidence that physicians do not understand the legal rule of negligence. Pathologists have a poor understanding of negligence and cannot accurately predict a jury verdict. There is significant divergence from the single standard of care assumption. Alternative methods to provide appropriate compensation and to establish physician accountability should be explored. Additional education about medical negligence is needed.
Stability and perturbations of countable Markov maps
NASA Astrophysics Data System (ADS)
Jordan, Thomas; Munday, Sara; Sahlsten, Tuomas
2018-04-01
Let T and T_ε, ε > 0, be countable Markov maps such that the branches of T_ε converge pointwise to the branches of T as ε → 0. We study the stability of various quantities measuring the singularity (dimension, Hölder exponent, etc.) of the topological conjugacy between T_ε and T when ε → 0. This is a well-understood problem for maps with finitely many branches, and the quantities are stable for small ε, that is, they converge to their expected values as ε → 0. For the infinite-branch case their stability might be expected to fail, but we prove that even in the infinite-branch case the quantity is stable under some natural regularity assumptions on T_ε and T (under which, for instance, the Hölder exponent of the conjugacy fails to be stable). Our assumptions apply, for example, in the case of the Gauss map, various Lüroth maps and accelerated Manneville-Pomeau maps when varying the parameter α. For the proof we introduce a mass transportation method from the cusp that allows us to exploit thermodynamical ideas from the finite-branch case. Dedicated to the memory of Bernd O Stratmann
NASA Astrophysics Data System (ADS)
Melas, Evangelos
2011-07-01
The 3+1 (canonical) decomposition of all geometries admitting two-dimensional space-like surfaces is exhibited as a generalization of a previous work. A proposal, consisting of a specific re-normalization Assumption and an accompanying Requirement, which has been put forward in the 2+1 case is now generalized to 3+1 dimensions. This enables the canonical quantization of these geometries through a generalization of Kuchař's quantization scheme in the case of infinite degrees of freedom. The resulting Wheeler-deWitt equation is based on a re-normalized manifold parameterized by three smooth scalar functionals. The entire space of solutions to this equation is analytically given, a fact that is entirely new to the present case. This is made possible by exploiting the freedom left by the imposition of the Requirement and contained in the third functional.
[The costs of altruism in organ donation case analysis].
Netza Cardoso, Cruz; Casas Martínez, María Luz Lina; Ramírez García, Hugo
2010-01-01
Three main assumptions were considered for the structure of donation programs during the decade of the sixties: the first states that people, through altruism, would feel committed to the affected and therefore incentivized to donate. The second states that the human body cannot be valued in mercantile terms; therefore organ donation should be done free of any charge. The last one states that donation does not represent any type of harm or damage for the donor. Today, more than four decades after their instauration, these three assumptions have been violated and modified due to the way in which they were socialized through the donation protocols. Altruism did not seem to be as generalized as expected, and organ commerce has already gone beyond the legislative frameworks that intended to prevent it; one example is the case of India. In this paper we analyze--through two objectives--the repercussions and impact observed in four cases registered in the National Institute of Cardiology (Instituto Nacional de Cardiología) "Ignacio Chávez" in Mexico City. First objective: to describe the economic costs that the altruism-based donation protocol caused for the participant families. Second objective: to reflect on other costs that affected donors due to organ donation. It was found in the reviewed cases that repercussions can go beyond economic issues; labor-related, emotional and ethical repercussions were also found, due to an undeniable sensation of reification that donors experience in view of the mechanization of the study protocol they undergo, especially when results are not optimal. We circumscribe this paper's analysis to living donors.
The Impact and Cost of Scaling up GeneXpert MTB/RIF in South Africa
Meyer-Rath, Gesine; Schnippel, Kathryn; Long, Lawrence; MacLeod, William; Sanne, Ian; Stevens, Wendy; Pillay, Sagie; Pillay, Yogan; Rosen, Sydney
2012-01-01
Objective We estimated the incremental cost and impact on diagnosis and treatment uptake of national rollout of Xpert MTB/RIF technology (Xpert) for the diagnosis of pulmonary TB above the cost of current guidelines for the years 2011 to 2016 in South Africa. Methods We parameterised a population-level decision model with data from national-level TB databases (n = 199,511) and implementation studies. The model follows cohorts of TB suspects from diagnosis to treatment under current diagnostic guidelines or an algorithm that includes Xpert. Assumptions include the number of TB suspects, symptom prevalence of 5.5%, annual suspect growth rate of 10%, and 2010 public-sector salaries and drug and service delivery costs. Xpert test costs are based on data from an in-country pilot evaluation and assumptions about when global volumes allowing cartridge discounts will be reached. Results At full scale, Xpert will increase the number of TB cases diagnosed per year by 30%–37% and the number of MDR-TB cases diagnosed by 69%–71%. It will diagnose 81% of patients after the first visit, compared to 46% currently. The cost of TB diagnosis per suspect will increase by 55% to USD 60–61 and the cost of diagnosis and treatment per TB case treated by 8% to USD 797–873. The incremental capital cost of the Xpert scale-up will be USD 22 million and the incremental recurrent cost USD 287–316 million over six years. Conclusion Xpert will increase both the number of TB cases diagnosed and treated and the cost of TB diagnosis. These results do not include savings due to reduced transmission of TB as a result of earlier diagnosis and treatment initiation. PMID:22693561
NASA Astrophysics Data System (ADS)
Honarvar, M.; Lobo, J.; Mohareri, O.; Salcudean, S. E.; Rohling, R.
2015-05-01
To produce images of tissue elasticity, the vibro-elastography technique involves applying a steady-state multi-frequency vibration to tissue, estimating displacements from ultrasound echo data, and using the estimated displacements in an inverse elasticity problem with the shear modulus spatial distribution as the unknown. In order to fully solve the inverse problem, all three displacement components are required. However, using ultrasound, the axial component of the displacement is measured much more accurately than the other directions. Therefore, simplifying assumptions must be used in this case. Usually, the equations of motion are transformed into a Helmholtz equation by assuming tissue incompressibility and local homogeneity. The local homogeneity assumption causes significant imaging artifacts in areas of varying elasticity. In this paper, we remove the local homogeneity assumption. In particular we introduce a new finite element based direct inversion technique in which only the coupling terms in the equation of motion are ignored, so it can be used with only one component of the displacement. Both Cartesian and cylindrical coordinate systems are considered. The use of multi-frequency excitation also allows us to obtain multiple measurements and reduce artifacts in areas where the displacement of one frequency is close to zero. The proposed method was tested in simulations and experiments against a conventional approach in which local homogeneity is assumed. The results show significant improvements in elasticity imaging with the new method compared to previous methods that assume local homogeneity. For example, in simulations the contrast-to-noise ratio (CNR) for the region with a spherical inclusion increases from an average value of 1.5 to 17 when using the proposed method instead of the local inversion with the homogeneity assumption, and similarly in the prostate phantom experiment, the CNR improved from an average value of 1.6 to about 20.
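A one-dimensional sketch of the baseline "local homogeneity" direct inversion that the paper improves on: for time-harmonic shear motion, mu * laplacian(u) = -rho * omega^2 * u, so mu can be estimated pointwise as -rho*omega^2*u / laplacian(u) from a single displacement component. The synthetic displacement field and parameter values are assumptions; this is not the paper's finite-element, coupled-term formulation.

```python
# Pointwise algebraic inversion under the local homogeneity assumption (1-D illustration).
import numpy as np

rho, omega = 1000.0, 2 * np.pi * 100.0          # tissue density, 100 Hz excitation (assumed)
mu_true = 3000.0                                # homogeneous true shear modulus, Pa (assumed)
k = omega * np.sqrt(rho / mu_true)              # shear wavenumber

x = np.linspace(0, 0.05, 501)                   # 5 cm line, 0.1 mm sampling
dx = x[1] - x[0]
u = np.cos(k * x)                               # synthetic axial displacement, one frequency

lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2    # discrete second spatial derivative
mu_est = -rho * omega**2 * u[1:-1] / lap        # pointwise inversion

print("median estimated mu:", round(float(np.median(mu_est)), 1), "Pa (true 3000 Pa)")
# Near nodes of u the estimate blows up, one reason the paper uses multi-frequency
# excitation and a finite-element formulation instead of this pointwise ratio.
```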
Partitioning uncertainty in streamflow projections under nonstationary model conditions
NASA Astrophysics Data System (ADS)
Chawla, Ila; Mujumdar, P. P.
2018-02-01
Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contribution from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using analysis of variance (ANOVA) approach. Generally, most of the impact assessment studies are carried out with unchanging hydrologic model parameters in future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression based methodology is presented to obtain the hydrologic model parameters with changing land use and climate scenarios in future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set-up over the basin, under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to reduce in future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that model stationarity assumption and GCMs along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine stationarity assumption of models before considering them for future streamflow projections and segregate the contribution of various sources to the uncertainty.
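A small sketch of the ANOVA-style partitioning idea: the spread of a matrix of projections across GCMs and scenarios is decomposed into main-effect and interaction/residual sums of squares, and each component is reported as a share of the total variance. The projection values below are synthetic, not results from the Upper Ganga Basin study.

```python
# Two-way sums-of-squares decomposition of a projection matrix (one value per cell).
import numpy as np

# rows = 4 hypothetical GCMs, cols = 3 hypothetical emission scenarios
proj = np.array([[820., 790., 760.],
                 [900., 860., 830.],
                 [760., 750., 720.],
                 [880., 840., 800.]])

grand = proj.mean()
ss_total = ((proj - grand) ** 2).sum()
ss_gcm = proj.shape[1] * ((proj.mean(axis=1) - grand) ** 2).sum()
ss_scen = proj.shape[0] * ((proj.mean(axis=0) - grand) ** 2).sum()
ss_resid = ss_total - ss_gcm - ss_scen       # interaction + internal variability

for name, ss in [("GCM", ss_gcm), ("scenario", ss_scen), ("interaction/residual", ss_resid)]:
    print(f"{name:22s}: {100 * ss / ss_total:5.1f}% of total variance")
```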
Principles of assessing bacterial susceptibility to antibiotics using the agar diffusion method.
Bonev, Boyan; Hooper, James; Parisot, Judicaël
2008-06-01
The agar diffusion assay is one method for quantifying the ability of antibiotics to inhibit bacterial growth. Interpretation of results from this assay relies on model-dependent analysis, which is based on the assumption that antibiotics diffuse freely in the solid nutrient medium. In many cases, this assumption may be incorrect, which leads to significant deviations of the predicted behaviour from the experiment and to inaccurate assessment of bacterial susceptibility to antibiotics. We sought a theoretical description of the agar diffusion assay that takes into consideration loss of antibiotic during diffusion and provides higher accuracy of the MIC determined from the assay. We propose a new theoretical framework for analysis of agar diffusion assays. MIC was determined by this technique for a number of antibiotics and analysis was carried out using both the existing free diffusion and the new dissipative diffusion models. A theory for analysis of antibiotic diffusion in solid media is described, in which we consider possible interactions of the test antibiotic with the solid medium or partial antibiotic inactivation during diffusion. This is particularly relevant to the analysis of diffusion of hydrophobic or amphipathic compounds. The model is based on a generalized diffusion equation, which includes the existing theory as a special case and contains an additional, dissipative term. Analysis of agar diffusion experiments using the new model allows significantly more accurate interpretation of experimental results and determination of MICs. The model has more general validity and is applicable to analysis of other dissipative processes, for example to antigen diffusion and to calculations of substrate load in affinity purification.
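A one-dimensional finite-difference sketch of the contrast between the two models discussed above: free diffusion follows dC/dt = D * d2C/dx2, while the dissipative variant adds a first-order loss term -k*C representing antibiotic binding or inactivation in the medium, which shrinks the predicted inhibition zone. The diffusion coefficient, loss rate, MIC threshold and geometry are assumed illustrative values, not the paper's parameters.

```python
# Free vs. dissipative diffusion of an antibiotic along a 1-D strip of agar.
import numpy as np

D = 1.0e-9            # diffusion coefficient, m^2/s (assumed)
k = 1.0e-4            # first-order dissipation rate, 1/s (assumed; k=0 -> free diffusion)
L, nx = 0.03, 301     # 3 cm of agar, grid points
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D  # stable explicit time step
hours = 18

def simulate(k_loss):
    c = np.zeros(nx)
    c[0] = 1.0                       # well at x=0 held at unit concentration
    for _ in range(int(hours * 3600 / dt)):
        lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (D * lap - k_loss * c[1:-1])
        c[0] = 1.0                   # reservoir boundary condition
    return c

x_mm = np.linspace(0, L, nx) * 1e3
mic = 0.05                           # concentration at the inhibition edge (assumed)
for label, kk in (("free diffusion ", 0.0), ("with dissipation", k)):
    c = simulate(kk)
    radius = x_mm[c >= mic].max()
    print(f"{label}: inhibition-zone radius ~ {radius:4.1f} mm")
```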
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartmell, D.B.
1995-09-01
Based on US Department of Energy (DOE), Richland Operations Office (RL) review, specific areas of the Westinghouse Hanford Company (WHC) Transition Projects "Draft" Multi-Year Program Plan (MYPP) were revised in preparation for the RL approval ceremony on September 26, 1995. These changes were reviewed with the appropriate RL Project Manager. The changes have been incorporated into the MYPP electronic file, and hard copies replacing the "Draft" MYPP will be distributed after the formal signing. In addition to the comments received, a summary level schedule and outyear estimates for the K Basin deactivation beginning in FY 2001 have been included. The K Basin outyear waste data is nearing completion this week and will be incorporated. This exclusion was discussed with Mr. N.D. Moorer, RL, Facility Transition Program Support/Integration. The attached MYPP scope/schedule reflects the Integrated Target Case submitted in the April 1995 Activity Data Sheets (ADS) with the exception of B Plant and the Plutonium Finishing Plant (PFP). The B Plant assumption in FY 1997 reflects the planning case in the FY 1997 ADS with a shortfall of $5 million. PFP assumptions have been revised from the FY 1997 ADS based on the direction provided this past summer by DOE-Headquarters. This includes the acceleration of the polycube stabilization back to its originally planned completion date. Although the overall program repricing in FY 1996 allowed the scheduled acceleration to fall within the funding allocation, the FY 1997 total reflects a shortfall of $6 million.
Identification of the human factors contributing to maintenance failures in a petroleum operation.
Antonovsky, Ari; Pollock, Clare; Straker, Leon
2014-03-01
This research aimed to identify the most frequently occurring human factors contributing to maintenance-related failures within a petroleum industry organization. Commonality between failures will assist in understanding reliability in maintenance processes, thereby preventing accidents in high-hazard domains. Methods exist for understanding the human factors contributing to accidents. Their application in a maintenance context mainly has been advanced in aviation and nuclear power. Maintenance in the petroleum industry provides a different context for investigating the role that human factors play in influencing outcomes. It is therefore worth investigating the contributing human factors to improve our understanding of both human factors in reliability and the factors specific to this domain. Detailed analyses were conducted of maintenance-related failures (N = 38) in a petroleum company using structured interviews with maintenance technicians. The interview structure was based on the Human Factor Investigation Tool (HFIT), which in turn was based on Rasmussen's model of human malfunction. A mean of 9.5 factors per incident was identified across the cases investigated. The three most frequent human factors contributing to the maintenance failures were found to be assumption (79% of cases), design and maintenance (71%), and communication (66%). HFIT proved to be a useful instrument for identifying the pattern of human factors that recurred most frequently in maintenance-related failures. The high frequency of failures attributed to assumptions and communication demonstrated the importance of problem-solving abilities and organizational communication in a domain where maintenance personnel have a high degree of autonomy and a wide geographical distribution.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Hoffmann, Robert; Liebich, Robert
2018-01-01
This paper states a unique classification to understand the source of the subharmonic vibrations of gas foil bearing (GFB) systems, which is tested experimentally and numerically. The classification is based on two cases, where an isolated system is assumed: Case 1 considers a poorly balanced rotor, which results in increased displacement during operation and interacts with the nonlinear progressive structure. It is comparable to a Duffing oscillator. In contrast, for Case 2 a well/perfectly balanced rotor is assumed. Hence, the only source of nonlinear subharmonic whirling results from the fluid film self-excitation. Experimental tests with different unbalance levels and GFB modifications confirm these assumptions. Furthermore, simulations are able to predict the self-excitations and the synchronous and subharmonic resonances of the experimental test. The numerical model is based on a linearised eigenvalue problem. The GFB system uses linearised stiffness and damping parameters obtained by applying a perturbation method to the Reynolds equation. The nonlinear bump structure is simplified by a link-spring model. It includes Coulomb friction effects inside the elastic corrugated structure and captures the interaction between single bumps.
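A sketch of the Case 1 picture described above, in which an unbalance-forced rotor on a progressive (hardening) support behaves like a Duffing oscillator, m*x'' + c*x' + k1*x + k3*x^3 = mu*e*Omega^2*cos(Omega*t). All parameter values are assumed for illustration; this is not the paper's linearised gas-film and link-spring model.

```python
# Duffing-type response of an unbalance-forced mass on a hardening support.
import numpy as np

m, c, k1, k3 = 1.0, 5.0, 1.0e5, 5.0e11     # mass, damping, linear & cubic stiffness (assumed)
unbalance = 2.0e-4                          # m_u * e, kg*m (poorly balanced case, assumed)
omega = 250.0                               # rotor speed, rad/s (assumed)
dt, n = 1.0e-5, 400_000                     # time step, number of steps (4 s)

x, v = 0.0, 0.0
peaks = []
for i in range(n):
    t = i * dt
    force = unbalance * omega**2 * np.cos(omega * t)
    a = (force - c * v - k1 * x - k3 * x**3) / m
    v += a * dt                             # semi-implicit Euler integration
    x += v * dt
    if i > n // 2:                          # keep steady-state response only
        peaks.append(x)

print("steady-state peak displacement ~", round(max(peaks) * 1e3, 3), "mm")
```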
Investigating the Assumptions of Uses and Gratifications Research
ERIC Educational Resources Information Center
Lometti, Guy E.; And Others
1977-01-01
Discusses a study designed to determine empirically the gratifications sought from communication channels and to test the assumption that individuals differentiate channels based on gratifications. (MH)
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
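A small sketch of the sensory-parameter point made above: under a metric interaction rule the neighbour set depends directly on the assumed perceptual range, whereas under a topological rule an individual interacts with its k nearest conspecifics regardless of that range. Positions, ranges and k are made-up values for illustration.

```python
# Neighbour selection under metric vs. topological interaction rules.
import numpy as np

rng = np.random.default_rng(3)
positions = rng.uniform(0, 10, size=(30, 2))       # 30 individuals in a 10x10 arena (assumed)
focal = positions[0]
dists = np.linalg.norm(positions[1:] - focal, axis=1)

def metric_neighbours(r):
    """Count conspecifics within perceptual range r of the focal individual."""
    return int(np.sum(dists <= r))

k = 7                                               # topological rule: k nearest neighbours
topo = np.argsort(dists)[:k]

print("metric, short assumed range 2.0 :", metric_neighbours(2.0), "neighbours")
print("metric, wider assumed range 5.0 :", metric_neighbours(5.0), "neighbours")
print("topological, k = 7              :", len(topo), "neighbours (independent of range)")
```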
Stark, Renee G; John, Jürgen; Leidl, Reiner
2011-01-13
This study's aim was to develop a first quantification of the frequency and costs of adverse drug events (ADEs) originating in ambulatory medical practice in Germany. The frequencies and costs of ADEs were quantified for a base case, building on an existing cost-of-illness model for ADEs. The model originates from the U.S. health care system, its structure of treatment probabilities linked to ADEs was transferred to Germany. Sensitivity analyses based on values determined from a literature review were used to test the postulated results. For Germany, the base case postulated that about 2 million adults ingesting medications have will have an ADE in 2007. Health care costs related to ADEs in this base case totalled 816 million Euros, mean costs per case were 381 Euros. About 58% of costs resulted from hospitalisations, 11% from emergency department visits and 21% from long-term care. Base case estimates of frequency and costs of ADEs were lower than all estimates of the sensitivity analyses. The postulated frequency and costs of ADEs illustrate the possible size of the health problems and economic burden related to ADEs in Germany. The validity of the U.S. treatment structure used remains to be determined for Germany. The sensitivity analysis used assumptions from different studies and thus further quantified the information gap in Germany regarding ADEs. This study found costs of ADEs in the ambulatory setting in Germany to be significant. Due to data scarcity, results are only a rough indication.
Introduction to Permutation and Resampling-Based Hypothesis Tests
ERIC Educational Resources Information Center
LaFleur, Bonnie J.; Greevy, Robert A.
2009-01-01
A resampling-based method of inference--permutation tests--is often used when distributional assumptions are questionable or unmet. Not only are these methods useful for obvious departures from parametric assumptions (e.g., normality) and small sample sizes, but they are also more robust than their parametric counterparts in the presence of…
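A minimal sketch of the permutation test described above: the observed difference in group means is compared with the distribution of differences obtained by repeatedly relabelling the pooled data, so no normality assumption is needed. The two samples are made-up numbers.

```python
# Two-sample permutation test of a difference in means.
import numpy as np

rng = np.random.default_rng(0)
group_a = np.array([12.1, 9.8, 11.4, 13.0, 10.7, 12.5])
group_b = np.array([9.1, 8.7, 10.2, 9.9, 8.4, 10.8])

observed = group_a.mean() - group_b.mean()
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)

n_perm = 20_000
null = np.empty(n_perm)
for i in range(n_perm):
    rng.shuffle(pooled)                       # relabel under the null of exchangeability
    null[i] = pooled[:n_a].mean() - pooled[n_a:].mean()

p_value = np.mean(np.abs(null) >= abs(observed))   # two-sided p-value
print(f"observed difference = {observed:.2f}, permutation p-value = {p_value:.4f}")
```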
ERIC Educational Resources Information Center
Yilmaz, Suha; Tekin-Dede, Ayse
2016-01-01
Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…
A Proposal for Testing Local Realism Without Using Assumptions Related to Hidden Variable States
NASA Technical Reports Server (NTRS)
Ryff, Luiz Carlos
1996-01-01
A feasible experiment is discussed which allows us to prove a Bell's theorem for two particles without using an inequality. The experiment could be used to test local realism against quantum mechanics without the introduction of additional assumptions related to hidden variables states. Only assumptions based on direct experimental observation are needed.
Production, Comprehension, and Theories of the Mental Lexicon. CUNYForum, Numbers 5-6.
ERIC Educational Resources Information Center
Cowart, Wayne
Problems related to the structure of the mental lexicon are considered. The single access assumption, the passive memory assumption, and the heterogeneous memory assumption are rejected in favor of the theory which assumes several active memories, each able to store expression based on only one homogenous set of abstract primitives. One lexicon…
The logic of causation and the risk of paralytic poliomyelitis for an American child.
Ridgway, D.
2000-01-01
Beginning in January 1997, American immunization policy allowed parents and physicians to elect one of three approved infant vaccination strategies for preventing poliomyelitis. Although the three strategies likely have different outcomes with respect to prevention of paralytic poliomyelitis, the extreme rarity of the disease in the USA prevents any controlled comparison. In this paper, a formal inferential logic, originally described by Donald Rubin, is applied to the vaccination problem. Assumptions and indirect evidence are used to overcome the inability to observe the same subjects under varying conditions to allow the inference of causality from non-randomized observations. Using available epidemiologic information and explicit assumptions, it is possible to project the risk of paralytic polio for infants immunized with oral polio vaccine (1.3 cases per million vaccinees), inactivated polio vaccine (0.54 cases per million vaccinees), or a sequential schedule (0.54-0.92 cases per million vaccinees). PMID:10722138
NASA Astrophysics Data System (ADS)
Li, Hui; Yu, Jun-Ling; Yu, Le-An; Sun, Jie
2014-05-01
Case-based reasoning (CBR) is one of the main forecasting methods in business forecasting, which performs well in prediction and can give explanations for its results. In business failure prediction (BFP), the number of failed enterprises is relatively small compared with the number of non-failed ones. However, the loss is huge when an enterprise fails. Therefore, it is necessary to develop methods (trained on imbalanced samples) which forecast well for this small proportion of failed enterprises and perform accurately on total accuracy meanwhile. Commonly used methods constructed on the assumption of balanced samples do not perform well in predicting minority samples on imbalanced samples consisting of the minority/failed enterprises and the majority/non-failed ones. This article develops a new method called clustering-based CBR (CBCBR), which integrates clustering analysis, an unsupervised process, with CBR, a supervised process, to enhance the efficiency of retrieving information from both minority and majority in CBR. In CBCBR, various case classes are firstly generated through hierarchical clustering inside stored experienced cases, and class centres are calculated by integrating case information in the same clustered class. When predicting the label of a target case, its nearest clustered case class is firstly retrieved by ranking similarities between the target case and each clustered case class centre. Then, nearest neighbours of the target case in the determined clustered case class are retrieved. Finally, labels of the nearest experienced cases are used in prediction. In the empirical experiment with two imbalanced samples from China, the performance of CBCBR was compared with the classical CBR, a support vector machine, a logistic regression and a multivariate discriminant analysis. The results show that, compared with the other four methods, CBCBR performed significantly better in terms of sensitivity for identifying the minority samples while generating high total accuracy. The proposed approach makes CBR useful in imbalanced forecasting.
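A hedged sketch of the retrieval scheme the abstract outlines: cluster the stored cases hierarchically, locate the class centre nearest to a target case, and then retrieve k nearest neighbours within that class only. The synthetic imbalanced data, the number of clusters and k are assumptions; this is not the authors' exact CBCBR configuration or evaluation.

```python
# Clustering-based case retrieval: cluster stored cases, then do kNN within the
# nearest cluster of a target case.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# imbalanced synthetic "financial ratio" data: 180 non-failed (0), 20 failed (1)
healthy = rng.normal(loc=[0.6, 0.2], scale=0.1, size=(180, 2))
failed = rng.normal(loc=[0.2, 0.6], scale=0.1, size=(20, 2))
X = np.vstack([healthy, failed])
y = np.array([0] * 180 + [1] * 20)

n_clusters, k = 6, 3
labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X)   # hierarchical clustering
centres = np.array([X[labels == c].mean(axis=0) for c in range(n_clusters)])

def predict(target):
    c = np.argmin(np.linalg.norm(centres - target, axis=1))   # nearest clustered case class
    members = np.where(labels == c)[0]
    d = np.linalg.norm(X[members] - target, axis=1)
    nearest = members[np.argsort(d)[:k]]                      # k nearest cases in that class
    return int(round(y[nearest].mean()))                      # majority label of retrieved cases

print("prediction for a failed-looking firm :", predict(np.array([0.25, 0.55])))
print("prediction for a healthy-looking firm:", predict(np.array([0.55, 0.25])))
```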
Genetic dissection of the consensus sequence for the class 2 and class 3 flagellar promoters
Wozniak, Christopher E.; Hughes, Kelly T.
2008-01-01
Summary Computational searches for DNA binding sites often utilize consensus sequences. These search models make assumptions that the frequency of a base pair in an alignment relates to the base pair’s importance in binding and presume that base pairs contribute independently to the overall interaction with the DNA binding protein. These two assumptions have generally been found to be accurate for DNA binding sites. However, these assumptions are often not satisfied for promoters, which are involved in additional steps in transcription initiation after RNA polymerase has bound to the DNA. To test these assumptions for the flagellar regulatory hierarchy, class 2 and class 3 flagellar promoters were randomly mutagenized in Salmonella. Important positions were then saturated for mutagenesis and compared to scores calculated from the consensus sequence. Double mutants were constructed to determine how mutations combined for each promoter type. Mutations in the binding site for FlhD4C2, the activator of class 2 promoters, better satisfied the assumptions for the binding model than did mutations in the class 3 promoter, which is recognized by the σ28 transcription factor. These in vivo results indicate that the activator sites within flagellar promoters can be modeled using simple assumptions but that the DNA sequences recognized by the flagellar sigma factor require more complex models. PMID:18486950
Trajectory Planning by Preserving Flexibility: Metrics and Analysis
NASA Technical Reports Server (NTRS)
Idris, Husni R.; El-Wakil, Tarek; Wing, David J.
2008-01-01
In order to support traffic management functions, such as mitigating traffic complexity, ground and airborne systems may benefit from preserving or optimizing trajectory flexibility. To help support this hypothesis, trajectory flexibility metrics have been defined in previous work to represent the trajectory robustness and adaptability to the risk of violating safety and traffic management constraints. In this paper these metrics are instantiated in the case of planning a trajectory with the heading degree of freedom. A metric estimation method is presented based on simplifying assumptions, namely discrete time and heading maneuvers. A case is analyzed to demonstrate the estimation method and its use in trajectory planning in a situation involving meeting a time constraint and avoiding loss of separation with nearby traffic. The case involves comparing path-stretch trajectories, in terms of adaptability and robustness along each, deduced from a map of estimated flexibility metrics over the solution space. The case demonstrated anecdotally that preserving flexibility may enhance certain factors that mitigate traffic complexity, namely by reducing proximity and confrontation.
Splitting of inviscid fluxes for real gases
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Van Leer, Bram; Shuen, Jian-Shun
1988-01-01
Flux-vector and flux-difference splittings for the inviscid terms of the compressible flow equations are derived under the assumption of a general equation of state for a real gas in equilibrium. No unnecessary assumptions, approximations or auxiliary quantities are introduced. The formulas derived include several particular cases known for ideal gases and readily apply to curvilinear coordinates. Applications of the formulas in a TVD algorithm to one-dimensional shock-tube and nozzle problems show their quality and robustness.
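For orientation, one of the "particular cases known for ideal gases" that such real-gas formulas reduce to is Van Leer's flux-vector splitting. The sketch below implements only that ideal-gas special case in one dimension; it is not the real-gas splitting derived in the paper.

```python
# Minimal sketch of 1D Van Leer flux-vector splitting for an *ideal* gas,
# i.e. one of the particular cases that the real-gas formulas of the paper
# reduce to. Not the real-gas splitting itself.
import numpy as np

def van_leer_split(rho, u, p, gamma=1.4):
    a = np.sqrt(gamma * p / rho)                     # speed of sound
    M = u / a                                        # Mach number
    E = p / (gamma - 1.0) + 0.5 * rho * u * u
    full = np.array([rho * u, rho * u * u + p, u * (E + p)])   # physical flux
    if M >= 1.0:
        return full, np.zeros(3)                     # supersonic to the right: all flux in F+
    if M <= -1.0:
        return np.zeros(3), full                     # supersonic to the left: all flux in F-
    fp_mass = rho * a * (M + 1.0) ** 2 / 4.0
    fm_mass = -rho * a * (M - 1.0) ** 2 / 4.0
    fp = np.array([fp_mass,
                   fp_mass * ((gamma - 1.0) * u + 2.0 * a) / gamma,
                   fp_mass * ((gamma - 1.0) * u + 2.0 * a) ** 2 / (2.0 * (gamma ** 2 - 1.0))])
    fm = np.array([fm_mass,
                   fm_mass * ((gamma - 1.0) * u - 2.0 * a) / gamma,
                   fm_mass * ((gamma - 1.0) * u - 2.0 * a) ** 2 / (2.0 * (gamma ** 2 - 1.0))])
    return fp, fm

fp, fm = van_leer_split(rho=1.0, u=100.0, p=1.0e5)
print(fp + fm)   # in the subsonic case the split fluxes sum back to the full physical flux
```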
Splitting of inviscid fluxes for real gases
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Van Leer, Bram; Shuen, Jian-Shun
1990-01-01
Flux-vector and flux-difference splittings for the inviscid terms of the compressible flow equations are derived under the assumption of a general equation of state for a real gas in equilibrium. No unnecessary assumptions, approximations or auxiliary quantities are introduced. The formulas derived include several particular cases known for ideal gases and readily apply to curvilinear coordinates. Applications of the formulas in a TVD algorithm to one-dimensional shock-tube and nozzle problems show their quality and robustness.
The four-principle formulation of common morality is at the core of bioethics mediation method.
Ahmadi Nasab Emran, Shahram
2015-08-01
Bioethics mediation is increasingly used as a method in clinical ethics cases. My goal in this paper is to examine the implicit theoretical assumptions of the bioethics mediation method developed by Dubler and Liebman. According to them, the distinguishing feature of bioethics mediation is that the method is useful in most cases of clinical ethics in which conflict is the main issue, which implies that there is either no real ethical issue or if there were, they are not the key to finding a resolution. I question the tacit assumption of non-normativity of the mediation method in bioethics by examining the various senses in which bioethics mediation might be non-normative or neutral. The major normative assumption of the mediation method is the existence of common morality. In addition, the four-principle formulation of the theory articulated by Beauchamp and Childress implicitly provides the normative content for the method. Full acknowledgement of the theoretical and normative assumptions of bioethics mediation helps clinical ethicists better understand the nature of their job. In addition, the need for a robust philosophical background even in what appears to be a purely practical method of mediation cannot be overemphasized. Acknowledgement of the normative nature of bioethics mediation method necessitates a more critical attitude of the bioethics mediators towards the norms they usually take for granted uncritically as valid.
Short arc orbit determination and imminent impactors in the Gaia era
NASA Astrophysics Data System (ADS)
Spoto, F.; Del Vigna, A.; Milani, A.; Tommei, G.; Tanga, P.; Mignard, F.; Carry, B.; Thuillot, W.; David, P.
2018-06-01
Short-arc orbit determination is crucial when an asteroid is first discovered. In these cases the observations are usually so few that the differential correction procedure may not converge. We developed an initial orbit computation method based on systematic ranging, an orbit determination technique that systematically explores a raster in the topocentric range and range-rate space inside the admissible region. We obtained a fully rigorous computation of the probability that the asteroid could impact the Earth within a few days of discovery, without any a priori assumption. We tested our method on the two past impactors, 2008 TC3 and 2014 AA, on some very well known cases, and on two particular objects observed by the European Space Agency Gaia mission.
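A conceptual sketch of the systematic-ranging idea is given below: scan a raster of topocentric range and range-rate values, weight each node by how well it fits the short-arc astrometry, and accumulate the weight of the nodes whose orbits lead to an Earth impact. The two helper functions are toy stand-ins invented for illustration, not the authors' algorithms, and the grid limits are arbitrary.

```python
# Conceptual sketch of systematic ranging: scan a raster of topocentric range
# (rho) and range-rate (rho_dot) values inside the admissible region, weight
# each node by its fit to the astrometry, and accumulate the weight of nodes
# whose orbits lead to an Earth impact. Toy stand-ins, not the authors' code.
import numpy as np

def chi_square_of_fit(rho, rho_dot):
    # Toy stand-in for fitting the remaining orbital elements to the astrometry
    # and returning the chi-square of the residuals.
    return (rho - 0.05) ** 2 / 0.01 + (rho_dot + 0.002) ** 2 / 1e-5

def leads_to_impact(rho, rho_dot):
    # Toy stand-in for propagating the nominal orbit of this node a few days
    # ahead and testing whether it intersects the Earth.
    return rho < 0.02 and rho_dot < 0.0

rho_grid = np.linspace(0.001, 0.3, 120)        # AU (illustrative limits)
rho_dot_grid = np.linspace(-0.01, 0.01, 80)    # AU/day (illustrative limits)

total_weight, impact_weight = 0.0, 0.0
for rho in rho_grid:
    for rho_dot in rho_dot_grid:
        w = np.exp(-0.5 * chi_square_of_fit(rho, rho_dot))
        total_weight += w
        if leads_to_impact(rho, rho_dot):
            impact_weight += w

print(impact_weight / total_weight)            # impact probability over the raster
```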
Wells, Stewart; Bullen, Chris
2008-01-01
This article describes the near failure of an information technology (IT) system designed to support a government-funded, primary care-based hepatitis B screening program in New Zealand. Qualitative methods were used to collect data and construct an explanatory model. Multiple incorrect assumptions were made about participants, primary care workflows and IT capacity, software vendor user knowledge, and the health IT infrastructure. Political factors delayed system development and it was implemented untested, almost failing. An intensive rescue strategy included system modifications, relaxation of data validity rules, close engagement with software vendors, and provision of intensive on-site user support. This case study demonstrates that consideration of the social, political, technological, and health care contexts is important for successful implementation of public health informatics projects.
Psycho-physiological effects of head-mounted displays in ubiquitous use
NASA Astrophysics Data System (ADS)
Kawai, Takashi; Häkkinen, Jukka; Oshima, Keisuke; Saito, Hiroko; Yamazoe, Takashi; Morikawa, Hiroyuki; Nyman, Göte
2011-02-01
In this study, two experiments were conducted to evaluate the psycho-physiological effects of practical use of a monocular head-mounted display (HMD) in a real-world environment, based on the assumption of consumer-level applications such as viewing video content and receiving navigation information while walking. In Experiment 1, the workload was examined for different types of stimulus presentation using an HMD (monocular or binocular, see-through or non-see-through). Experiment 2 focused on the relationship between the real-world environment and the visual information presented using a monocular HMD. The workload was compared between a case where participants walked while viewing video content unrelated to the real-world environment and a case where participants walked while viewing visual information that augmented the real-world environment, such as navigation cues.
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
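As a rough illustration of the probabilistic flavour of such an evaluation (not the paper's exact matrix-based model), the sketch below combines per-check detection probabilities into component-level and overall coverage estimates, under the simplifying assumption that checks fail independently; all numbers are hypothetical.

```python
# Illustrative sketch (not the paper's exact matrix-based model): given a
# matrix P where P[i, j] is the probability that on-line check j detects a
# fault in component i, estimate per-component and overall detection coverage
# under the simplifying assumption that checks fail independently.
import numpy as np

P = np.array([[0.90, 0.00, 0.50],    # component 0 covered by checks 0 and 2
              [0.00, 0.95, 0.20],    # component 1 covered by checks 1 and 2
              [0.60, 0.60, 0.00]])   # component 2 covered by checks 0 and 1

fault_prob = np.array([0.5, 0.3, 0.2])                    # prior probability of the faulty component

detect_per_component = 1.0 - np.prod(1.0 - P, axis=1)     # P(at least one check fires)
overall_coverage = np.dot(fault_prob, detect_per_component)

print(detect_per_component, overall_coverage)
```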
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
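The essence of the rejection mechanism for time-dependent rates can be illustrated with the classical thinning construction: candidate firings are drawn from a constant upper-bound rate and accepted with probability a(t)/ā, so the time-dependent propensity never has to be integrated. The sketch below shows this idea for a single reaction; it is a simplified illustration, not the published tRSSA algorithm.

```python
# Minimal illustration of the rejection idea for a time-dependent propensity
# (thinning of a non-homogeneous process): fire a candidate event from an
# upper-bound rate and accept it with probability a(t)/a_bar, so no costly
# integration of a(t) is needed. Simplified sketch, not the published tRSSA.
import math, random

def next_firing_time(a, a_bar, t):
    """a(t): time-dependent propensity; a_bar: constant upper bound on a(t)."""
    while True:
        t += random.expovariate(a_bar)           # candidate firing time from the bound
        if random.random() <= a(t) / a_bar:      # rejection test preserves exactness
            return t

# Example: sinusoidally modulated rate bounded above by 2.0
a = lambda t: 1.0 + math.sin(t)
print(next_firing_time(a, a_bar=2.0, t=0.0))
```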
Sensitivity of TRIM projections to management, harvest, yield, and stocking adjustment assumptions.
Susan J. Alexander
1991-01-01
The Timber Resource Inventory Model (TRIM) was used to make several projections of forest industry timber supply for the Douglas-fir region. The sensitivity of these projections to assumptions about management and yields is discussed. A base run is compared to runs in which yields were altered, stocking adjustment was eliminated, harvest assumptions were changed, and...
Olofsson, Johanna; Barta, Zsolt; Börjesson, Pål; Wallberg, Ola
2017-01-01
Cellulase enzymes have been reported to contribute a significant share of the total costs and greenhouse gas emissions of lignocellulosic ethanol production today. A potential future alternative to purchasing enzymes from an off-site manufacturer is to integrate enzyme and ethanol production, using microorganisms and part of the lignocellulosic material as feedstock for enzymes. This study modelled two such integrated process designs for ethanol from logging residues from spruce production, and compared them to an off-site case based on existing data regarding purchased enzymes. Greenhouse gas emissions and primary energy balances were studied in a life-cycle assessment, and cost performance in a techno-economic analysis. The base case scenario suggests that greenhouse gas emissions per MJ of ethanol could be significantly lower in the integrated cases than in the off-site case. However, the difference between the integrated and off-site cases is reduced with alternative assumptions regarding enzyme dosage and the environmental impact of the purchased enzymes. The comparison of primary energy balances did not show any significant difference between the cases. The minimum ethanol selling price, to reach break-even costs, was from 0.568 to 0.622 EUR L⁻¹ for the integrated cases, as compared to 0.581 EUR L⁻¹ for the off-site case. An integrated process design could reduce greenhouse gas emissions from lignocellulose-based ethanol production, and the cost of an integrated process could be comparable to purchasing enzymes produced off-site. This study focused on the environmental and economic assessment of an integrated process, and in order to strengthen the comparison to the off-site case, more detailed and updated data regarding industrial off-site enzyme production are especially important.
Towards Understanding the DO-178C / ED-12C Assurance Case
NASA Technical Reports Server (NTRS)
Holloway, C M.
2012-01-01
This paper describes initial work towards building an explicit assurance case for DO-178C / ED-12C. Two specific questions are explored: (1) What are some of the assumptions upon which the guidance in the document relies, and (2) What claims are made concerning test coverage analysis?
2017-01-01
The Annual Energy Outlook provides modeled projections of domestic energy markets through 2050, and includes cases with different assumptions of macroeconomic growth, world oil prices, technological progress, and energy policies. With strong domestic production and relatively flat demand, the United States becomes a net energy exporter over the projection period in most cases.
An interactive data viewer that provides modeled projections of domestic energy markets through 2050, and includes cases with different assumptions of macroeconomic growth, world oil prices, technological progress, and energy policies. With strong domestic production and relatively flat demand, the United States becomes a net energy exporter over the projection period in most cases.
Educational Research in Palestine: Epistemological and Cultural Challenges--A Case Study
ERIC Educational Resources Information Center
Khalifah, Ayman A.
2010-01-01
This study investigates the prevailing epistemological and cultural conditions that underlie educational research in Palestine. Using a case study of a major Palestinian University that awards Masters Degrees in Education, the study analyzes the assumptions and the methodology that characterizes current educational research. Using an analysis of…
Tornow, Matthew A; Skelton, Randall R
2012-01-01
When molecules and morphology produce incongruent hypotheses of primate interrelationships, the data are typically viewed as incompatible, and molecular hypotheses are often considered to be better indicators of phylogenetic history. However, it has been demonstrated that the choice of which taxa to include in cladistic analysis as well as assumptions about character weighting, character state transformation order, and outgroup choice all influence hypotheses of relationships and may positively influence tree topology, so that relationships between extant taxa are consistent with those found using molecular data. Thus, the source of incongruence between morphological and molecular trees may lie not in the morphological data themselves but in assumptions surrounding the ways characters evolve and their impact on cladistic analysis. In this study, we investigate the role that assumptions about character polarity and transformation order play in creating incongruence between primate phylogenies based on morphological data and those supported by multiple lines of molecular data. By releasing constraints imposed on published morphological analyses of primates from disparate clades and subjecting those data to parsimony analysis, we test the hypothesis that incongruence between morphology and molecules results from inherent flaws in morphological data. To quantify the difference between incongruent trees, we introduce a new method called branch slide distance (BSD). BSD mitigates many of the limitations attributed to other tree comparison methods, thus allowing for a more accurate measure of topological similarity. We find that releasing a priori constraints on character behavior often produces trees that are consistent with molecular trees. Case studies are presented that illustrate how congruence between molecules and unconstrained morphological data may provide insight into issues of polarity, transformation order, homology, and homoplasy.
Is Seismically Determined Q an Intrinsic Material Property?
NASA Astrophysics Data System (ADS)
Langston, C. A.
2003-12-01
The seismic quality factor, Q, has a well-defined physical meaning as an intrinsic material property associated with a visco-elastic or a non-linear stress-strain constitutive relation for a material. Measurement of Q from seismic waves, however, involves interpreting seismic wave amplitude and phase as deviations from some ideal elastic wave propagation model. Thus, assumptions in the elastic wave propagation model become the basis for attributing anelastic properties to the earth continuum. Scientifically, the resulting Q model derived from seismic data is no more than a hypothesis that needs to be verified by other independent experiments concerning the continuum constitutive law and through careful examination of the truth of the assumptions in the wave propagation model. A case in point concerns the anelasticity of Mississippi embayment sediments in the central U.S. that has important implications for evaluation of earthquake strong ground motions. Previous body wave analyses using converted Sp phases have suggested that Qs is ~30 in the sediments based on simple ray theory assumptions. However, detailed modeling of 1D heterogeneity in the sediments shows that Qs cannot be resolved by the Sp data. An independent experiment concerning the amplitude decay of surface waves propagating in the sediments shows that Qs must be generally greater than 80 but is also subject to scattering attenuation. Apparent Q effects seen in direct P and S waves can also be produced by wave tunneling mechanisms in relatively simple 1D heterogeneity. Heterogeneity is a general geophysical attribute of the earth as shown by many high-resolution data sets and should be used as the first litmus test on assumptions made in seismic Q studies before a Q model can be interpreted as an intrinsic material property.
Latimer, Nicholas R; Abrams, Keith R; Lambert, Paul C; Crowther, Michael J; Wailoo, Allan J; Morden, James P; Akehurst, Ron L; Campbell, Michael J
2014-04-01
Treatment switching commonly occurs in clinical trials of novel interventions in the advanced or metastatic cancer setting. However, methods to adjust for switching have been used inconsistently and potentially inappropriately in health technology assessments (HTAs). We present recommendations on the use of methods to adjust survival estimates in the presence of treatment switching in the context of economic evaluations. We provide background on the treatment switching issue and summarize methods used to adjust for it in HTAs. We discuss the assumptions and limitations associated with adjustment methods and draw on results of a simulation study to make recommendations on their use. We demonstrate that methods used to adjust for treatment switching have important limitations and often produce bias in realistic scenarios. We present an analysis framework that aims to increase the probability that suitable adjustment methods can be identified on a case-by-case basis. We recommend that the characteristics of clinical trials, and the treatment switching mechanism observed within them, should be considered alongside the key assumptions of the adjustment methods. Key assumptions include the "no unmeasured confounders" assumption associated with the inverse probability of censoring weights (IPCW) method and the "common treatment effect" assumption associated with the rank preserving structural failure time model (RPSFTM). The limitations associated with switching adjustment methods such as the RPSFTM and IPCW mean that they are appropriate in different scenarios. In some scenarios, both methods may be prone to bias; "2-stage" methods should be considered, and intention-to-treat analyses may sometimes produce the least bias. The data requirements of adjustment methods also have important implications for clinical trialists.
Anselmi, Pasquale; Stefanutti, Luca; de Chiusole, Debora; Robusto, Egidio
2017-11-01
The gain-loss model (GaLoM) is a formal model for assessing knowledge and learning. In its original formulation, the GaLoM assumes independence among the skills. Such an assumption is not reasonable in several domains, in which some preliminary knowledge is the foundation for other knowledge. This paper presents an extension of the GaLoM to the case in which the skills are not independent, and the dependence relation among them is described by a well-graded competence space. The probability of mastering skill s at the pretest is conditional on the presence of all skills on which s depends. The probabilities of gaining or losing skill s when moving from pretest to posttest are conditional on the mastery of s at the pretest, and on the presence at the posttest of all skills on which s depends. Two formulations of the model are presented, in which the learning path is allowed to change from pretest to posttest or not. A simulation study shows that models based on the true competence space obtain a better fit than models based on false competence spaces, and are also characterized by a higher assessment accuracy. An empirical application shows that models based on pedagogically sound assumptions about the dependencies among the skills obtain a better fit than models assuming independence among the skills. © 2017 The British Psychological Society.
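A toy sketch of the central idea, conditioning mastery probabilities on prerequisite skills, is given below. The prerequisite relation and the probabilities are invented for illustration, and the sketch covers only pretest state probabilities, not the full gain-loss machinery of the model.

```python
# Toy sketch of conditioning skill probabilities on prerequisites (not the
# authors' GaLoM extension): a pretest state has nonzero probability only if
# it respects the prerequisite relation, and each mastered skill contributes a
# probability conditional on its prerequisites being present.
prereqs = {"a": set(), "b": {"a"}, "c": {"a", "b"}}     # hypothetical skill dependencies
p_master = {"a": 0.7, "b": 0.6, "c": 0.5}               # P(mastered | prerequisites mastered)

def state_probability(state):
    state = set(state)
    if any(not prereqs[s] <= state for s in state):     # prerequisite violated
        return 0.0
    prob = 1.0
    for s in p_master:
        if s in state:
            prob *= p_master[s]
        elif prereqs[s] <= state:                       # could have been mastered, but was not
            prob *= 1.0 - p_master[s]
        # skills whose prerequisites are missing contribute factor 1 (cannot be mastered)
    return prob

print(state_probability({"a", "b"}), state_probability({"b"}))   # 0.21 and 0.0
```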
Retrospective Assessment of Cost Savings From Prevention
Grosse, Scott D.; Berry, Robert J.; Tilford, J. Mick; Kucik, James E.; Waitzman, Norman J.
2016-01-01
Introduction Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997–1998. Methods Estimates of annual numbers of live-born spina bifida cases in 1995–1996 relative to 1999–2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. Results The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. Conclusions The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. PMID:26790341
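The headline figures can be checked with simple arithmetic; in the sketch below only the avoided-case count and the per-case cost come from the abstract, while the annual fortification cost is a placeholder assumption.

```python
# Back-of-the-envelope check of the abstract's best-estimate figures. The
# annual fortification cost is not given in the abstract, so it is entered
# here as a placeholder assumption; only the avoided-cases and per-case cost
# figures come from the abstract.
cases_avoided = 767                  # best estimate of live-born spina bifida cases avoided per year
cost_per_case = 791_900              # present value of mean direct lifetime cost (2014 USD)
fortification_cost = 4_000_000       # hypothetical annual cost of fortification (placeholder)

gross_savings = cases_avoided * cost_per_case
net_savings = gross_savings - fortification_cost
print(f"gross ~ ${gross_savings/1e6:.0f}M, net ~ ${net_savings/1e6:.0f}M")
# gross ~ $607M, which is consistent with the reported net saving of about $603M
# once an annual fortification cost of a few million dollars is subtracted.
```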
The specification of a hospital cost function. A comment on the recent literature.
Breyer, F
1987-06-01
In the empirical estimation of hospital cost functions, two radically different types of specifications have been chosen to date, ad-hoc forms and flexible functional forms based on neoclassical production theory. This paper discusses the respective strengths and weaknesses of both approaches and emphasizes the apparently unreconcilable conflict between the goals of maintaining functional flexibility and keeping the number of variables manageable if at the same time patient heterogeneity is to be adequately reflected in the case mix variables. A new specification is proposed which strikes a compromise between these goals, and the underlying assumptions are discussed critically.
Multilayer perceptron with local constraint as an emerging method in spatial data analysis
NASA Astrophysics Data System (ADS)
de Bollivier, M.; Dubois, G.; Maignan, M.; Kanevsky, M.
1997-02-01
The use of Geographic Information Systems has revolutionized the handling and the visualization of geo-referenced data and has underlined the critical role of spatial analysis. The usual tools for such a purpose are geostatistics, which are widely used in Earth science. Geostatistics are based upon several hypotheses which are not always verified in practice. On the other hand, Artificial Neural Networks (ANNs) can a priori be used without special assumptions and are known to be flexible. This paper proposes to discuss the application of ANNs in the case of the interpolation of a geo-referenced variable.
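A minimal sketch of using an MLP as a spatial interpolator, learning a mapping from coordinates to a geo-referenced variable on synthetic data, is shown below; it uses a plain scikit-learn regressor and does not include the local constraint that the paper proposes.

```python
# Minimal sketch of an MLP used as a spatial interpolator: learn a mapping
# from (x, y) coordinates to a geo-referenced variable and predict at
# unsampled locations. Synthetic data; this plain MLP does not include the
# paper's "local constraint".
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
coords = rng.uniform(0, 10, size=(300, 2))                       # sampled locations
values = np.sin(coords[:, 0]) + 0.5 * coords[:, 1] + rng.normal(0, 0.1, 300)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(coords, values)

grid = np.array([[x, y] for x in np.linspace(0, 10, 5) for y in np.linspace(0, 10, 5)])
print(model.predict(grid).round(2))                              # interpolated field on a coarse grid
```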
Halsey, Neal A
2017-03-01
Public trust can be improved by learning from past mistakes, by establishing a standing forum for review of new concerns as they arise, and by maintaining a robust vaccine safety system. Developing standard guidelines for reporting causality assessment in case reports would help educate physicians and prevent future unnecessary concerns based on false assumptions of causal relationships. © The Author 2015. Published by Oxford University Press on behalf of The Journal of the Pediatric Infectious Diseases Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Radionuclide administration to nursing mothers: mathematically derived guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romney, B.M.; Nickoloff, E.L.; Esser, P.D.
We determined a formula to establish objective guidelines for the administration of radionuclides to nursing mothers. The formula is based on the maximum permissible dose to the infant's critical organ, serial measurements of breast milk activity, milk volume, and dose to the critical organ per microcurie in milk. Using worst-case assumptions, we believe that cessation of nursing for 24 hours after administration of technetium labeled radiopharmaceuticals is sufficient for safety. Longer-lived agents require greater delays. Iodine-123 radiopharmaceuticals are preferable to iodine-131 agents and should always be used when studying the unblocked thyroid.
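The general structure of such a guideline calculation can be sketched as follows: assume the milk activity concentration decays exponentially with an effective half-life, and interrupt nursing until the projected dose to the infant's critical organ falls below a maximum permissible dose. All parameter values below are hypothetical placeholders, not the paper's measurements or its exact formula.

```python
# Sketch of the general structure of such a guideline calculation (parameter
# values are placeholders, not the paper's data): the activity concentration
# in milk is assumed to fall exponentially with an effective half-life, and
# nursing is interrupted until the projected dose to the infant's critical
# organ drops below a maximum permissible dose.
import math

def interruption_hours(c0_uCi_per_mL, half_life_h, milk_mL_per_feed,
                       dose_per_uCi_mGy, feeds_considered, dose_limit_mGy):
    lam = math.log(2) / half_life_h
    t = 0.0
    while c0_uCi_per_mL * math.exp(-lam * t) * milk_mL_per_feed \
            * dose_per_uCi_mGy * feeds_considered > dose_limit_mGy:
        t += 1.0                      # advance in 1-hour steps until the limit is met
    return t

# Illustrative, hypothetical numbers only:
print(interruption_hours(c0_uCi_per_mL=0.5, half_life_h=6.0, milk_mL_per_feed=150,
                         dose_per_uCi_mGy=0.01, feeds_considered=5, dose_limit_mGy=1.0))
```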
PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.
Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan
2015-05-01
The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005). A fast method detecting associations allowing for interactions has been proposed by Wan et al. (2010a). The method is based on the likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the assumption of the χ² test requires the expected values in each cell of the contingency table to be at least five. This assumption is violated in some identified SNP pairs. In this case, the likelihood ratio test may not be applicable any more. The permutation test is an ideal approach to checking the P-values calculated in likelihood ratio tests because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small. Thus, we need a huge number of permutations to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool to accomplish a large number of permutations is desirable. We developed a permutation tool named PBOOST. It is based on GPU with highly reliable P-value estimation. By using simulation data, we found that the P-values from likelihood ratio tests will have relative error of >100% when 50% of cells in the contingency table have expected count less than five or when there is zero expected count in any of the contingency table cells. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while it took 60 min on a single CPU Intel Xeon E5-2650 to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
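A CPU-only toy version of the permutation test for a single SNP pair conveys the idea (it is in no way the GPU tool itself): the test statistic is recomputed under repeated shuffles of the case/control labels, and the empirical P-value is the fraction of permuted statistics at least as extreme as the observed one. The data here are synthetic.

```python
# Toy CPU permutation test for a single SNP pair (not the GPU tool PBOOST):
# a chi-square-type statistic is recomputed after repeatedly shuffling the
# case/control labels, and the empirical P-value is the fraction of permuted
# statistics at least as extreme as the observed one. Synthetic data.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
geno_pair = rng.integers(0, 9, size=2000)        # 3x3 joint genotypes coded 0..8 (synthetic)
status = rng.integers(0, 2, size=2000)           # 0 = control, 1 = case (synthetic)

def statistic(geno, status):
    table = np.zeros((9, 2))
    np.add.at(table, (geno, status), 1)          # genotype-by-status contingency table
    table = table[table.sum(axis=1) > 0]         # drop empty genotype rows
    return chi2_contingency(table, correction=False)[0]   # chi-square statistic

obs = statistic(geno_pair, status)
n_perm = 5_000
count = sum(statistic(geno_pair, rng.permutation(status)) >= obs for _ in range(n_perm))
p_value = (count + 1) / (n_perm + 1)             # add-one correction avoids P = 0
print(obs, p_value)
```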
Relating color working memory and color perception.
Allred, Sarah R; Flombaum, Jonathan I
2014-11-01
Color is the most frequently studied feature in visual working memory (VWM). Oddly, much of this work de-emphasizes perception, instead making simplifying assumptions about the inputs served to memory. We question these assumptions in light of perception research, and we identify important points of contact between perception and working memory in the case of color. Better characterization of its perceptual inputs will be crucial for elucidating the structure and function of VWM. Copyright © 2014 Elsevier Ltd. All rights reserved.
Automatic adjustment on private pension fund for Asian Mathematics Conferences
NASA Astrophysics Data System (ADS)
Purwadi, J.
2017-10-01
This paper discusses how an automatic adjustment mechanism operates in a defined-benefit pension fund when conditions fall outside the predetermined assumptions. The automatic adjustment considered here is intended to anticipate changes in economic and demographic conditions. The method discussed in this paper is indexing to life expectancy. The paper examines how the method applies to a private pension fund and what impact a change in life expectancy has on benefits.
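One common form of such an automatic adjustment is indexing the benefit to life expectancy; the sketch below shows that idea with purely hypothetical numbers and is not taken from the paper.

```python
# Sketch of one common form of automatic adjustment, indexing the benefit to
# life expectancy: when retirees live longer than assumed, the periodic benefit
# is scaled down so the expected total payout stays roughly constant. All
# numbers are hypothetical illustrations, not taken from the paper.
def indexed_benefit(base_benefit, e_assumed, e_observed):
    """Scale the annual benefit by the ratio of assumed to observed life expectancy at retirement."""
    return base_benefit * e_assumed / e_observed

base_benefit = 12_000                 # annual benefit under the original actuarial assumptions
e_assumed, e_observed = 18.0, 20.5    # remaining life expectancy at retirement, in years

print(indexed_benefit(base_benefit, e_assumed, e_observed))   # ~10,537 per year
```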
ERIC Educational Resources Information Center
Jackson, Paul R.
1972-01-01
The probabilities of certain English football teams winning different playoffs are determined. In each case, a mathematical model is fitted to the observed data, assumptions are verified, and the calculations performed. (LS)
The retention time of inorganic mercury in the brain — A systematic review of the evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rooney, James P.K., E-mail: jrooney@rcsi.ie
2014-02-01
Reports from human case studies indicate a half-life for inorganic mercury in the brain in the order of years, contradicting older radioisotope studies that estimated half-lives in the order of weeks to months in duration. This study systematically reviews available evidence on the retention time of inorganic mercury in humans and primates to better understand this conflicting evidence. A broad search strategy was used to capture 16,539 abstracts on the Pubmed database. Abstracts were screened to include only study types containing relevant information. 131 studies of interest were identified. Only 1 primate study made a numeric estimate for the half-life of inorganic mercury (227–540 days). Eighteen human mercury poisoning cases were followed up long term including autopsy. Brain inorganic mercury concentrations at death were consistent with a half-life of several years or longer. 5 radionuclide studies were found, one of which estimated head half-life (21 days). This estimate has sometimes been misinterpreted to be equivalent to brain half-life, which ignores several confounding factors including limited radioactive half-life and radioactive decay from surrounding tissues including circulating blood. No autopsy cohort study estimated a half-life for inorganic mercury, although some noted bioaccumulation of brain mercury with age. Modelling studies provided some extreme estimates (69 days vs 22 years). Estimates from modelling studies appear sensitive to model assumptions; however, predictions based on a long half-life (27.4 years) are consistent with autopsy findings. In summary, shorter estimates of half-life are not supported by evidence from animal studies, human case studies, or modelling studies based on appropriate assumptions. Evidence from such studies points to a half-life of inorganic mercury in human brains of several years to several decades. This finding carries important implications for pharmacokinetic modelling of mercury and potentially for the regulatory toxicology of mercury.
A total-evidence approach to dating with fossils, applied to the early radiation of the hymenoptera.
Ronquist, Fredrik; Klopfstein, Seraina; Vilhelmsen, Lars; Schulmeister, Susanne; Murray, Debra L; Rasnitsyn, Alexandr P
2012-12-01
Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].
A Total-Evidence Approach to Dating with Fossils, Applied to the Early Radiation of the Hymenoptera
Ronquist, Fredrik; Klopfstein, Seraina; Vilhelmsen, Lars; Schulmeister, Susanne; Murray, Debra L.; Rasnitsyn, Alexandr P.
2012-01-01
Abstract Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.] PMID:22723471
Quantifying Wrinkle Features of Thin Membrane Structures
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.
2004-01-01
For future micro-systems utilizing membrane based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.
Quantifying Square Membrane Wrinkle Behavior Using MITC Shell Elements
NASA Technical Reports Server (NTRS)
Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.
2004-01-01
For future membrane based structures, quantified predictions of membrane wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made while using finite elements. Specifically, this work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m square membrane is treated as a structural material with non-negligible bending stiffness. Mixed Interpolation of Tensorial Components (MITC) shell elements are used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between amplitude scale (W/t) and structural scale (L/t) is linear in the presence of a gravity field.
Tukker, Arnold; de Koning, Arjan; Wood, Richard; Moll, Stephan; Bouwmeester, Maaike C
2013-02-19
Environmentally extended input output (EE IO) analysis is increasingly used to assess the carbon footprint of final consumption. Official EE IO data are, however, at best available for single countries or regions such as the EU27. This causes problems in assessing pollution embodied in imported products. The popular "domestic technology assumption (DTA)" leads to errors. Improved approaches based on Life Cycle Inventory data, Multiregional EE IO tables, etc. rely on unofficial research data and modeling, making them difficult to implement by statistical offices. The DTA can lead to errors for three main reasons: exporting countries can have higher impact intensities; may use more intermediate inputs for the same output; or may sell the imported products for lower/other prices than those produced domestically. The last factor is relevant for sustainable consumption policies of importing countries, whereas the first factors are mainly a matter of making production in exporting countries more eco-efficient. We elaborated a simple correction for price differences in imports and domestic production using monetary and physical data from official import and export statistics. A case study for the EU27 shows that this "price-adjusted DTA" gives a partial but meaningful adjustment of pollution embodied in trade compared to multiregional EE IO studies.
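The price correction can be illustrated with a toy calculation: if imports are cheaper per physical unit than comparable domestic output, applying a domestic emission intensity to the monetary import flow understates the physical quantity imported, so the flow is rescaled by the ratio of unit values. All figures below are hypothetical.

```python
# Illustration of a price correction to the domestic technology assumption
# (DTA): imports bought at a lower price per physical unit than the comparable
# domestic product represent more physical quantity per euro, so the monetary
# import flow is rescaled by the ratio of unit values before applying the
# domestic emission intensity. All numbers are hypothetical.
domestic_intensity = 0.8      # kg CO2 per EUR of domestic output for this product group
imports_value = 100.0         # million EUR of imports
import_unit_value = 2.0       # EUR per kg, from import statistics (value / quantity)
domestic_unit_value = 3.0     # EUR per kg, from domestic production statistics

plain_dta = domestic_intensity * imports_value
price_adjusted_dta = domestic_intensity * imports_value * (domestic_unit_value / import_unit_value)

print(plain_dta, price_adjusted_dta)   # 80.0 vs 120.0 (million kg CO2 embodied)
```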
Source Pulse Estimation of Mine Shock by Blind Deconvolution
NASA Astrophysics Data System (ADS)
Makowski, R.
The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal in the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many methods of deconvolution made use of in prospective seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based, with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, only if these assumptions are fulfilled, we may expect reliable results. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) The signal emitted by the shock source is a short-term signal. (2) The signal transmitting system (rockmass) constitutes a parallel connection of elementary systems. (3) The elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of the model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.
Evaluating Model-Driven Development for large-scale EHRs through the openEHR approach.
Christensen, Bente; Ellingsen, Gunnar
2016-05-01
In healthcare, the openEHR standard is a promising Model-Driven Development (MDD) approach for electronic healthcare records. This paper aims to identify key socio-technical challenges when the openEHR approach is put to use in Norwegian hospitals. More specifically, key fundamental assumptions are investigated empirically. These assumptions promise a clear separation of technical and domain concerns, users being in control of the modelling process, and widespread user commitment. Finally, these assumptions promise an easy way to model and map complex organizations. This longitudinal case study is based on an interpretive approach, whereby data were gathered through 440h of participant observation, 22 semi-structured interviews and extensive document studies over 4 years. The separation of clinical and technical concerns seemed to be aspirational, because both designing the technical system and modelling the domain required technical and clinical competence. Hence developers and clinicians found themselves working together in both arenas. User control and user commitment seemed not to apply in large-scale projects, as modelling the domain turned out to be too complicated and hence to appeal only to especially interested users worldwide, not the local end-users. Modelling proved to be a complex standardization process that shaped both the actual modelling and healthcare practice itself. A broad assemblage of contributors seems to be needed for developing an archetype-based system, in which roles, responsibilities and contributions cannot be clearly defined and delimited. The way MDD occurs has implications for medical practice per se in the form of the need to standardize practices to ensure that medical concepts are uniform across practices. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Decadal oscillations and extreme value distribution of river peak flows in the Meuse catchment
NASA Astrophysics Data System (ADS)
De Niel, Jan; Willems, Patrick
2017-04-01
In flood risk management, flood probabilities are often quantified through Generalized Pareto distributions of river peak flows. One of the main underlying assumptions is that all data points need to originate from one single underlying distribution (i.i.d. assumption). However, this hypothesis, although generally assumed to be correct for variables such as river peak flows, remains somehow questionable: flooding might indeed be caused by different hydrological and/or meteorological conditions. This study confirms these findings from previous research by showing a clear indication of the link between atmospheric conditions and flooding for the Meuse river in The Netherlands: decadal oscillations of river peak flows can (at least partially) be attributed to the occurrence of westerly weather types. The study further proposes a method to take this correlation between atmospheric conditions and river peak flows into account when calibrating an extreme value distribution for river peak flows. Rather than calibrating one single distribution to the data and potentially violating the i.i.d. assumption, weather-type-dependent extreme value distributions are derived and composed. The study shows that, for the Meuse river in The Netherlands, such an approach results in a more accurate extreme value distribution, especially with regard to extrapolations. Comparison of the proposed method with a traditional extreme value analysis approach and an alternative model-based approach for the same case study shows strong differences in the peak flow extrapolation. The design flood for a 1,250-year return period is estimated at 4,800 m³ s⁻¹ for the proposed method, compared with 3,450 m³ s⁻¹ and 3,900 m³ s⁻¹ for the traditional method and a previous study. The methods were validated based on instrumental and documentary flood information of the past 500 years.
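A sketch of the compositional idea, fitting a Generalized Pareto distribution to peaks over threshold within each weather type and combining the per-type exceedance rates into one return-level curve, is given below. It uses synthetic peak flows and arbitrary parameters, not the study's calibration for the Meuse.

```python
# Sketch of composing weather-type-dependent extreme value distributions:
# fit a Generalized Pareto distribution to peaks-over-threshold within each
# weather type, then combine the per-type exceedance rates into one composite
# return-level curve. Synthetic peak flows; not the study's calibration.
import numpy as np
from scipy.stats import genpareto

threshold = 1000.0
years = 100.0                                    # length of the synthetic record
peaks = {  # synthetic peak flows above the threshold, per weather type
    "westerly": threshold + genpareto.rvs(c=0.1, scale=400, size=300, random_state=1),
    "other":    threshold + genpareto.rvs(c=-0.1, scale=250, size=200, random_state=2),
}

fits, rates = {}, {}
for wt, x in peaks.items():
    c, loc, scale = genpareto.fit(x - threshold, floc=0.0)   # fit excesses over the threshold
    fits[wt] = (c, scale)
    rates[wt] = len(x) / years                               # exceedances per year in this type

# composite design flow for a 1,250-year return period: total exceedance rate = 1/1250 per year
grid = np.linspace(threshold, 50000, 10000)
total_rate = sum(rates[wt] * genpareto.sf(grid - threshold, c, loc=0.0, scale=s)
                 for wt, (c, s) in fits.items())
design_flow = grid[np.argmax(total_rate <= 1.0 / 1250.0)]
print(round(design_flow))
```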
Expertise on glaucoma patients (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leydhecker, W.
1973-02-01
The difficulties encountered in cases of glaucoma where external factors are brought into relationship with the disease are discussed. The discussion is based on the author's own case collection. Glaucoma simplex seldom requires compensation, e.g., when medical treatment had been impossible over several years. Acute angle glaucoma requires compensation when its occurrence can be brought into connection with an exceptionally emotional environmental situation. In chronic angle closure glaucoma, compensation is required when an aggravation of the situation brings about an acute increase of I.O.P. or when treatment has been impossible. A glaucoma can occur decades after tears of the ciliary body after contusions. The diagnosis can be established only through comparative gonioscopy of the two eyes. Even complicated cases can be clarified through exact reconstruction of the course of disease and the findings, supported by consultations with the ophthalmologists who treated the cases before. Examples of such cases are shown. Consensual changes of I.O.P. occurred only in 0.5% of cases. The question of the conditions under which a consensual glaucoma can be assumed is discussed. A case of secondary pigmentary glaucoma through X rays is presented. (auth)
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
29 CFR 4231.10 - Actuarial calculations and assumptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... MULTIEMPLOYER PLANS § 4231.10 Actuarial calculations and assumptions. (a) Most recent valuation. All calculations required by this part must be based on the most recent actuarial valuation as of the date of...
Project Air Force, Annual Report 2003
2003-01-01
to Simulate Personnel Retention The CAPM system is based on a simple assumption about employee retention: A rational individual faced with the...analysis to certain parts of the force. CAPM keeps a complete record of the assumptions, policies, and data used for each scenario. Thus decisionmakers...premises and assumptions. Instead, the Commission concluded that space is a separate operating arena equivalent to the air, land, and maritime
A STRICTLY CONTRACTIVE PEACEMAN-RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING.
Bingsheng, He; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming
2014-07-01
In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
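A toy illustration of the update order on a simple separable quadratic with closed-form subproblems is given below; alpha = 1 corresponds to the original PRSM and alpha in (0, 1) to the strictly contractive variant. The penalty parameter, relaxation factor and test problem are arbitrary choices, and this is a sketch of the scheme rather than the paper's code.

```python
# Toy illustration of the (strictly contractive) Peaceman-Rachford splitting
# scheme on a simple separable problem with closed-form subproblems:
#   minimize 0.5*||x - c1||^2 + 0.5*||y - c2||^2  subject to  x + y = b.
# alpha = 1 is the original PRSM; alpha in (0, 1) is the strictly contractive
# variant discussed in the paper. Sketch of the update order, not the paper's code.
import numpy as np

def prsm(c1, c2, b, beta=1.0, alpha=0.9, iters=100):
    x = np.zeros_like(c1); y = np.zeros_like(c2); lam = np.zeros_like(b)
    for _ in range(iters):
        x = (c1 + lam + beta * (b - y)) / (1.0 + beta)     # x-subproblem (closed form)
        lam = lam - alpha * beta * (x + y - b)             # first (relaxed) multiplier update
        y = (c2 + lam + beta * (b - x)) / (1.0 + beta)     # y-subproblem (closed form)
        lam = lam - alpha * beta * (x + y - b)             # second (relaxed) multiplier update
    return x, y

c1, c2, b = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([5.0, 0.0])
x, y = prsm(c1, c2, b)
print(x, y, x + y - b)        # constraint residual should be close to zero
```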
Kansal-Kalra, Suleena; Milad, Magdy P; Grobman, William A
2005-09-01
To compare the economic consequences of proceeding directly to IVF to those of proceeding with gonadotropins followed by IVF in patients <35 years of age with unexplained infertility. A decision-tree model. The model incorporated the cost and success of each infertility regimen as well as the pregnancy-associated costs of singleton or multiple gestations and the risk and cost of cerebral palsy. Cost per live birth. Both treatment arms resulted in a >80% chance of birth. The gonadotropin arm was over four times more likely to result in a high-order multiple pregnancy (HOMP). Despite this, when the base case estimates were utilized, immediate IVF emerged as more costly per live birth. In sensitivity analysis, immediate IVF became less costly per live birth when IVF was more likely to achieve birth (55.1%) or cheaper (11,432 dollars) than our base case assumptions. After considering the risk and cost of HOMP, immediate IVF is more costly per live birth than a trial of gonadotropins prior to IVF.
An Industrial Ecology Approach to Municipal Solid Waste ...
The organic fraction of municipal solid waste provides abundant opportunities for industrial ecology-based symbiotic use. Energy production, economics, and environmental aspects are analyzed for four alternatives based on different technologies: incineration with energy recovery, gasification, anaerobic digestion, and fermentation. In these cases electricity and ethanol are the products considered, but other products and attempts at symbiosis can be made. The four technologies are in various states of commercial development. To highlight their relative complexities some adjustable parameters which are important for the operability of each process are discussed. While these technologies need to be considered for specific locations and circumstances, generalized economic and environmental information suggests relative comparisons for newly conceptualized processes. The results of industrial ecology-based analysis suggest that anaerobic digestion may improve seven emission categories, while fermentation, gasification, and incineration successively improve fewer emissions. A conceptual level analysis indicates that gasification, anaerobic digestion, and fermentation alternatives lead to positive economic results. In each case the alternatives and their assumptions need further analysis for any particular community. Presents information useful for analyzing the sustainability of alternatives for the management of municipal solid waste.
Sideris, Eleftherios; Corbett, Mark; Palmer, Stephen; Woolacott, Nerys; Bojke, Laura
2016-11-01
As part of the National Institute for Health and Clinical Excellence (NICE) single technology appraisal (STA) process, the manufacturer of apremilast was invited to submit evidence for its clinical and cost effectiveness for the treatment of adults with active psoriatic arthritis (PsA) for whom disease-modifying anti-rheumatic drugs (DMARDs) have been inadequately effective, not tolerated or contraindicated. The Centre for Reviews and Dissemination and the Centre for Health Economics at the University of York were commissioned to act as the independent Evidence Review Group (ERG). This paper describes the ERG's review of the company's submission and the resulting ERG report, and summarises the NICE Appraisal Committee's subsequent guidance (December 2015). In the company's initial submission, the base-case analysis resulted in an incremental cost-effectiveness ratio (ICER) of £14,683 per quality-adjusted life-year (QALY) gained for the sequence including apremilast (positioned before tumour necrosis factor [TNF]-α inhibitors) versus a comparator sequence without apremilast. However, the ERG considered that the base-case sequence proposed by the company represented a limited set of potentially relevant treatment sequences and positions for apremilast. The company's base-case results were therefore not a sufficient basis to inform the most efficient use and position of apremilast. The exploratory ERG analyses indicated that apremilast is more effective (i.e. produces higher health gains) when positioned after TNF-α inhibitor therapies. Furthermore, assumptions made regarding a potential beneficial effect of apremilast on long-term Health Assessment Questionnaire (HAQ) progression, which cannot be substantiated, have a very significant impact on the results. The NICE Appraisal Committee (AC), when taking into account its preferred assumptions for HAQ progression for patients on treatment with apremilast, placebo response and monitoring costs for apremilast, concluded that the addition of apremilast resulted in cost savings but also a QALY loss. These cost savings were not high enough to compensate for the clinical effectiveness that would be lost. The AC thus decided that apremilast, alone or in combination with DMARD therapy, is not recommended for treating adults with active PsA that has not responded to prior DMARD therapy, or where such therapy is not tolerated.
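For reference, the ICER quoted above is the standard ratio of incremental costs to incremental QALYs between the sequence including apremilast and the comparator sequence without it (this is the generic definition, not a formula taken from the submission):

```latex
\mathrm{ICER}
  = \frac{C_{\text{apremilast sequence}} - C_{\text{comparator sequence}}}
         {E_{\text{apremilast sequence}} - E_{\text{comparator sequence}}}
  \approx \pounds 14{,}683 \text{ per QALY gained (company base case)}
```

where C denotes total discounted costs and E denotes total discounted QALYs for each treatment sequence.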
NASA Astrophysics Data System (ADS)
Chen, Po-Chun; Wang, Yuan-Heng; You, Gene Jiing-Yun; Wei, Chih-Chiang
2017-02-01
Future climatic conditions will likely not satisfy the stationarity assumption. To address this concern, this study applied three methods to analyze non-stationarity in hydrologic conditions. Based on the principle of identifying distribution and trends (IDT) with time-varying moments, we employed parametric weighted least squares (WLS) estimation in conjunction with the non-parametric discrete wavelet transform (DWT) and ensemble empirical mode decomposition (EEMD). Our aim was to evaluate the applicability of non-parametric approaches compared with traditional parametric methods. In contrast to most previous studies, which analyzed the non-stationarity of first moments only, we incorporated second-moment analysis. Through the estimation of long-term risk, we were able to examine the behavior of return periods under two different definitions: the reciprocal of the exceedance probability of occurrence and the expected recurrence time. The proposed framework represents an improvement over stationary frequency analysis for the design of hydraulic systems. A case study was performed using precipitation data from major climate stations in Taiwan to evaluate the non-stationarity of annual maximum daily precipitation. The results demonstrate the applicability of all three methods in the identification of non-stationarity. For most cases, no significant differences were observed between the trends identified using WLS, DWT, and EEMD. According to the results, a linear model should be able to capture time-variance in either the first or second moment, while parabolic trends should be used with caution due to their characteristic rapid increases. It is also observed that local variations in precipitation tend to be overemphasized by DWT and EEMD. Because two definitions of the return period are in use, interpretations can be ambiguous. With non-stationarity taken into account, the return period is relatively small under the definition of expected recurrence time compared to the estimate based on the reciprocal of the exceedance probability of occurrence. However, the calculation of expected recurrence time rests on the assumption of perfect knowledge of long-term risk, which involves high uncertainty. When the risk decreases with time, the expected recurrence time can diverge, making this definition inapplicable for engineering purposes.
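The two return-period definitions discussed above can be made concrete with a short sketch. Given a sequence of annual exceedance probabilities from a fitted non-stationary model (the values below are hypothetical), the first definition takes the reciprocal of the exceedance probability in a reference year, while the second computes the expected waiting time until the first exceedance.

```python
import numpy as np

def reciprocal_return_period(p_ref):
    """Definition 1: reciprocal of the exceedance probability in a reference year."""
    return 1.0 / p_ref

def expected_waiting_time(p):
    """Definition 2: expected number of years until the first exceedance,
    E[T] = sum_x x * p[x] * prod_{t<x} (1 - p[t]), for year-by-year probabilities p."""
    survival, expected = 1.0, 0.0
    for x, p_x in enumerate(p, start=1):
        expected += x * p_x * survival
        survival *= 1.0 - p_x
    return expected            # mass beyond the supplied horizon is ignored

# Hypothetical non-stationary exceedance probabilities over a 500-year horizon,
# increasing with time (e.g. from a time-varying distribution fit).
p = np.clip(0.01 + 0.0002 * np.arange(500), 0.0, 1.0)
print(reciprocal_return_period(p[0]))    # 100 years under definition 1
print(expected_waiting_time(p))          # noticeably shorter under definition 2 when risk rises
```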
ERIC Educational Resources Information Center
Straubhaar, Rolf
2017-01-01
The purpose of this article is to ethnographically document the market-based ideological assumptions of Rio de Janeiro's educational policymakers, and the ways in which those assumptions have informed these policymakers' decision to implement value-added modeling-based teacher evaluation policies. Drawing on the anthropological literature on…
Behavioral Modeling Based on Probabilistic Finite Automata: An Empirical Study.
Tîrnăucă, Cristina; Montaña, José L; Ontañón, Santiago; González, Avelino J; Pardo, Luis M
2016-06-24
Imagine an agent that performs tasks according to different strategies. The goal of Behavioral Recognition (BR) is to identify which of the available strategies is the one being used by the agent, by simply observing the agent's actions and the environmental conditions during a certain period of time. The goal of Behavioral Cloning (BC) is more ambitious. In this last case, the learner must be able to build a model of the behavior of the agent. In both settings, the only assumption is that the learner has access to a training set that contains instances of observed behavioral traces for each available strategy. This paper studies a machine learning approach based on Probabilistic Finite Automata (PFAs), capable of achieving both the recognition and cloning tasks. We evaluate the performance of PFAs in the context of a simulated learning environment (in this case, a virtual Roomba vacuum cleaner robot), and compare it with a collection of other machine learning approaches.
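To make the PFA-based recognition step concrete, the sketch below scores an observed action trace under each candidate strategy's PFA by its log-likelihood and returns the best-scoring strategy. The dictionary encoding with deterministic transitions and the toy automata are simplifying assumptions for illustration, not the paper's implementation.

```python
import math

# A PFA is encoded here as {state: {symbol: (next_state, probability)}} plus a start state.
def log_likelihood(pfa, start_state, trace):
    """Log-probability of an observed symbol sequence under the PFA."""
    state, loglik = start_state, 0.0
    for symbol in trace:
        if symbol not in pfa.get(state, {}):
            return float("-inf")            # trace impossible under this model
        state, prob = pfa[state][symbol]
        loglik += math.log(prob)
    return loglik

def recognize(trace, models):
    """Behavioral Recognition: pick the strategy whose PFA best explains the trace."""
    return max(models, key=lambda name: log_likelihood(*models[name], trace=trace))

# Two toy strategies for an agent emitting 'move'/'clean' actions.
models = {
    "spiral": ({0: {"move": (0, 0.8), "clean": (0, 0.2)}}, 0),
    "random": ({0: {"move": (0, 0.5), "clean": (0, 0.5)}}, 0),
}
print(recognize(["move", "move", "clean", "move"], models))    # -> "spiral"
```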
Morrison, Mike; DeVaul-Fetters, Amanda; Gawronski, Bertram
2016-08-01
Most legal systems are based on the premise that defendants are treated as innocent until proven guilty and that decisions will be unbiased and solely based on the facts of the case. The validity of this assumption has been questioned for cases involving racial minority members, in that racial bias among jury members may influence jury decisions. The current research shows that legal professionals are adept at identifying jurors with levels of implicit race bias that are consistent with their legal interests. Using a simulated voir dire, professionals assigned to the role of defense lawyer for a Black defendant were more likely to exclude jurors with high levels of implicit race bias, whereas prosecutors of a Black defendant did the opposite. There was no relation between professionals' peremptory challenges and jurors' levels of explicit race bias. Implications for the role of racial bias in legal decision making are discussed. © 2016 by the Society for Personality and Social Psychology, Inc.
DNS and modeling of the interaction between turbulent premixed flames and walls
NASA Technical Reports Server (NTRS)
Poinsot, T. J.; Haworth, D. C.
1992-01-01
The interaction between turbulent premixed flames and walls is studied using a two-dimensional full Navier-Stokes solver with simple chemistry. The effects of wall distance on the local and global flame structure are investigated. Quenching distances and maximum wall heat fluxes during quenching are computed in laminar cases and are found to be comparable to experimental and analytical results. For turbulent cases, it is shown that quenching distances and maximum heat fluxes remain of the same order as for laminar flames. Based on simulation results, a 'law-of-the-wall' model is derived to describe the interaction between a turbulent premixed flame and a wall. This model is constructed to provide reasonable behavior of flame surface density near a wall under the assumption that flame-wall interaction takes place at scales smaller than the computational mesh. It can be implemented in conjunction with any of several recent flamelet models based on a modeled surface density equation, with no additional constraints on mesh size or time step.
Scarpazza, Cristina; Nichols, Thomas E; Seramondi, Donato; Maumet, Camille; Sartori, Giuseppe; Mechelli, Andrea
2016-01-01
In recent years, an increasing number of studies have used Voxel Based Morphometry (VBM) to compare a single patient with a psychiatric or neurological condition of interest against a group of healthy controls. However, the validity of this approach critically relies on the assumption that the single patient is drawn from a hypothetical population with a normal distribution and variance equal to that of the control group. In a previous investigation, we demonstrated that the family-wise false positive error rate (i.e., the proportion of statistical comparisons yielding at least one false positive) in single case VBM is much higher than expected (Scarpazza et al., 2013). Here, we examine whether the use of non-parametric statistics, which does not rely on the assumptions of normal distribution and equal variance, would enable the investigation of single subjects with good control of false positive risk. We empirically estimated false positive rates (FPRs) in single case non-parametric VBM by performing 400 statistical comparisons between a single disease-free individual and a group of 100 disease-free controls. The impact of smoothing (4, 8, and 12 mm) and type of pre-processing (Modulated, Unmodulated) was also examined, as these factors have been found to influence FPRs in previous investigations using parametric statistics. The 400 statistical comparisons were repeated using two independent, freely available data sets in order to maximize the generalizability of the results. We found that the family-wise error rate was 5% for increases and 3.6% for decreases in one data set, and 5.6% for increases and 6.3% for decreases in the other data set (5% nominal). Further, these results were not dependent on the level of smoothing and modulation. Therefore, the present study provides empirical evidence that single case VBM studies with non-parametric statistics are not susceptible to high false positive rates. The critical implication of this finding is that VBM can be used to characterize neuroanatomical alterations in individual subjects as long as non-parametric statistics are employed.
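The empirical estimation of the family-wise error rate described above can be sketched as follows: repeatedly treat one disease-free subject as the 'patient', run a non-parametric comparison against the control group at every voxel with a max-statistic (family-wise) correction, and record how often at least one voxel is declared significant. The data here are synthetic Gaussian noise and the test statistic is a simple standardized difference, so this illustrates only the logic of the procedure, not VBM pre-processing.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_case_fwe_p(patient, controls, n_perm=200):
    """Monte Carlo permutation p-value for 'any voxel differs', using the max-|z| statistic.
    Relabelling which subject plays the patient is valid under exchangeability of the null."""
    pooled = np.vstack([patient, controls])                 # shape (n_controls + 1, n_voxels)
    def max_stat(idx):
        pat, ctr = pooled[idx], np.delete(pooled, idx, axis=0)
        return np.max(np.abs((pat - ctr.mean(axis=0)) / ctr.std(axis=0, ddof=1)))
    observed = max_stat(0)
    null = [max_stat(rng.integers(pooled.shape[0])) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

# Empirical family-wise false positive rate over repeated null comparisons (synthetic data).
n_controls, n_voxels, n_comparisons = 100, 100, 50
false_positives = 0
for _ in range(n_comparisons):
    controls = rng.normal(size=(n_controls, n_voxels))
    patient = rng.normal(size=n_voxels)                     # drawn from the same population
    false_positives += single_case_fwe_p(patient, controls) < 0.05
print("empirical FWER:", false_positives / n_comparisons)
```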
Autoimmune diseases and vaccinations.
Vial, Thierry; Descotes, Jacques
2004-01-01
The potential association between vaccination and autoimmune diseases has been largely questioned in the past few years, but this assumption has mostly been based on case reports. The available evidence derived from several negative epidemiological studies is reassuring and at least indicates that vaccines are not a major cause of autoimmune diseases. However, there are still uncertainties as to whether a susceptible subpopulation may be at a higher risk of developing an autoimmune disease without causing an overall increase in the disease incidence. Based on selected examples, this review highlights the difficulties in assessing this issue. We suggest that a potential link between vaccines and autoimmune diseases cannot be definitely ruled out and should be carefully explored during the development of new candidate vaccines. Copyright John Libbey Eurotext 2003.
NASA Astrophysics Data System (ADS)
Mignone, B. K.
2008-12-01
Effective solutions to the climate change problem will require unprecedented cooperation across space, continuity across time and coordination between disciplines. One well-known methodology for synthesizing the lessons of physical science, energy engineering and economics is integrated assessment. Typically, integrated assessment models use scientific and technological relationships as physical constraints in a larger macroeconomic optimization that is designed to either balance the costs and benefits of climate change mitigation or find the least-cost path to an exogenously prescribed endpoint (e.g. atmospheric CO2 stabilization). The usefulness of these models depends to a large extent on the quality of the assumptions and the relevance of the outcome metrics chosen by the user. In this study, I show how a scientifically-based emissions reduction scenario can be combined with engineering-based assumptions about the energy system (e.g. estimates of the marginal cost premium of carbon-free technology) to yield insights about the price path of CO2 under a future regulatory regime. I then show how this outcome metric (carbon price) relates to key decisions about the design of a future cap-and-trade system and the way in which future carbon markets may be regulated.
Kendall, William L.; Hines, James E.; Nichols, James D.; Grant, Evan H. Campbell
2013-01-01
Occupancy statistical models that account for imperfect detection have proved very useful in several areas of ecology, including species distribution and spatial dynamics, disease ecology, and ecological responses to climate change. These models are based on the collection of multiple samples at each of a number of sites within a given season, during which it is assumed the species is either absent or present and available for detection while each sample is taken. However, for some species, individuals are only present or available for detection seasonally. We present a statistical model that relaxes the closure assumption within a season by permitting staggered entry and exit times for the species of interest at each site. Based on simulation, our open model eliminates bias in occupancy estimators and in some cases increases precision. The power to detect the violation of closure is high if detection probability is reasonably high. In addition to providing more robust estimation of occupancy, this model permits comparison of phenology across sites, species, or years, by modeling variation in arrival or departure probabilities. In a comparison of four species of amphibians in Maryland we found that two toad species arrived at breeding sites later in the season than a salamander and frog species, and departed from sites earlier.
McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco
2016-12-08
Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
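A stripped-down version of the comparison described above is sketched below: each infected individual may progress to a severe stage and then die, with either a single population-averaged probability per transition or probabilities scaled by an individual frailty, and DALYs are accumulated as years lived with disability plus years of life lost. The disability weight, durations, frailty distribution, and transition probabilities are arbitrary placeholders, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_dalys(n, p_progress=0.2, p_death=0.1, frailty=None,
                dw_severe=0.3, dur_severe=1.0, yll=30.0):
    """Total DALYs when the same individual frailty scales both transitions (sketch)."""
    f = frailty if frailty is not None else np.ones(n)
    progresses = rng.random(n) < np.clip(p_progress * f, 0.0, 1.0)
    dies = progresses & (rng.random(n) < np.clip(p_death * f, 0.0, 1.0))
    return progresses.sum() * dw_severe * dur_severe + dies.sum() * yll

n = 100_000
homogeneous = total_dalys(n)
# Heterogeneity: gamma-distributed frailty with mean 1 (an assumed distribution).
heterogeneous = total_dalys(n, frailty=rng.gamma(shape=2.0, scale=0.5, size=n))
print(homogeneous, heterogeneous)
# Because the same frailty drives both transitions, severe outcomes concentrate in
# high-risk individuals and the total differs from the homogeneous calculation;
# the size and sign of the bias depend on the assumed natural history.
```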
Formalization and analysis of reasoning by assumption.
Bosse, Tibor; Jonker, Catholijn M; Treur, Jan
2006-01-02
This article introduces a novel approach for the analysis of the dynamics of reasoning processes and explores its applicability for the reasoning pattern called reasoning by assumption. More specifically, for a case study in the domain of a Master Mind game, it is shown how empirical human reasoning traces can be formalized and automatically analyzed against dynamic properties they fulfill. To this end, for the pattern of reasoning by assumption a variety of dynamic properties have been specified, some of which are considered characteristic for the reasoning pattern, whereas some other properties can be used to discriminate among different approaches to the reasoning. These properties have been automatically checked for the traces acquired in experiments undertaken. The approach turned out to be beneficial from two perspectives. First, checking characteristic properties contributes to the empirical validation of a theory on reasoning by assumption. Second, checking discriminating properties allows the analyst to identify different classes of human reasoners. 2006 Lawrence Erlbaum Associates, Inc.
Park, H M; Lee, J S; Kim, T W
2007-11-15
In the analysis of electroosmotic flows, the internal electric potential is usually modeled by the Poisson-Boltzmann equation. The Poisson-Boltzmann equation is derived from the assumption of thermodynamic equilibrium where the ionic distributions are not affected by fluid flows. Although this is a reasonable assumption for steady electroosmotic flows through straight microchannels, there are some important cases where convective transport of ions has nontrivial effects. In these cases, it is necessary to adopt the Nernst-Planck equation instead of the Poisson-Boltzmann equation to model the internal electric field. In the present work, the predictions of the Nernst-Planck equation are compared with those of the Poisson-Boltzmann equation for electroosmotic flows in various microchannels where the convective transport of ions is not negligible.
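For reference, and using generic notation rather than the paper's, the two models being compared are the Poisson-Boltzmann equation, in which equilibrium Boltzmann ion distributions are inserted into Poisson's equation, and the Nernst-Planck description, in which ion concentrations evolve under diffusion, electromigration, and convection:

```latex
% Poisson-Boltzmann (thermodynamic equilibrium, flow does not affect the ions)
\nabla^2 \psi = -\frac{1}{\varepsilon}\sum_i z_i e\, n_i^{\infty}
                \exp\!\left(-\frac{z_i e \psi}{k_B T}\right)

% Nernst-Planck transport coupled to Poisson's equation
\frac{\partial n_i}{\partial t} + \nabla\cdot\mathbf{J}_i = 0, \qquad
\mathbf{J}_i = -D_i \nabla n_i - \frac{z_i e D_i}{k_B T}\, n_i \nabla\psi + n_i \mathbf{u},
\qquad
\nabla^2 \psi = -\frac{1}{\varepsilon}\sum_i z_i e\, n_i
```

The convective flux n_i u is the term the Poisson-Boltzmann model effectively drops; when it is not negligible, the two descriptions diverge.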
Novice Teachers' Case Dilemmas: Surprising Perspectives Related to Diversity.
ERIC Educational Resources Information Center
Mastrilli, Thomas; Sardo-Brown, Deborah; Hinson, Stephanie
This study described novice teachers' case dilemmas, analyzing them for assumptions made by teachers about teaching and learning as well as for solutions to the dilemmas. Twenty-one of the thirty-six dilemmas emphasized either minority students, students of low socioeconomic status, or students from single-parent households. Among the issues…
The Significance of Motivation in Student-Centred Learning: A Reflective Case Study
ERIC Educational Resources Information Center
Maclellan, Effie
2008-01-01
The theoretical underpinnings of student-centred learning suggest motivation to be an integral component. However, lack of clarification of what is involved in motivation in education often results in unchallenged assumptions that fail to recognise that what motivates some students may alienate others. This case study, using socio-cognitive…
K-5 Student Experiences in a Dance Residency: A Case Study
ERIC Educational Resources Information Center
Leonard, Alison E.; McShane-Hellenbrand, Karen
2012-01-01
In this article, the collaborating authors, a researcher and dance artist, confront assumptions surrounding dance's experiential nature and assessment in schools. Presenting findings from a qualitative case study assessment of a three-week, whole-school dance artist-in-residence at a diverse and inclusive metropolitan K-5 school, the authors focus…
A Case Study of Conflict in an Educational Workplace: Managing Personal and Cultural Differences
ERIC Educational Resources Information Center
Torpey, Michael John
2006-01-01
This article is about conflict in an educational workplace setting. It reports on a case study investigating the emergence, development, and management of conflict among diverse native English speakers working as language instructors within a Japanese university. The example of conflict presented, which deals with divergent assumptions about the…
A Case Example of Insect Gymnastics: How Is Non-Euclidean Geometry Learned?
ERIC Educational Resources Information Center
Junius, Premalatha
2008-01-01
The focus of the article is on the complex cognitive process involved in learning the concept of "straightness" in Non-Euclidean geometry. Learning new material is viewed through a conflict resolution framework, as a student questions familiar assumptions understood in Euclidean geometry. A case study reveals how mathematization of the straight…
Assessing the role of case mix in cesarean delivery rates.
Lieberman, E; Lang, J M; Heffner, L J; Cohen, A
1998-07-01
Implicit in comparisons of unadjusted cesarean rates for hospitals and providers is the assumption that differences result from management practices rather than differences in case mix. This study proposes a method for comparison of cesarean rates that takes the effect of case mix into account. All women delivered of infants at our institution from December 1, 1994, through July 31, 1995, were classified according to whether they received care from community-based practitioners (N=3913) or from the hospital-based practice that serves a higher-risk population (N=1556). Women were categorized according to both obstetric history (nulliparas, multiparas without a previous cesarean, multiparas with a previous cesarean) and the presence of obstetric conditions influencing the risk of cesarean delivery (multiple birth, breech presentation or transverse lie, preterm, no trial of labor for a medical indication). We determined the percent of women in each parity-obstetric condition subgroup and calculated a standardized cesarean rate for the hospital-based practice using the case mix of the community-based practitioners as the standard. The crude cesarean rate was higher for the hospital-based practice (24.4%) than for the community-based practitioners (21.5%), a rate difference of 2.9% (95% confidence interval=0.4%, 5.4%; P=.02). However, the proportion of women falling into categories conferring a high risk of cesarean delivery (multiple pregnancy, breech presentation or transverse lie, preterm, no trial of labor permitted) was twice as high for the hospital-based practice (24.4% hospital, 12.1% community). The standardization indicates that if the hospital-based practitioners had the same case mix as community-based practitioners, their overall cesarean rate would be 20.1%, similar to the 21.5% rate of community providers (rate difference=-1.4%, 95% confidence interval =-3.1%, 0.3%; P=.11). Standardization for case mix provides a mechanism for distinguishing differences in cesarean rates resulting from case mix from those relating to differences in practice. The methodology is not complex and could be applied to facilitate fairer comparisons of rates among providers and across institutions.
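The adjustment described above is direct standardization: the hospital-based practice's stratum-specific cesarean rates are re-weighted by the community practices' case-mix proportions. A minimal sketch with hypothetical strata and numbers (not the study's data) is:

```python
def directly_standardized_rate(stratum_rates, standard_weights):
    """Weighted average of stratum-specific rates using the standard population's case mix."""
    assert abs(sum(standard_weights.values()) - 1.0) < 1e-9
    return sum(stratum_rates[s] * standard_weights[s] for s in standard_weights)

# Hypothetical parity / obstetric-condition strata; weights are the community case mix.
hospital_rates = {"nullipara": 0.22, "multipara_no_prior_cs": 0.10,
                  "prior_cs": 0.45, "high_risk": 0.60}
community_mix = {"nullipara": 0.45, "multipara_no_prior_cs": 0.35,
                 "prior_cs": 0.08, "high_risk": 0.12}
print(directly_standardized_rate(hospital_rates, community_mix))
```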
Xu, John
2017-01-01
This paper addresses the key assumption in behavioral and transportation planning literature that, when people use a transit system more frequently, they become less dependent on and less sensitive to transit maps in their decision-making. Therefore, according to this assumption, map changes are much less impactful to travel decisions of frequent riders than to that of first-time or new passengers. This assumption—though never empirically validated—has been the major hurdle for transit maps to becoming a planning tool to change passengers’ behavior. This paper examines this assumption using the Washington DC metro map as a case study by conducting a route choice experiment between 30 Origin-Destination (O-D) pairs on seven metro map designs. The experiment targets two types of passengers: frequent metro riders through advertisements on a free daily newspaper available at DC metro stations, and general residents in the Washington metropolitan area through Amazon Mechanical Turk, an online crowdsourcing platform. A total of 255 and 371 participants made 2024 and 2960 route choices in the respective experiments. The results show that frequent passengers are in fact more sensitive to subtle changes in map design than general residents who are less likely to be familiar with the metro map and therefore unaffected by map changes presented in the alternative designs. The work disproves the aforementioned assumption and further validates metro maps as an effective planning tool in transit systems. PMID:29068371
Model Considerations for Memory-based Automatic Music Transcription
NASA Astrophysics Data System (ADS)
Albrecht, Štěpán; Šmídl, Václav
2009-12-01
The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density. The validity of the model is tested in simulation using synthetic data.
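One simple way to estimate the unknown weights in the superposition model above, corresponding to a non-negativity assumption on the weights rather than to any particular prior proposed in the record, is non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Library of known sounds: each column is one sound's (e.g. magnitude-spectrum) template.
n_features, n_sounds = 512, 20
library = np.abs(rng.normal(size=(n_features, n_sounds)))

# Synthetic observation: a sparse non-negative mixture of three library sounds plus noise.
true_w = np.zeros(n_sounds)
true_w[[2, 7, 11]] = [1.0, 0.5, 0.8]
observation = library @ true_w + 0.01 * np.abs(rng.normal(size=n_features))

# Estimate the weights under the non-negativity assumption.
w_hat, _ = nnls(library, observation)
print(np.round(w_hat, 2))
```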
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there are increasing interests on clustering variables for the minimization of data redundancy and the maximization of variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
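A minimal sketch of the first step described in the record above, characterizing pairwise nonlinear dependence with mutual information, is given below. The histogram-based estimator and the agglomerative clustering call are illustrative simplifications standing in for the Dirichlet process model proposed by the authors.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

def pairwise_mutual_information(X, bins=10):
    """Histogram-based mutual information between every pair of columns of X."""
    n_vars = X.shape[1]
    disc = np.column_stack([np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins))
                            for j in range(n_vars)])
    mi = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i, n_vars):
            mi[i, j] = mi[j, i] = mutual_info_score(disc[:, i], disc[:, j])
    return mi

# Synthetic data: x2 depends nonlinearly on x0, x3 on x1, x4 is independent noise.
n = 2000
x0, x1, x4 = rng.normal(size=(3, n))
X = np.column_stack([x0, x1, np.sin(3 * x0) + 0.1 * rng.normal(size=n),
                     x1 ** 2 + 0.1 * rng.normal(size=n), x4])
mi = pairwise_mutual_information(X)

# Convert MI to a dissimilarity and cluster (a simple stand-in for the DP clustering step).
dist = squareform(mi.max() - mi, checks=False)
labels = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(labels)   # variables with strong nonlinear dependence tend to land in the same cluster
```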
Key rate for calibration robust entanglement based BB84 quantum key distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittsovich, O.; Moroder, T.
2014-12-04
We apply the approach of verifying entanglement, which is based on the sole knowledge of the dimension of the underlying physical system, to the entanglement-based version of the BB84 quantum key distribution protocol. We show that the familiar one-way key rate formula already holds if one merely assumes that one of the parties is measuring a qubit; no further assumptions about the measurement are needed.
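The 'familiar one-way key rate formula' referred to above is, in standard notation (stated here from general knowledge of BB84 security analyses rather than from the paper itself),

```latex
r \;\ge\; 1 - h(e_x) - h(e_z), \qquad
h(p) = -p\log_2 p - (1-p)\log_2(1-p),
```

where e_x and e_z are the error rates measured in the two conjugate bases; for symmetric errors e_x = e_z = Q this reduces to r >= 1 - 2h(Q).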
NASA Astrophysics Data System (ADS)
Hüser, Imke; Harder, Hartwig; Heil, Angelika; Kaiser, Johannes W.
2017-09-01
Lagrangian particle dispersion models (LPDMs) in backward mode are widely used to quantify the impact of transboundary pollution on downwind sites. Most LPDM applications count particles with a technique that introduces a so-called footprint layer (FL) with constant height, in which passing air tracer particles are assumed to be affected by surface emissions. The mixing layer dynamics are represented by the underlying meteorological model. This particle counting technique implicitly assumes that the atmosphere is well mixed in the FL. We have performed backward trajectory simulations with the FLEXPART model starting at Cyprus to calculate the sensitivity to emissions of upwind pollution sources. The emission sensitivity is used to quantify source contributions at the receptor and support the interpretation of ground measurements carried out during the CYPHEX campaign in July 2014. Here we analyse the effects of different constant and dynamic FL height assumptions. The results show that calculations with FL heights of 100 and 300 m yield similar but still discernible results. Comparison of calculations with FL heights constant at 300 m and dynamically following the planetary boundary layer (PBL) height exhibits systematic differences, with daytime and night-time sensitivity differences compensating for each other. The differences at daytime when a well-mixed PBL can be assumed indicate that residual inaccuracies in the representation of the mixing layer dynamics in the trajectories may introduce errors in the impact assessment on downwind sites. Emissions from vegetation fires are mixed up by pyrogenic convection which is not represented in FLEXPART. Neglecting this convection may lead to severe over- or underestimations of the downwind smoke concentrations. Introducing an extreme fire source from a different year in our study period and using fire-observation-based plume heights as reference, we find an overestimation of more than 60 % by the constant FL height assumptions used for surface emissions. Assuming a FL that follows the PBL may reproduce the peak of the smoke plume passing through but erroneously elevates the background for shallow stable PBL heights. It might thus be a reasonable assumption for open biomass burning emissions wherever observation-based injection heights are not available.
NASA Astrophysics Data System (ADS)
Dai, Fei; Winn, Joshua N.; Berta-Thompson, Zachory; Sanchis-Ojeda, Roberto; Albrecht, Simon
2018-04-01
The light curve of an eclipsing system shows anomalies whenever the eclipsing body passes in front of active regions on the eclipsed star. In some cases, the pattern of anomalies can be used to determine the obliquity Ψ of the eclipsed star. Here we present a method for detecting and analyzing these patterns, based on a statistical test for correlations between the anomalies observed in a sequence of eclipses. Compared to previous methods, ours makes fewer assumptions and is easier to automate. We apply it to a sample of 64 stars with transiting planets and 24 eclipsing binaries for which precise space-based data are available, and for which there was either some indication of flux anomalies or a previously reported obliquity measurement. We were able to determine obliquities for 10 stars with hot Jupiters. In particular we found Ψ ≲ 10° for Kepler-45, which is only the second M dwarf with a measured obliquity. The other eight cases are G and K stars with low obliquities. Among the eclipsing binaries, we were able to determine obliquities in eight cases, all of which are consistent with zero. Our results also reveal some common patterns of stellar activity for magnetically active G and K stars, including persistently active longitudes.
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.
ICU early physical rehabilitation programs: financial modeling of cost savings.
Lord, Robert K; Mayhew, Christopher R; Korupolu, Radha; Mantheiy, Earl C; Friedman, Michael A; Palmer, Jeffrey B; Needham, Dale M
2013-03-01
To evaluate the potential annual net cost savings of implementing an ICU early rehabilitation program. Using data from existing publications and actual experience with an early rehabilitation program in the Johns Hopkins Hospital Medical ICU, we developed a model of net financial savings/costs and presented results for ICUs with 200, 600, 900, and 2,000 annual admissions, accounting for both conservative- and best-case scenarios. Our example scenario provided a projected financial analysis of the Johns Hopkins Medical ICU early rehabilitation program, with 900 admissions per year, using actual reductions in length of stay achieved by this program. U.S.-based adult ICUs. Financial modeling of the introduction of an ICU early rehabilitation program. Net cost savings generated in our example scenario, with 900 annual admissions and actual length of stay reductions of 22% and 19% for the ICU and floor, respectively, were $817,836. Sensitivity analyses, which used conservative- and best-case scenarios for length of stay reductions and varied the per-day ICU and floor costs, across ICUs with 200-2,000 annual admissions, yielded financial projections ranging from -$87,611 (net cost) to $3,763,149 (net savings). Of the 24 scenarios included in these sensitivity analyses, 20 (83%) demonstrated net savings, with a relatively small net cost occurring in the remaining four scenarios, mostly when simultaneously combining the most conservative assumptions. A financial model, based on actual experience and published data, projects that investment in an ICU early rehabilitation program can generate net financial savings for U.S. hospitals. Even under the most conservative assumptions, the projected net cost of implementing such a program is modest relative to the substantial improvements in patient outcomes demonstrated by ICU early rehabilitation programs.
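The financial model described above reduces to a simple identity: net savings equal the avoided ICU and floor bed-days multiplied by per-day costs, minus the program's annual cost. The sketch below uses made-up lengths of stay, per-day costs, and program costs purely to show the structure of the calculation; only the 22% and 19% reductions echo the example scenario, and none of the other figures come from the published model.

```python
def net_savings(admissions, icu_los_days, floor_los_days, icu_reduction, floor_reduction,
                icu_cost_per_day, floor_cost_per_day, annual_program_cost):
    """Annual net savings of an ICU early rehabilitation program (sketch)."""
    icu_days_saved = admissions * icu_los_days * icu_reduction
    floor_days_saved = admissions * floor_los_days * floor_reduction
    gross = icu_days_saved * icu_cost_per_day + floor_days_saved * floor_cost_per_day
    return gross - annual_program_cost

# Hypothetical inputs; every cost and length-of-stay figure below is a placeholder.
print(net_savings(admissions=900, icu_los_days=4.0, floor_los_days=6.0,
                  icu_reduction=0.22, floor_reduction=0.19,
                  icu_cost_per_day=1500.0, floor_cost_per_day=500.0,
                  annual_program_cost=500_000.0))
```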
Some European capabilities in satellite cinema exhibition
NASA Astrophysics Data System (ADS)
Bock, Wolfgang
1990-08-01
The likely performance envelope and architecture for satellite cinema systems are derived from simple practical assumptions. A case is made for possible transatlantic cooperation towards establishing a satellite cinema standard.
Knell, R J; Begon, M; Thompson, D J
1996-01-22
Central to theoretical studies of host-pathogen population dynamics is a term describing transmission of the pathogen. This usually assumes that transmission is proportional to the density of infectious hosts or particles and of susceptible individuals. We tested this assumption with the bacterial pathogen Bacillus thuringiensis infecting larvae of Plodia interpunctella, the Indian meal moth. Transmission was found to increase in a more than linear way with host density in fourth and fifth instar P. interpunctella, and to decrease with the density of infectious cadavers in the case of fifth instar larvae. Food availability was shown to play an important part in this process. Therefore, on a number of counts, the usual assumption was found not to apply in our experimental system.
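The transmission assumption being tested above is the standard mass-action term, shown below together with the kind of more flexible form that nonlinear results of this sort motivate (the exponents p and q are generic placeholders, not estimates from the study):

```latex
% Standard assumption: new infections proportional to both densities
\frac{dS}{dt} = -\beta\, S\, I

% More flexible form allowing nonlinear dependence on host and pathogen densities
\frac{dS}{dt} = -\beta\, S^{p} I^{q}, \qquad p, q \text{ not necessarily equal to } 1
```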
Using simulation to aid trial design: Ring-vaccination trials.
Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc
2017-03-01
The 2014-6 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.
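The 'standard sample-size calculation neglecting dynamic effects' mentioned above corresponds to a two-proportion comparison inflated by a cluster design effect. A sketch of that baseline calculation follows; the attack rate, vaccine effect, ring size, and intracluster correlation below are illustrative assumptions, not the trial's parameters.

```python
from math import ceil
from scipy.stats import norm

def ring_trial_sample_size(p_control, vaccine_effect, ring_size, icc,
                           alpha=0.05, power=0.80):
    """Participants needed to detect a difference in attack rates between arms,
    using the standard two-proportion formula with a cluster design effect."""
    p_vacc = p_control * (1.0 - vaccine_effect)
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_per_arm = ((z_a + z_b) ** 2 *
                 (p_control * (1 - p_control) + p_vacc * (1 - p_vacc)) /
                 (p_control - p_vacc) ** 2)
    design_effect = 1 + (ring_size - 1) * icc      # inflation for clustering within rings
    return ceil(2 * n_per_arm * design_effect)

# Hypothetical inputs (placeholders, not the trial's assumptions).
print(ring_trial_sample_size(p_control=0.02, vaccine_effect=0.7, ring_size=50, icc=0.05))
```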
A case study to quantify prediction bounds caused by model-form uncertainty of a portal frame
NASA Astrophysics Data System (ADS)
Van Buren, Kendra L.; Hall, Thomas M.; Gonzales, Lindsey M.; Hemez, François M.; Anton, Steven R.
2015-01-01
Numerical simulations, irrespective of the discipline or application, are often plagued by arbitrary numerical and modeling choices. Arbitrary choices can originate from kinematic assumptions, for example the use of 1D beam, 2D shell, or 3D continuum elements, mesh discretization choices, boundary condition models, and the representation of contact and friction in the simulation. This work takes a step toward understanding the effect of arbitrary choices and model-form assumptions on the accuracy of numerical predictions. The application is the simulation of the first four resonant frequencies of a one-story aluminum portal frame structure under free-free boundary conditions. The main challenge of the portal frame structure resides in modeling the joint connections, for which different modeling assumptions are available. To study this model-form uncertainty, and compare it to other types of uncertainty, two finite element models are developed using solid elements, and with differing representations of the beam-to-column and column-to-base plate connections: (i) contact stiffness coefficients or (ii) tied nodes. Test-analysis correlation is performed to compare the lower and upper bounds of numerical predictions obtained from parametric studies of the joint modeling strategies to the range of experimentally obtained natural frequencies. The approach proposed is, first, to characterize the experimental variability of the joints by varying the bolt torque, method of bolt tightening, and the sequence in which the bolts are tightened. The second step is to convert what is learned from these experimental studies to models that "envelope" the range of observed bolt behavior. We show that this approach, that combines small-scale experiments, sensitivity analysis studies, and bounding-case models, successfully produces lower and upper bounds of resonant frequency predictions that match those measured experimentally on the frame structure. (Approved for unlimited, public release, LA-UR-13-27561).
Cabeza, R
1995-03-01
The dual nature of the Japanese writing system was used to investigate two assumptions of the processing view of memory transfer: (1) that both perceptual and conceptual processing can contribute to the same memory test (mixture assumption) and (2) that both can be broken into more specific processes (subdivision assumption). Supporting the mixture assumption, a word fragment completion test based on ideographic kanji characters (kanji fragment completion test) was affected by both perceptual (hiragana/kanji script shift) and conceptual (levels-of-processing) study manipulations. The script effect appears attributable to the perceptual properties of the kanji fragments, because it did not occur with the use of meaningless hiragana fragments. The mixture assumption is also supported by an effect of study script on an implicit conceptual test (sentence completion), and the subdivision assumption is supported by a crossover dissociation between hiragana and kanji fragment completion as a function of study script.
Supporting calculations and assumptions for use in WESF safety analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hey, B.E.
This document provides a single location for calculations and assumptions used in support of Waste Encapsulation and Storage Facility (WESF) safety analyses. It also provides the technical details and bases necessary to justify the contained results.
Projections of the Population of the United States, by Age, Sex, and Race: 1983 to 2080.
ERIC Educational Resources Information Center
Spencer, Gregory
1984-01-01
Based on assumptions about fertility, mortality, and net immigration trends, statistical tables depict the future U.S. population by age, sex, and race. Figures are based on the July 1, 1982, population estimates and race definitions and are projected using the cohort-component method with alternative assumptions for future fertility, mortality,…
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Hypothesis testing in evolutionary developmental biology: a case study from insect wings.
Jockusch, E L; Ober, K A
2004-01-01
Developmental data have the potential to give novel insights into morphological evolution. Because developmental data are time-consuming to obtain, support for hypotheses often rests on data from only a few distantly related species. Similarities between these distantly related species are parsimoniously inferred to represent ancestral aspects of development. However, with limited taxon sampling, ancestral similarities in developmental patterning can be difficult to distinguish from similarities that result from convergent co-option of developmental networks, which appears to be common in developmental evolution. Using a case study from insect wings, we discuss how these competing explanations for similarity can be evaluated. Two kinds of developmental data have recently been used to support the hypothesis that insect wings evolved by modification of limb branches that were present in ancestral arthropods. This support rests on the assumption that aspects of wing development in Drosophila, including similarities to crustacean epipod patterning, are ancestral for winged insects. Testing this assumption requires comparisons of wing development in Drosophila and other winged insects. Here we review data that bear on this assumption, including new data on the functions of wingless and decapentaplegic during appendage allocation in the red flour beetle Tribolium castaneum.
Can Newton's Third Law Be "Derived" from the Second?
NASA Astrophysics Data System (ADS)
Gangopadhyaya, Asim; Harrington, James
2017-04-01
Newton's laws have engendered much discussion over several centuries. Today, the internet is awash with a plethora of information on this topic. We find many references to Newton's laws, often discussions of various types of misunderstandings and ways to explain them. Here we present an intriguing example that shows an assumption hidden in Newton's third law that is often overlooked. As is well known, the first law defines an inertial frame of reference and the second law determines the acceleration of a particle in such a frame due to an external force. The third law describes forces exerted on each other in a two-particle system, and allows us to extend the second law to a system of particles. Students are often taught that the three laws are independent. Here we present an example that challenges this assumption. At first glance, it seems to show that, at least for a special case, the third law follows from the second law. However, a careful examination of the assumptions demonstrates that is not quite the case. Ultimately, the example does illustrate the significance of the concept of mass in linking Newton's dynamical principles.
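One common textbook version of the argument alluded to above runs as follows for a two-particle system (the article's own example may differ in detail). The step that smuggles in an extra assumption is the first equality, conservation of total momentum, which presumes an isolated pair exchanging momentum with nothing else:

```latex
\frac{d}{dt}\bigl(\mathbf{p}_1 + \mathbf{p}_2\bigr) = 0
\;\;\Longrightarrow\;\;
\frac{d\mathbf{p}_1}{dt} = -\frac{d\mathbf{p}_2}{dt}
\;\;\Longrightarrow\;\;
\mathbf{F}_{\text{on }1} = -\mathbf{F}_{\text{on }2},
```

where the last step applies the second law, F = dp/dt, to each particle separately. The 'derivation' therefore trades the third law for momentum conservation, which is itself an additional assumption.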
Measurement Theory Based on the Truth Values Violates Local Realism
NASA Astrophysics Data System (ADS)
Nagata, Koji
2017-02-01
We investigate the violation factor of the Bell-Mermin inequality. Until now, it has been assumed that the results of measurement are ±1; in this case, the maximum violation factor is 2^{(n-1)/2}. The quantum predictions of the n-partite Greenberger-Horne-Zeilinger (GHZ) state violate the Bell-Mermin inequality by an amount that grows exponentially with n. Recently, a new measurement theory based on truth values was proposed (Nagata and Nakamura, Int. J. Theor. Phys. 55:3616, 2016), in which the values of a measurement outcome are either +1 or 0. Here we use this new measurement theory and consider the multipartite GHZ state. It turns out that the Bell-Mermin inequality is violated by the amount of 2^{(n-1)/2}. Thus the measurement theory based on truth values provides the maximum violation of the Bell-Mermin inequality.
Waveform design for detection of weapons based on signature exploitation
NASA Astrophysics Data System (ADS)
Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian
2010-04-01
We present waveform design based on signature exploitation techniques for improved detection of weapons in urban sensing applications. A single-antenna monostatic radar system is considered. Under the assumption of exact knowledge of the target orientation and, hence, a known impulse response, the matched illumination approach is used for optimal target detection. For the case of unknown target orientation, we analyze the target signatures as random processes and perform signal-to-noise ratio (SNR)-based waveform optimization. Numerical electromagnetic modeling is used to provide the impulse responses of an AK-47 assault rifle for various target aspect angles relative to the radar. Simulation results show an improvement in the SNR at the output of the matched filter receiver for both matched illumination and stochastic waveforms as compared to a chirp waveform of the same duration and energy.
Storm, Lance; Tressoldi, Patrizio E; Utts, Jessica
2013-01-01
Rouder, Morey, and Province (2013) stated that (a) the evidence-based case for psi in Storm, Tressoldi, and Di Risio's (2010) meta-analysis is supported only by a number of studies that used manual randomization, and (b) when these studies are excluded so that only investigations using automatic randomization are evaluated (and some additional studies previously omitted by Storm et al., 2010, are included), the evidence for psi is "unpersuasive." Rouder et al. used a Bayesian approach, and we adopted the same methodology, finding that our case is upheld. Because of recent updates and corrections, we reassessed the free-response databases of Storm et al. using a frequentist approach. We discuss and critique the assumptions and findings of Rouder et al. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Karin, Eyal; Dear, Blake F; Heller, Gillian Z; Crane, Monique F; Titov, Nickolai
2018-04-19
Missing cases following treatment are common in Web-based psychotherapy trials. Without the ability to directly measure and evaluate the outcomes for missing cases, measuring and evaluating the effects of treatment is challenging. Although common, little is known about the characteristics of Web-based psychotherapy participants who present as missing cases, their likely clinical outcomes, or the suitability of different statistical assumptions that can characterize missing cases. Using a large sample of individuals who underwent Web-based psychotherapy for depressive symptoms (n=820), the aim of this study was to explore the characteristics of cases who present as missing cases at posttreatment (n=138) and their likely treatment outcomes, and to compare statistical methods for replacing their missing data. First, common participant and treatment features were tested through binary logistic regression models, evaluating their ability to predict missing cases. Second, the same variables were screened for their ability to increase or impede the rate of symptom change observed following treatment. Third, using recontacted cases at 3-month follow-up to proximally represent the outcomes of missing cases following treatment, various simulated replacement scores were compared and evaluated against observed clinical follow-up scores. Missing cases were predominantly predicted by lower treatment adherence and increased symptoms at pretreatment. Statistical methods that ignored these characteristics can overlook an important clinical phenomenon and consequently produce inaccurate replacement outcomes, with symptom estimates that can deviate from -32% to 70% relative to the observed outcomes of recontacted cases. In contrast, longitudinal statistical methods that adjusted their estimates for missing cases' outcomes by treatment adherence rates and baseline symptom scores resulted in minimal measurement bias (<8%). Certain variables can characterize and predict the likelihood of missing cases and jointly predict lesser clinical improvement. Under such circumstances, individuals with potentially the worst treatment outcomes can become concealed, and failure to adjust for this can lead to substantial clinical measurement bias. Together, this preliminary research suggests that missing cases in Web-based psychotherapeutic interventions may not occur as random events and can be systematically predicted. Critically, at the same time, missing cases may experience outcomes that are distinct and important for a complete understanding of the treatment effect. ©Eyal Karin, Blake F Dear, Gillian Z Heller, Monique F Crane, Nickolai Titov. Originally published in JMIR Mental Health (http://mental.jmir.org), 19.04.2018.
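The first analysis step described above, predicting which cases go missing from treatment adherence and baseline symptoms, can be sketched with a simple logistic regression. The synthetic data, scales, and coefficients below are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort: missingness more likely with low adherence and high baseline symptoms.
n = 820
adherence = rng.integers(1, 9, size=n)          # lessons completed (hypothetical scale)
baseline = rng.normal(15, 5, size=n)            # baseline symptom score (hypothetical scale)
logit = 1.0 - 0.6 * adherence + 0.08 * baseline
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([adherence, baseline]), missing)
print(model.coef_, model.intercept_)
# The fitted probabilities can then inform replacement of missing outcomes
# (e.g. adherence- and baseline-adjusted estimates) rather than treating dropout as random.
```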
Spread of Epidemic on Complex Networks Under Voluntary Vaccination Mechanism
NASA Astrophysics Data System (ADS)
Xue, Shengjun; Ruan, Feng; Yin, Chuanyang; Zhang, Haifeng; Wang, Binghong
Under the assumption that the decision to vaccinate is a voluntary behavior, in this paper we use two forms of risk function to characterize how susceptible individuals estimate the perceived risk of infection. One is the uniform case, where each susceptible individual estimates the perceived risk of infection based only on the density of infection at each time step, so the risk function depends only on the density of infection; the other is the preferential case, where each susceptible individual estimates the perceived risk of infection based not only on the density of infection but also on its own activity/immediate neighbors (in network terminology, the activity or number of immediate neighbors is the degree of the node), so the risk function depends on both the density of infection and the degree of the individual. By investigating these two ways of estimating the risk of infection for susceptible individuals on complex networks, we find that, for the preferential case, the spread of epidemic can be effectively controlled; yet, for the uniform case, the voluntary vaccination mechanism is almost ineffective in controlling the spread of epidemic on networks. Furthermore, given the temporality of some vaccines, the epidemic waves for the two cases also differ. Therefore, our work suggests that the way the perceived risk of infection is estimated determines the decision on vaccination options, and in turn determines the success or failure of the control strategy.
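As a purely illustrative sketch (the functional forms below are hypothetical stand-ins, not the risk functions used in the paper), the two cases can be contrasted as follows, showing how a high-degree node perceives much greater risk in the preferential case:

    # Two illustrative ways a susceptible node might perceive infection risk;
    # the functional forms are hypothetical, chosen only to contrast the two cases.
    def risk_uniform(rho):
        # Uniform case: risk depends only on the current infection density rho.
        return rho

    def risk_preferential(rho, degree):
        # Preferential case: risk also grows with the node's own degree, e.g.
        # the chance that at least one of its neighbours is infected.
        return 1.0 - (1.0 - rho) ** degree

    rho = 0.05
    for k in (2, 10, 50):
        print(k, risk_uniform(rho), round(risk_preferential(rho, k), 3))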
Lopez, Anna Lena; You, Young Ae; Kim, Young Eun; Sah, Binod; Maskery, Brian; Clemens, John
2012-01-01
Abstract Objective To estimate the global burden of cholera using population-based incidence data and reports. Methods Countries with a recent history of cholera were classified as endemic or non-endemic, depending on whether they had reported cholera cases in at least three of the five most recent years. The percentages of the population in each country that lacked access to improved sanitation were used to compute the populations at risk for cholera, and incidence rates from published studies were applied to groups of countries to estimate the annual number of cholera cases in endemic countries. The estimates of cholera cases in non-endemic countries were based on the average numbers of cases reported from 2000 to 2008. Literature-based estimates of cholera case-fatality rates (CFRs) were used to compute the variance-weighted average cholera CFRs for estimating the number of cholera deaths. Findings About 1.4 billion people are at risk for cholera in endemic countries. An estimated 2.8 million cholera cases occur annually in such countries (uncertainty range: 1.4–4.3) and an estimated 87 000 cholera cases occur in non-endemic countries. The incidence is estimated to be greatest in children less than 5 years of age. Every year about 91 000 people (uncertainty range: 28 000 to 142 000) die of cholera in endemic countries and 2500 people die of the disease in non-endemic countries. Conclusion The global burden of cholera, as determined through a systematic review with clearly stated assumptions, is high. The findings of this study provide a contemporary basis for planning public health interventions to control cholera. PMID:22461716
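As a rough arithmetic check, the headline figures above imply the following average rates (this only re-expresses the rounded numbers quoted in the abstract):

\[ \frac{2.8 \times 10^{6}\ \text{cases}}{1.4 \times 10^{9}\ \text{at risk}} \approx 2\ \text{cases per 1{,}000 at-risk persons per year}, \qquad \frac{91\,000\ \text{deaths}}{2.8 \times 10^{6}\ \text{cases}} \approx 3.3\%\ \text{average case fatality}. \]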
Structural and practical identifiability analysis of S-system.
Zhan, Choujun; Li, Benjamin Yee Shing; Yeung, Lam Fat
2015-12-01
In the field of systems biology, biological reaction networks are usually modelled by ordinary differential equations. A sub-class, the S-system representation, is a widely used form of modelling. Existing S-system identification techniques assume that the system itself is always structurally identifiable. However, due to practical limitations, biological reaction networks are often only partially measured. In addition, the captured data only cover a limited trajectory; therefore the data can only be considered a local snapshot of the system responses with respect to the complete set of state trajectories over the entire state space. Hence the estimated model can only reflect partial system dynamics and may not be unique. To improve the identification quality, the structural and practical identifiability of the S-system are studied. The S-system is shown to be identifiable under a set of assumptions. Then, an application to the yeast fermentation pathway was conducted. Two case studies were chosen, where the first case is based on larger state trajectories and the second case on smaller ones. By expanding the dataset so that it spans a relatively larger state space, the uncertainty of the estimated system can be reduced. The results indicated that the initial concentration is related to practical identifiability.
Improving safety culture through the health and safety organization: a case study.
Nielsen, Kent J
2014-02-01
International research indicates that internal health and safety organizations (HSO) and health and safety committees (HSC) do not have the intended impact on companies' safety performance. The aim of this case study at an industrial plant was to test whether the HSO can improve company safety culture by creating more and better safety-related interactions both within the HSO and between HSO members and the shop-floor. A quasi-experimental single case study design based on action research with both quantitative and qualitative measures was used. Based on baseline mapping of safety culture and the efficiency of the HSO three developmental processes were started aimed at the HSC, the whole HSO, and the safety representatives, respectively. Results at follow-up indicated a marked improvement in HSO performance, interaction patterns concerning safety, safety culture indicators, and a changed trend in injury rates. These improvements are interpreted as cultural change because an organizational double-loop learning process leading to modification of the basic assumptions could be identified. The study provides evidence that the HSO can improve company safety culture by focusing on safety-related interactions. © 2013. Published by Elsevier Ltd and National Safety Council.
Millimeter Wave Alternate Route Study.
1981-04-01
processing gains are based upon the assumption that the jammer equally distributes his available power over all the hopping frequencies. If this is true... Example assumptions: 25 GHz hopping range (e.g., 20 GHz to 45 GHz); 10 ms settling time; 0.1 second dwell time - implies 11% increase in channel data... of the architectures presented previously. The assumption that each link has equal probability p of being disrupted (i.e., successfully jammed) seems
Design Considerations for Large Computer Communication Networks,
1976-04-01
particular, we will discuss the last three assumptions in order to motivate some of the models to be considered in this chapter. Independence Assumption... channels. Part (a), again motivated by an earlier remark on deterministic routing, will become more accurate when we include in the model, based on fixed... hierarchical routing, then this assumption appears to be quite acceptable. Part (b) is motivated by the quite symmetrical structure of the networks considered
The Doctor Is In! Diagnostic Analysis.
Jupiter, Daniel C
To make meaningful inferences based on our regression models, we must ensure that we have met the necessary assumptions of these tests. In this commentary, we review these assumptions and those for the t-test and analysis of variance, and introduce a variety of methods, formal and informal, numeric and visual, for assessing conformity with the assumptions. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Causal analysis of ordinal treatments and binary outcomes under truncation by death.
Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua
2017-06-01
It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditional on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in the presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.
ERIC Educational Resources Information Center
American Vocational Association, Alexandria, VA.
This document is a practical guide to demonstrating the value of school-to-careers preparation for all students and to debunking outdated stereotypes and false assumptions surrounding school-to-careers and vocational education programs. Part 1 explains the importance of political and policy advocacy in public education and outlines strategies for…
Robustness of location estimators under t-distributions: a literature review
NASA Astrophysics Data System (ADS)
Sumarni, C.; Sadik, K.; Notodiputro, K. A.; Sartono, B.
2017-03-01
The assumption of normality is commonly used in estimation of parameters in statistical modelling, but this assumption is very sensitive to outliers. The t-distribution is more robust than the normal distribution since the t-distributions have longer tails. The robustness measures of location estimators under t-distributions are reviewed and discussed in this paper. For the purpose of illustration we use the onion yield data which includes outliers as a case study and showed that the t model produces better fit than the normal model.
NASA Astrophysics Data System (ADS)
Kim, Mijin; Kim, Jhoon; Yoon, Jongmin; Chung, Chu-Yong; Chung, Sung-Rae
2017-04-01
In 2010, the Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean, and Meteorological Satellite (COMS), was launched, carrying the Meteorological Imager (MI). The MI measures atmospheric conditions over Northeast Asia (NEA) using a single visible channel centered at 0.675 μm and four IR channels at 3.75, 6.75, 10.8, and 12.0 μm. The visible measurement can also be utilized for the retrieval of aerosol optical properties (AOPs). Since GEO satellite measurements have the advantage of continuous monitoring of AOPs, the spatiotemporal variation of aerosol over NEA can be analyzed using MI observations. Therefore, we developed an algorithm to retrieve aerosol optical depth (AOD) using the visible observation of MI, named the MI Yonsei Aerosol Retrieval Algorithm (YAER). In this study, we investigated the accuracy of MI YAER AOD by comparing the values with long-term AERONET sun-photometer products. The result showed that the MI AODs were significantly overestimated relative to the AERONET values over bright surfaces in low-AOD cases. Because the MI visible channel is centered in the red spectral range, the contribution of the aerosol signal to the measured reflectance is relatively low compared with the surface contribution. Therefore, the AOD error over bright surfaces in low-AOD cases is a fundamental limitation of the algorithm. Meanwhile, the assumption of a background aerosol optical depth (BAOD) can also contribute to retrieval uncertainty. To estimate the surface reflectance while accounting for polluted air conditions over NEA, the BAOD was estimated pixel by pixel from the MODIS dark target (DT) aerosol products. Satellite-based AOD retrieval, however, depends strongly on the accuracy of the surface reflectance estimate, especially in low-AOD cases, and thus the BAOD can inherit the uncertainty of the satellite-based surface reflectance estimation. Therefore, we re-estimated the BAOD using ground-based sun-photometer measurements and investigated the effect of the BAOD assumption. The satellite-based BAOD was significantly higher than the ground-based value over urban areas, resulting in underestimation of the surface reflectance and overestimation of the AOD. The error analysis of the MI AOD also clearly showed sensitivity to cloud contamination. Therefore, improvements to the cloud masking process in the single-channel MI algorithm, as well as modification of the surface reflectance estimation, will be required in future work.
Health and economic impact of PHiD-CV in Canada and the UK: a Markov modelling exercise.
Knerer, Gerhart; Ismaila, Afisi; Pearce, David
2012-01-01
The spectrum of diseases caused by Streptococcus pneumoniae and non-typeable Haemophilus influenzae (NTHi) represents a large burden on healthcare systems around the world. Meningitis, bacteraemia, community-acquired pneumonia (CAP), and acute otitis media (AOM) are vaccine-preventable infectious diseases that can have severe consequences. The health economic model presented here is intended to estimate the clinical and economic impact of vaccinating birth cohorts in Canada and the UK with the 10-valent, pneumococcal non-typeable Haemophilus influenzae protein D conjugate vaccine (PHiD-CV) compared with the newly licensed 13-valent pneumococcal conjugate vaccine (PCV-13). The model described herein is a Markov cohort model built to simulate the epidemiological burden of pneumococcal- and NTHi-related diseases within birth cohorts in the UK and Canada. Base-case assumptions include estimates of vaccine efficacy and NTHi infection rates that are based on published literature. The model predicts that the two vaccines will provide a broadly similar impact on all-cause invasive disease and CAP under base-case assumptions. However, PHiD-CV is expected to provide a substantially greater reduction in AOM compared with PCV-13, offering additional savings of Canadian $9.0 million and £4.9 million in discounted direct medical costs in Canada and the UK, respectively. The main limitations of the study are the difficulties in modelling indirect vaccine effects (herd effect and serotype replacement), the absence of PHiD-CV- and PCV-13-specific efficacy data and a lack of comprehensive NTHi surveillance data. Additional limitations relate to the fact that the transmission dynamics of pneumococcal serotypes have not been modelled, nor has antibiotic resistance been accounted for in this paper. This cost-effectiveness analysis suggests that, in Canada and the UK, PHiD-CV's potential to protect against NTHi infections could provide a greater impact on overall disease burden than the additional serotypes contained in PCV-13.
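For readers unfamiliar with the general technique, a minimal Markov cohort model sketch is shown below in Python; the states, transition probabilities, costs, and utilities are hypothetical placeholders and are not taken from the published PHiD-CV/PCV-13 model:

    import numpy as np

    # Illustrative three-state Markov cohort model (Well, AOM, Dead).
    states = ["Well", "AOM", "Dead"]
    P = np.array([[0.90, 0.095, 0.005],   # annual transition probabilities (hypothetical)
                  [0.85, 0.145, 0.005],
                  [0.00, 0.000, 1.000]])
    cost = np.array([0.0, 250.0, 0.0])    # cost per cycle in each state (hypothetical)
    utility = np.array([1.00, 0.90, 0.0]) # QALY weight per cycle (hypothetical)
    discount = 0.03

    cohort = np.array([1.0, 0.0, 0.0])    # birth cohort starts in "Well"
    total_cost = total_qaly = 0.0
    for year in range(10):
        total_cost += (cohort @ cost) / (1 + discount) ** year
        total_qaly += (cohort @ utility) / (1 + discount) ** year
        cohort = cohort @ P               # advance the cohort one cycle
    print(round(total_cost, 1), round(total_qaly, 2))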
Parish, William J; Aldridge, Arnie; Allaire, Benjamin; Ekwueme, Donatus U; Poehler, Diana; Guy, Gery P; Thomas, Cheryll C; Trogdon, Justin G
2017-11-01
To assess the burden of excessive alcohol use, researchers routinely estimate alcohol-attributable fractions (AAFs). However, under-reporting in survey data can bias these estimates. We present an approach that adjusts for under-reporting in the estimation of AAFs, particularly within subgroups. This framework is a refinement of a previous method by Rehm et al. We use a measurement error model to derive the 'true' alcohol distribution from a 'reported' alcohol distribution. The 'true' distribution leverages per-capita sales data to identify the distribution average and then identifies the shape of the distribution with self-reported survey data. Data are from the National Alcohol Survey (NAS), the National Household Survey on Drug Abuse (NHSDA) and the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). We compared our approach with previous approaches by estimating the AAF of female breast cancer cases. Compared with Rehm et al.'s approach, our refinement performs similarly under a gamma assumption. For example, among females aged 18-25 years, the two approaches produce estimates from NHSDA that are within a percentage point. However, relaxing the gamma assumption generally produces more conservative estimates. For example, among females aged 18-25 years, estimates from NHSDA based on the best-fitting distribution attribute only 19.33% of breast cancer cases to alcohol, a much smaller proportion than the gamma-based estimates of approximately 28%. A refinement of Rehm et al.'s approach to adjusting for under-reporting in the estimation of alcohol-attributable fractions provides more flexibility. This flexibility can avoid biases associated with failing to account for underlying differences in alcohol consumption patterns across different study populations. Comparisons of our refinement with Rehm et al.'s approach show that results are similar when a gamma distribution is assumed. However, results are appreciably lower when the best-fitting distribution is chosen versus gamma-based results. © 2017 Society for the Study of Addiction.
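A hedged sketch of the general adjustment idea (not the authors' exact estimator): rescale self-reported consumption so its mean matches the per-capita sales figure, fit a gamma distribution, and evaluate a common continuous AAF formula. The survey sample, coverage value, and relative-risk curve below are hypothetical.

    import numpy as np
    from scipy import stats, integrate

    reported = np.random.default_rng(1).gamma(shape=1.2, scale=8.0, size=5000)  # g/day, simulated survey
    per_capita_mean = 22.0                       # grams/day implied by sales data (hypothetical)
    adjusted = reported * per_capita_mean / reported.mean()

    shape, _, scale = stats.gamma.fit(adjusted, floc=0)
    density = stats.gamma(shape, scale=scale).pdf

    def rr(x):
        # Hypothetical dose-response curve for female breast cancer risk.
        return np.exp(0.01 * x)

    # Simplified continuous AAF formula with non-drinkers as the reference group.
    excess, _ = integrate.quad(lambda x: density(x) * (rr(x) - 1.0), 0, 150)
    aaf = excess / (excess + 1.0)
    print(f"AAF = {aaf:.1%}")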
Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.
Morgan, Timothy M; Case, L Douglas
2013-07-05
In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
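A small sketch reproducing the quoted reductions, under the assumption that the relevant design factor is the variance of the baseline-adjusted mean of k equally correlated follow-up measures (compound symmetry with common correlation rho); the most conservative rho is found numerically:

    import numpy as np

    def variance_factor(rho, k):
        # Variance of the baseline-adjusted mean of k equally correlated
        # follow-up measures, relative to a single unadjusted measure,
        # under compound symmetry with common correlation rho (assumed form).
        return (1 + (k - 1) * rho) / k - rho ** 2

    for k in (2, 3, 4):
        rhos = np.linspace(0, 1, 10001)
        worst = max(variance_factor(r, k) for r in rhos)  # most conservative rho
        print(k, f"sample size reduction vs. two-sample t-test: {1 - worst:.0%}")
    # Prints roughly 44%, 56%, and 61% for k = 2, 3, and 4, matching the abstract.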
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Prairie, J.; Pruitt, T.; Rajagopalan, B.; Woodhouse, C.
2008-12-01
Water resources adaptation planning under climate change involves making assumptions about probabilistic water supply conditions, which are linked to a given climate context (e.g., instrument records, paleoclimate indicators, projected climate data, or a blend of these). Methods have been demonstrated to associate water supply assumptions with any of these climate information types. Additionally, demonstrations have been offered that represent these information types in a scenario-rich (ensemble) planning framework, either via ensembles (e.g., a survey of many climate projections) or stochastic modeling (e.g., based on instrument records or paleoclimate indicators). If the planning goal involves using a hydrologic ensemble that jointly reflects paleoclimate (e.g., lower-frequency variations) and projected climate information (e.g., monthly to annual trends), methods are required to guide how these information types might be translated into water supply assumptions. However, even if such a method exists, there is a lack of understanding of how such a hydrologic ensemble might differ from ensembles developed relative to paleoclimate or projected climate information alone. This research explores two questions: (1) how might paleoclimate and projected climate information be blended into a planning hydrologic ensemble, and (2) how does a planning hydrologic ensemble differ when associated with the individual climate information types (i.e., instrument records, paleoclimate, projected climate, or a blend of the latter two). Case study basins include the Gunnison River Basin in Colorado and the Missouri River Basin above Toston in Montana. The presentation will highlight ensemble development methods by information type and a comparison of ensemble results.
Postmodernity and a hypertensive patient: rescuing value from nihilism.
Smith, S
1998-01-01
Much of postmodern philosophy questions the assumptions of Modernity, that period in the history of the Western world since the Enlightenment. These assumptions are that truth is discoverable through human reason; that certain knowledge is possible; and furthermore, that such knowledge will provide a basis for the ineluctable progress of Mankind. The Enlightenment project is underwritten by the conviction that knowledge gained through the scientific method is secure. In so far as biomedicine inherits these assumptions it becomes fair game for postmodern deconstruction. Today, perhaps more than ever, plural values compete, and contradictory approaches to health, for instance, garner support and acquire supremacy through consumer choice and media manipulation rather than evidence-based science. Many doctors feel a tension between meeting the needs of the patient face to face, and working towards the broader health needs of the public at large. But if the very foundations of medical science are questioned, by patients, or by doctors themselves, wherein lies the value of their work? This paper examines the issues that the anti-foundationalist thrust of postmodernism raises, in the light of a case of mild hypertension. The strict application of medical protocol, derived from a nomothetic, statistical perspective, seems unlikely to furnish value in the treatment of an individual. The 'anything goes', consumerist approach, however, fares no better. The author argues that whilst value cannot depend on any rationally predetermined parameters, it can be rescued, and emerges from the process of the meeting with the patient. PMID:9549679
The Power and Challenge of Facilitating Reframing: Applications in Teaching Negotiation
ERIC Educational Resources Information Center
Cannon, Mark D.
2017-01-01
Reframing is the ability to identify and significantly change assumptions or perspectives. It is a powerful skill but can be difficult to learn and apply. This article presents two experiential exercises for teaching reframing in negotiations: the Rental Home case and the Multiplex Saw case. These exercises are designed to produce frame-shifting…
Travelling "the Caledonian Way": Education Policy Learning and the Making of Europe
ERIC Educational Resources Information Center
Grek, Sotiria
2015-01-01
The paper examines the case of education policy learning in Europe and argues that, contrary to dominant assumptions, education is a fruitful area for the analysis of Europeanising processes. More specifically, an examination of the case of the Scottish school inspectorate's European exchanges is useful in relation to the study of international…
ERIC Educational Resources Information Center
Lee, Barbara A.
1990-01-01
Questions assumptions by Schoenfeld and Zirkel in a study reviewing gender discrimination cases against institutions of higher education. Critiques the methodology used in that study, cautions about the overall utility of "outcomes analysis," and reports more promising routes of empirical legal research. (15 references) (MLF)
An unusual birthmark case thought to be linked to a person who had previously died.
Keil, H H; Tucker, J B
2000-12-01
The following case report describes a Burmese subject with an unusual birthmark and birth defects thought by local people to be linked to events surrounding the death of his mother's first husband. The nature of the link is explored, including how the assumption of a linkage could have led to subsequent events.
ERIC Educational Resources Information Center
Welner, Kevin G.
This book challenges fundamental assumptions about the opportunities for equity-minded educational reform, using data from case studies of districts nationwide and their experiences with court-ordered detracking. The case studies show how white, upper middle class parents exercised a disproportionate amount of power in local school policy making…
ERIC Educational Resources Information Center
Ben-Tsur, Dalia
2009-01-01
This paper explores the impact of conflict on international student mobility. Through an examination of undergraduate, international students studying in Israel, this case study questions how and if a situation of ongoing violent conflict affects international student travel decisions to study in a host country. Contrary to assumptions of…
Floros, Nikolaos; Papadakis, Marios; Schelzig, Hubert; Oberhuber, Alexander
2018-03-10
Over the last three decades, the development of systematic, protocol-based algorithms and advances in available diagnostic tests have become indispensable parts of practising medicine. Naturally, despite the implementation of meticulous protocols involving diagnostic tests or even trials of empirical therapies, the cause of a patient's symptoms may still not be obvious. We herein report a case of chronic back pain which took about 5 years to be accurately diagnosed. The case challenges diagnostic assumptions and sets the ground for a discussion of the diagnostic reasoning pitfalls and heuristic biases that misled the caring physicians and cost our patient years of low quality of life. This case serves as an example of how anchoring heuristics can interfere with the diagnostic process of a complex and rare entity when combined with a concurrent, potentially life-threatening condition. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
The impact of management science on political decision making
NASA Technical Reports Server (NTRS)
White, M. J.
1971-01-01
The possible impact on public policy and organizational decision making of operations research/management science (OR/MS) is discussed. Criticisms based on the assumption that OR/MS will have influence on decision making and criticisms based on the assumption that it will have no influence are described. New directions in the analysis of analysis and in thinking about policy making are also considered.
NASA Technical Reports Server (NTRS)
Hildebrand, Francis B
1943-01-01
A mathematical procedure is herein developed for obtaining exact solutions of shear-lag problems in flat panels and box beams; the method is based on the assumption that the amount of stretching of the sheets in the direction perpendicular to the direction of essential normal stresses is negligible. Explicit solutions, including the treatment of cut-outs, are given for several cases and numerical results are presented in graphic and tabular form. The general theory is presented in a form from which further solutions can be readily obtained. The extension of the theory to cover certain cases of non-uniform cross section is indicated. Although the solutions are obtained in terms of infinite series, the present developments differ from those previously given in that, in practical cases, the series usually converge so rapidly that sufficient accuracy is afforded by a small number of terms. Comparisons are made in several cases between the present results and the corresponding solutions obtained by approximate procedures devised by Reissner and by Kuhn and Chiarito.
Cost-Effectiveness Analysis of Morcellation Hysterectomy for Myomas.
Bortoletto, Pietro; Einerson, Brett D; Miller, Emily S; Milad, Magdy P
2015-01-01
To estimate the cost-effectiveness of eliminating morcellation in the surgical treatment of leiomyomas from a societal perspective. Cost-effectiveness analysis. Not applicable. A theoretical cohort of women undergoing hysterectomy for myoma disease large enough to require morcellation. None. None. A decision analysis model was constructed using probabilities, costs, and utility data from published sources. A cost-effectiveness analysis analyzing both quality-adjusted life years (QALYs) and cases of disseminated cancer was performed to determine the incremental cost-effectiveness ratio (ICER) of eliminating morcellation as a tool in the surgical treatment of leiomyomas. Costs and utilities were discounted using standard methodology. The base case included health care system costs and costs incurred by the patient for surgery-related disability. One-way sensitivity analyses were performed to assess the effect of various assumptions. The cost to prevent 1 case of disseminated cancer was $10 540 832. A strategy of nonmorcellation hysterectomy via laparotomy cost more ($30 359.92 vs $20 853.15) and yielded more QALYs (21.284 vs 21.280) relative to morcellation hysterectomy. The ICER for nonmorcellation hysterectomy compared with morcellation hysterectomy was $2 184 172 per QALY. Health care costs (prolonged hospitalizations) and costs to patients of prolonged time away from work were the primary drivers of the cost differential between the 2 strategies. Even when the incidence of occult sarcoma in leiomyoma surgery was increased to twice that reported in the literature (0.98%), the ICER for nonmorcellation hysterectomy was $644 393.30. Eliminating morcellation hysterectomy as a treatment for myomas is not cost-effective under a wide variety of probability and cost assumptions. Performing laparotomy for all patients who might otherwise be candidates for morcellation hysterectomy is a costly policy from a societal perspective. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.
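For reference, the quoted ICER is simply the incremental cost divided by the incremental QALYs; plugging in the rounded values reported above gives roughly $2.4 million per QALY, slightly different from the published $2 184 172 because the QALY difference is rounded to three decimals:

\[ \mathrm{ICER} = \frac{\Delta C}{\Delta E} = \frac{\$30\,359.92 - \$20\,853.15}{21.284 - 21.280\ \text{QALYs}} \approx \$2.4\ \text{million per QALY}. \]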
Diagnosis of exercise-induced anaphylaxis: current insights.
Pravettoni, Valerio; Incorvaia, Cristoforo
2016-01-01
Exercise-induced anaphylaxis (EIAn) is defined as the occurrence of anaphylactic symptoms (skin, respiratory, gastrointestinal, and cardiovascular symptoms) after physical activity. In about a third of cases, cofactors, such as food intake, temperature (warm or cold), and drugs (especially nonsteroidal anti-inflammatory drugs) can be identified. When the associated cofactor is food ingestion, the correct diagnosis is food-dependent EIAn (FDEIAn). The literature describes numerous reports of FDEIAn after intake of very different foods, from vegetables and nuts to meats and seafood. One of the best-characterized types of FDEIAn is that due to ω5-gliadin of wheat, though cases of FDEIAn after wheat ingestion due to sensitization to wheat lipid transfer protein (LTP) are described. Some pathophysiological mechanisms underlying EIAn have been hypothesized, such as increase/alteration in gastrointestinal permeability, alteration of tissue transglutaminase promoting IgE cross-linking, enhanced expression of cytokines, redistribution of blood during physical exercise leading to altered mast-cell degranulation, and also changes in the acid-base balance. Nevertheless, until now, none of these hypotheses has been validated. The diagnosis of EIAn and FDEIAn is achieved by means of a challenge, with physical exercise alone for EIAn, and with ingestion of the suspected food followed by physical exercise for FDEIAn; in cases of doubtful results, a double-blind placebo-controlled combined food-exercise challenge should be performed. The prevention of this particular kind of anaphylaxis is the avoidance of the specific trigger, ie, physical exercise for EIAn and ingestion of the culprit food before exercise for FDEIAn, and in general the avoidance of the recognized cofactors. Patients must be supplied with an epinephrine autoinjector, as epinephrine has been clearly recognized as the first-line intervention for anaphylaxis.
Extensions of criteria for evaluating risk prediction models for public health applications.
Pfeiffer, Ruth M
2013-04-01
We recently proposed two novel criteria to assess the usefulness of risk prediction models for public health applications. The proportion of cases followed, PCF(p), is the proportion of individuals who will develop disease who are included in the proportion p of individuals in the population at highest risk. The proportion needed to follow-up, PNF(q), is the proportion of the general population at highest risk that one needs to follow in order that a proportion q of those destined to become cases will be followed (Pfeiffer, R.M. and Gail, M.H., 2011. Two criteria for evaluating risk prediction models. Biometrics 67, 1057-1065). Here, we extend these criteria in two ways. First, we introduce two new criteria by integrating PCF and PNF over a range of values of q or p to obtain iPCF, the integrated PCF, and iPNF, the integrated PNF. A key assumption in the previous work was that the risk model is well calibrated. This assumption also underlies novel estimates of iPCF and iPNF based on observed risks in a population alone. The second extension is to propose and study estimates of PCF, PNF, iPCF, and iPNF that are consistent even if the risk models are not well calibrated. These new estimates are obtained from case-control data when the outcome prevalence in the population is known, and from cohort data, with baseline covariates and observed health outcomes. We study the efficiency of the various estimates and propose and compare tests for comparing two risk models, both of which were evaluated in the same validation data.
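A minimal sketch of how PCF(p) and PNF(q) can be computed from a vector of predicted risks and observed outcomes (the toy data, function names, and the well-calibrated simulation below are assumptions for illustration only):

    import numpy as np

    def pcf(risks, cases, p):
        # Proportion of cases captured within the top fraction p of the
        # population ranked by predicted risk.
        order = np.argsort(risks)[::-1]
        n_top = int(np.ceil(p * len(risks)))
        return cases[order][:n_top].sum() / cases.sum()

    def pnf(risks, cases, q):
        # Smallest fraction of the population, taken from the highest risks,
        # that captures a proportion q of the cases.
        order = np.argsort(risks)[::-1]
        cum_cases = np.cumsum(cases[order]) / cases.sum()
        idx = np.searchsorted(cum_cases, q)
        return (idx + 1) / len(risks)

    # Hypothetical toy data: predicted risks and observed outcomes.
    rng = np.random.default_rng(0)
    risks = rng.beta(1, 9, size=10_000)
    cases = rng.binomial(1, risks)             # well calibrated by construction
    print(pcf(risks, cases, 0.20))             # share of cases in the top 20%
    print(pnf(risks, cases, 0.80))             # fraction to follow for 80% of cases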
Grosse, Scott D; Berry, Robert J; Mick Tilford, J; Kucik, James E; Waitzman, Norman J
2016-05-01
Although fortification of food with folic acid has been calculated to be cost saving in the U.S., updated estimates are needed. This analysis calculates new estimates from the societal perspective of net cost savings per year associated with mandatory folic acid fortification of enriched cereal grain products in the U.S. that was implemented during 1997-1998. Estimates of annual numbers of live-born spina bifida cases in 1995-1996 relative to 1999-2011 based on birth defects surveillance data were combined during 2015 with published estimates of the present value of lifetime direct costs updated in 2014 U.S. dollars for a live-born infant with spina bifida to estimate avoided direct costs and net cost savings. The fortification mandate is estimated to have reduced the annual number of U.S. live-born spina bifida cases by 767, with a lower-bound estimate of 614. The present value of mean direct lifetime cost per infant with spina bifida is estimated to be $791,900, or $577,000 excluding caregiving costs. Using a best estimate of numbers of avoided live-born spina bifida cases, fortification is estimated to reduce the present value of total direct costs for each year's birth cohort by $603 million more than the cost of fortification. A lower-bound estimate of cost savings using conservative assumptions, including the upper-bound estimate of fortification cost, is $299 million. The estimates of cost savings are larger than previously reported, even using conservative assumptions. The analysis can also inform assessments of folic acid fortification in other countries. Published by Elsevier Inc.
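A back-of-the-envelope check of the best estimate, using only the rounded figures quoted above (the implied fortification cost is our inference from those figures, not a number reported by the study):

\[ 767 \times \$791\,900 \approx \$607\ \text{million in avoided direct costs}, \qquad \$607\ \text{million} - \$603\ \text{million net savings} \approx \$4\ \text{million implied annual cost of fortification}. \]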
Parental Dynamics--Their Role in Learning Disabilities
ERIC Educational Resources Information Center
Abrams, Jules C.
1970-01-01
Suggests that the interaction of parents with each other and with their child significantly affects the latter's development. A discussion of case studies of brain-damaged and dyslexic children supports the assumption. Bibliography. (RW)
Identification of Extraterrestrial Microbiology
NASA Technical Reports Server (NTRS)
Flynn, Michael; Rasky, Daniel J. (Technical Monitor)
1998-01-01
Many of the key questions addressed in the field of Astrobiology are based upon the assumption that life exists, or at one time existed, in locations throughout the universe. However, this assumption is just that, an assumption. No definitive proof exists. On Earth, life has been found to exist in many diverse environments. We believe that this tendency towards diversity supports the assumption that life could exist throughout the universe. This paper provides a summary of several innovative techniques for the detection of extraterrestrial life forms. The primary questions addressed are: does life currently exist beyond Earth, and if it does, is that life evolutionarily related to life on Earth?
Nonstationary oscillations in gyrotrons revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumbrajs, O., E-mail: olgerts.dumbrajs@lu.lv; Kalis, H., E-mail: harijs.kalis@lu.lv
2015-05-15
Development of gyrotrons requires careful understanding of different regimes of gyrotron oscillations. It is known that, in the planes of the generalized gyrotron variables (cyclotron resonance mismatch versus dimensionless current, or cyclotron resonance mismatch versus dimensionless interaction length), complicated alternating sequences of regions of stationary, periodic, automodulation, and chaotic oscillations exist. In the past, these regions were investigated on the supposition that the transit time of electrons through the interaction space is much shorter than the cavity decay time. This assumption is valid for short and/or high diffraction quality resonators. However, in the case of long and/or low diffraction quality resonators, which are often utilized, this assumption is no longer valid. In such a case, a different mathematical formalism has to be used for studying nonstationary oscillations. One example of such a formalism is described in the present paper.
A STRICTLY CONTRACTIVE PEACEMAN–RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING
BINGSHENG, HE; LIU, HAN; WANG, ZHAORAN; YUAN, XIAOMING
2014-01-01
In this paper, we focus on the application of the Peaceman–Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas–Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing. PMID:25620862
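For concreteness, a sketch of the strictly contractive PRSM iteration for \(\min\{f(x)+g(y): Ax+By=b\}\) with penalty \(\beta>0\); the notation here is ours and the exact update form should be checked against the paper. The relaxation factor \(\alpha\in(0,1)\) is attached to both multiplier updates, and \(\alpha=1\) recovers the original PRSM:

\[
\begin{aligned}
x^{k+1} &= \arg\min_x \Big\{ f(x) + \tfrac{\beta}{2}\,\big\| Ax + By^{k} - b - \lambda^{k}/\beta \big\|^2 \Big\},\\
\lambda^{k+1/2} &= \lambda^{k} - \alpha\beta\,\big(Ax^{k+1} + By^{k} - b\big),\\
y^{k+1} &= \arg\min_y \Big\{ g(y) + \tfrac{\beta}{2}\,\big\| Ax^{k+1} + By - b - \lambda^{k+1/2}/\beta \big\|^2 \Big\},\\
\lambda^{k+1} &= \lambda^{k+1/2} - \alpha\beta\,\big(Ax^{k+1} + By^{k+1} - b\big).
\end{aligned}
\]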
Botallo's error, or the quandaries of the universality assumption.
Bartolomeo, Paolo; Seidel Malkinson, Tal; de Vito, Stefania
2017-01-01
One of the founding principles of human cognitive neuroscience is the so-called universality assumption, the postulate that neurocognitive mechanisms do not show major differences among individuals. Without negating the importance of the universality assumption for the development of cognitive neuroscience, or the importance of single-case studies, here we aim at stressing the potential dangers of interpreting the pattern of performance of single patients as conclusive evidence concerning the architecture of the intact neurocognitive system. We take an example from the case of Leonardo Botallo, an Italian surgeon of the Renaissance period, who claimed to have discovered a new anatomical structure of the adult human heart. Unfortunately, Botallo's discovery was erroneous, because what he saw in the few samples he examined was in fact the anomalous persistence of a fetal structure. Botallo's error is a reminder of the necessity to always strive for replication, despite the major hindrance of a publication system heavily biased towards novelty. In the present paper, we briefly discuss variations and anomalies in human brain anatomy and introduce the issue of variability in cognitive neuroscience. We then review some examples of the impact on cognition of individual variations in (1) brain structure, (2) brain functional organization and (3) brain damage. We finally discuss the importance and limits of single-case studies in the neuroimaging era, outline potential ways to deal with individual variability, and draw some general conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Source term model evaluations for the low-level waste facility performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yim, M.S.; Su, S.I.
1995-12-31
The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.
Swimming and other activities: applied aspects of fish swimming performance
Castro-Santos, Theodore R.; Farrell, A.P.
2011-01-01
Human activities such as hydropower development, water withdrawals, and commercial fisheries often put fish species at risk. Engineered solutions designed to protect species or their life stages are frequently based on assumptions about swimming performance and behaviors. In many cases, however, the appropriate data to support these designs are either unavailable or misapplied. This article provides an overview of the state of knowledge of fish swimming performance – where the data come from and how they are applied – identifying both gaps in knowledge and common errors in application, with guidance on how to avoid repeating mistakes, as well as suggestions for further study.
voom: precision weights unlock linear model analysis tools for RNA-seq read counts
2014-01-01
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
Heat and mass transfer analogy for condensation of humid air in a vertical channel
NASA Astrophysics Data System (ADS)
Desrayaud, G.; Lauriat, G.
This study examines energy transport associated with liquid film condensation in natural convection flows driven by differences in density due to temperature and concentration gradients. The condensation problem is based on the thin-film assumptions. The most common compositional gradient, which is encountered in humid air at ambient temperature, is considered. A steady laminar Boussinesq flow of an ideal gas-vapor mixture is studied for the case of a vertical parallel-plate channel. New correlations for the latent and sensible Nusselt numbers are established, and the heat and mass transfer analogy between the sensible Nusselt number and the Sherwood number is demonstrated.
[Media, cloning, and bioethics].
Costa, S I; Diniz, D
2000-01-01
This article was based on an analysis of three hundred articles from mainstream Brazilian periodicals over a period of eighteen months, beginning with the announcement of the Dolly case in February 1997. There were two main objectives: to outline the moral constants in the press associated with the possibility of cloning human beings and to identify some of the moral assumptions concerning scientific research with non-human animals that were published carelessly by the media. The authors conclude that there was a haphazard spread of fear concerning the cloning of human beings rather than an ethical debate on the issue, and that there is a serious gap between bioethical reflections and the Brazilian media.
[When is the prescription of prismatic eyeglasses reasonable?].
Kommerell, G
2014-03-01
Prismatic glasses are used to deflect rays of light. In ophthalmology, prisms are mainly used to correct double vision caused by strabismus acquired after early childhood. In congenital or infantile strabismus, the image of the deviated eye is usually suppressed so that double vision does not occur and prismatic glasses are not indicated. Latent strabismus is very common but only rarely leads to double vision or asthenopic symptoms, so correction with prismatic glasses is indicated only in exceptional cases. The "Measuring and Correcting Methodology after H.-J. Haase" is based on flawed assumptions and therefore cannot be recommended for the prescription of prisms.
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Insights into horizontal canal benign paroxysmal positional vertigo from a human case report.
Aron, Margaret; Bance, Manohar
2013-12-01
For horizontal canal benign paroxysmal positional vertigo, determination of the pathologic side is difficult and based on many physiological assumptions. This article reports findings on a patient who had one dysfunctional inner ear and who presented with horizontal canal benign paroxysmal positional vertigo, giving us a relatively pure model for observing nystagmus arising in a subject in whom the affected side is known a priori. It is an interesting human model corroborating theories of nystagmus generation in this pathology and also serves to validate Ewald's second law in a living human subject. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lijuan; Gonder, Jeff; Burton, Evan
This study evaluates the costs and benefits associated with the use of a plug-in hybrid electric bus and determines the cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep analysis was performed over a number of different battery sizes, charging powers, and charging stations. The net present value was calculated for each vehicle design and provided the basis for the design evaluation. In all cases, given present-day economic assumptions, the conventional bus achieved the lowest net present value while the optimal plug-in hybrid electric bus scenario reached lower lifetime costs than the hybrid electric bus. The study also performed parameter sensitivity analysis under low market potential assumptions and high market potential assumptions. The net present value of the plug-in hybrid electric bus is close to that of the conventional bus.
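A minimal net-present-value sketch of the comparison described above (the purchase prices, operating costs, discount rate, and service life below are hypothetical, not the study's economic assumptions):

    # Lifetime cost expressed as a present value (lower is better); all inputs hypothetical.
    def npv(upfront_cost, annual_cost, rate=0.03, years=12):
        return upfront_cost + sum(annual_cost / (1 + rate) ** t
                                  for t in range(1, years + 1))

    buses = {
        "conventional":    npv(upfront_cost=450_000, annual_cost=60_000),
        "hybrid electric": npv(upfront_cost=600_000, annual_cost=45_000),
        "plug-in hybrid":  npv(upfront_cost=650_000, annual_cost=40_000),
    }
    for name, cost in sorted(buses.items(), key=lambda kv: kv[1]):
        print(f"{name}: ${cost:,.0f} lifetime cost (present value)")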
Faith and reason and physician-assisted suicide.
Kaczor, Christopher
1998-08-01
Aquinas's conception of the relationship of faith and reason calls into question the arguments and some of the conclusions advanced in contributions to the debate on physician-assisted suicide by David Thomasma and H. Tristram Engelhardt. An understanding of the nature of theology as based on revelation calls into question Thomasma's theological argument in favor of physician-assisted suicide based on the example of Christ and the martyrs. On the other hand, unaided reason calls into question his assumptions about the nature of death as in some cases a good for the human person. Finally, if Aquinas is right about the relationship of faith and reason, Engelhardt's sharp contrast between "Christian" and "secular" approaches to physician-assisted suicide needs reconsideration, although his conclusions about physician-assisted suicide would find support.
FLRW Cosmology from Yang-Mills Gravity
NASA Astrophysics Data System (ADS)
Katz, Daniel
2013-04-01
We extend to basic cosmology the subject of Yang-Mills gravity - a theory of gravity based on local translational gauge invariance in flat spacetime. It has been shown that this particular gauge invariance leads to tensor factors in the macroscopic limit of the equations of motion of particles which play the same role as the metric tensor of General Relativity. The assumption that this "effective metric" tensor takes on the standard FLRW form is our starting point. Equations analogous to the Friedmann equations are derived and then solved in closed form for the three special cases of a universe dominated by 1) matter, 2) radiation, and 3) dark energy. We find that the solutions for the scale factor are similar to, but distinct from, those found in the corresponding GR-based treatment.
Land-use change and greenhouse gas emissions from corn and cellulosic ethanol
2013-01-01
Background The greenhouse gas (GHG) emissions that may accompany land-use change (LUC) from increased biofuel feedstock production are a source of debate in the discussion of drawbacks and advantages of biofuels. Estimates of LUC GHG emissions focus mainly on corn ethanol and vary widely. Increasing the understanding of LUC GHG impacts associated with both corn and cellulosic ethanol will inform the on-going debate concerning their magnitudes and sources of variability. Results In our study, we estimate LUC GHG emissions for ethanol from four feedstocks: corn, corn stover, switchgrass, and miscanthus. We use new computable general equilibrium (CGE) results for worldwide LUC. U.S. domestic carbon emission factors are from state-level modelling with a surrogate CENTURY model and U.S. Forest Service data. This paper investigates the effect of several key domestic lands carbon content modelling parameters on LUC GHG emissions. International carbon emission factors are from the Woods Hole Research Center. LUC GHG emissions are calculated from these LUCs and carbon content data with Argonne National Laboratory’s Carbon Calculator for Land Use Change from Biofuels Production (CCLUB) model. Our results indicate that miscanthus and corn ethanol have the lowest (−10 g CO2e/MJ) and highest (7.6 g CO2e/MJ) LUC GHG emissions under base case modelling assumptions. The results for corn ethanol are lower than corresponding results from previous studies. Switchgrass ethanol base case results (2.8 g CO2e/MJ) were the most influenced by assumptions regarding converted forestlands and the fate of carbon in harvested wood products. They are greater than miscanthus LUC GHG emissions because switchgrass is a lower-yielding crop. Finally, LUC GHG emissions for corn stover are essentially negligible and insensitive to changes in model assumptions. Conclusions This research provides new insight into the influence of key carbon content modelling variables on LUC GHG emissions associated with the four bioethanol pathways we examined. Our results indicate that LUC GHG emissions may have a smaller contribution to the overall biofuel life cycle than previously thought. Additionally, they highlight the need for future advances in LUC GHG emissions estimation including improvements to CGE models and aboveground and belowground carbon content data. PMID:23575438
ERIC Educational Resources Information Center
Burling, Robbins
Aspects of second language learning and instruction are explored in order to develop a rationale for a comprehension-based approach to language instruction. Eight characteristic pedagogical assumptions are critically examined, including assumptions regarding the role of grammar, age differences in learning ability, the priority given to each of…
Pseudo-incompressible, finite-amplitude gravity waves: wave trains and stability
NASA Astrophysics Data System (ADS)
Schlutow, Mark; Klein, Rupert
2017-04-01
Based on weak asymptotic WKB-like solutions for two-dimensional atmospheric gravity waves (GWs), traveling wave solutions (wave trains) are derived and analyzed with respect to stability. A systematic multiple-scale analysis using the ratio of the dominant wavelength and the scale height as a scale separation parameter is applied to the fully compressible Euler equations. A distinguished limit, favorable for GWs close to static instability, reveals that pseudo-incompressible rather than Boussinesq theory applies. A spectral expansion including a mean flow, combined with the additional WKB assumption of slowly varying phases and amplitudes, is used to find general weak asymptotic solutions. This ansatz allows for arbitrarily strong, non-uniform stratification and holds even for finite-amplitude waves. It is deduced that wave trains as leading-order solutions can only exist if either some non-uniform background stratification is given but the wave train propagates only horizontally, or if the wave train velocity vector is given but the background is isothermal. For the first case, general analytical solutions are obtained that may be used to model mountain lee waves. For the second case, with the additional assumption of horizontal periodicity, upward-propagating wave train fronts were found. These wave train fronts modify the mean flow beyond the non-acceleration theorem. Stability analysis reveals that they are intrinsically modulationally unstable. The range of validity for the scale separation parameter was tested with fully nonlinear simulations. Even for large values, excellent agreement with the theory was found.
Missing CD4+ cell response in randomized clinical trials of maraviroc and dolutegravir.
Cuffe, Robert; Barnett, Carly; Granier, Catherine; Machida, Mitsuaki; Wang, Cunshan; Roger, James
2015-10-01
Missing data can compromise inferences from clinical trials, yet the topic has received little attention in the clinical trial community. Shortcomings of commonly used methods for analyzing studies with missing data (complete case, last- or baseline-observation carried forward) have been highlighted in a recent Food and Drug Administration-sponsored report. This report recommends how to mitigate the issues associated with missing data. We present an example of the proposed concepts using data from recent clinical trials. CD4+ cell count data from the previously reported SINGLE and MOTIVATE studies of dolutegravir and maraviroc were analyzed using a variety of statistical methods to explore the impact of missing data. Four methodologies were used: complete case analysis, simple imputation, mixed models for repeated measures, and multiple imputation. We compared the sensitivity of conclusions to the volume of missing data and to the assumptions underpinning each method. Rates of missing data were greater in the MOTIVATE studies (35%-68% premature withdrawal) than in SINGLE (12%-20%). The sensitivity of results to assumptions about missing data was related to the volume of missing data. Estimates of treatment differences by various analysis methods ranged across a 61 cells/mm3 window in MOTIVATE and a 22 cells/mm3 window in SINGLE. Where missing data are anticipated, analyses require robust statistical and clinical debate of the necessary but unverifiable underlying statistical assumptions. Multiple imputation makes these assumptions transparent, can accommodate a broad range of scenarios, and is a natural analysis for clinical trials in HIV with missing data.
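Because the abstract singles out multiple imputation, here is a minimal sketch of the pooling step (Rubin's rules) applied to hypothetical treatment-difference estimates from m imputed datasets; the numbers are invented for illustration:

    import numpy as np

    def pool_rubin(estimates, variances):
        # Rubin's rules for combining m multiply imputed analyses:
        # pooled estimate, total variance = within + (1 + 1/m) * between.
        estimates, variances = np.asarray(estimates), np.asarray(variances)
        m = len(estimates)
        q_bar = estimates.mean()
        u_bar = variances.mean()                  # within-imputation variance
        b = estimates.var(ddof=1)                 # between-imputation variance
        t = u_bar + (1 + 1 / m) * b
        return q_bar, t

    # Hypothetical treatment differences (cells/mm3) from m = 5 imputed datasets.
    est = [45.0, 52.0, 48.5, 50.0, 47.0]
    var = [36.0, 40.0, 38.0, 35.0, 39.0]
    print(pool_rubin(est, var))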
Racine, Eric; Martin Rubio, Tristana; Chandler, Jennifer; Forlini, Cynthia; Lucke, Jayne
2014-08-01
In the debate on the ethics of the non-medical use of pharmaceuticals for cognitive performance enhancement in healthy individuals there is a clear division between those who view "cognitive enhancement" as ethically unproblematic and those who see such practices as fraught with ethical problems. Yet another, more subtle issue relates to the relevance and quality of the contribution of scholarly bioethics to this debate. More specifically, how have various forms of speculation, anticipatory ethics, and methods to predict scientific trends and societal responses augmented or diminished this contribution? In this paper, we use the discussion of the ethics of cognitive enhancement to explore the positive and negative contribution of speculation in bioethics scholarship. First, we review and discuss how speculation has relied on different sets of assumptions regarding the non-medical use of stimulants, namely: (1) terminology and framing; (2) scientific aspects such as efficacy and safety; (3) estimates of prevalence and consequent normalization; and (4) the need for normative reflection and regulatory guidelines. Second, three methodological guideposts are proposed to alleviate some of the pitfalls of speculation: (1) acknowledge assumptions more explicitly and identify the value attributed to assumptions; (2) validate assumptions with interdisciplinary literature; and (3) adopt a broad perspective to promote more comprehensive reflection. We conclude that, through the examination of the controversy about cognitive enhancement, we can employ these methodological guideposts to enhance the value of contributions from bioethics and minimize potential epistemic and practical pitfalls in this case and perhaps in other areas of bioethical debate.
Case-based pedagogy as a context for collaborative inquiry in the Philippines
NASA Astrophysics Data System (ADS)
Arellano, Elvira L.; Barcenal, Tessie L.; Bilbao, Purita P.; Castellano, Merilin A.; Nichols, Sharon; Tippins, Deborah J.
2001-05-01
The purpose of this study was to investigate the potential for using case-based pedagogy as a context for collaborative inquiry into the teaching and learning of elementary science. The context for this study was the elementary science teacher preparation program at West Visayas State University on the island of Panay in Iloilo City, the Philippines. In this context, triple linguistic conventions involving the interactions of the local Ilonggo dialect, the national language of Filipino (predominantly Tagalog) and English create unique challenges for science teachers. Participants in the study included six elementary student teachers, their respective critic teachers and a research team composed of four Filipino and two U.S. science teacher educators. Two teacher-generated case narratives serve as the centerpiece for deliberation, around which we highlight key tensions that reflect both the struggles and positive aspects of teacher learning that took place. Theoretical perspectives drawn from assumptions underlying the use of case-based pedagogy and scholarship surrounding the community metaphor as a referent for science education curriculum inquiry influenced our understanding of tensions at the intersection of re-presentation of science, authority of knowledge, and professional practice; at the intersection of the lack of a shared language, explicit moral codes, and indigenization; and at the intersection of identity and dilemmas in science teaching. Implications of this study are discussed with respect to the building of science teacher learning communities in both local and global contexts of reform.
Chatterji, Madhabi
2016-12-01
This paper explores avenues for navigating evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on an intervention's effectiveness. A definition of a CSP is provided, drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven (7) major sources of complexity that typify CSPs, threatening assumptions of textbook-recommended experimental designs for performing impact evaluations. Theoretically supported, alternative methodological strategies are discussed to navigate assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include: sequential refinement of the evaluation design through systems thinking and systems-informed logic modeling, and use of extended-term, mixed methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question framing, and selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.
Imperfect information facilitates the evolution of reciprocity.
Kurokawa, Shun
2016-06-01
The existence of cooperation demands explanation since cooperation is costly to the actor. Reciprocity has long been regarded as a potential explanatory mechanism for the existence of cooperation. Reciprocity is a mechanism wherein a cooperator responds to an opponent's behavior by switching his/her own behavior. Hence, a problematic case for the theory of reciprocity evolution arises when information regarding an opponent's behavior is imperfect. Previous theoretical studies have confirmed that imperfect information interferes with the evolution of reciprocity, but this argument is based on the assumption that there are no mistakes in behavior. A previous study presumed that, when such mistakes occur, reciprocity might evolve more readily under imperfect information than under perfect information: reciprocators could then miss defections caused by other reciprocators' mistakes, allowing cooperation to persist when such reciprocators meet. However, contrary to this expectation, that study showed that even when mistakes occur, imperfect information interferes with the evolution of reciprocity. Nevertheless, it assumed that payoffs are linear (i.e., that the effect of behavior is additive and there are no synergistic effects). In this study, we revisited the same problem but removed the assumption that payoffs are linear. We used evolutionarily stable strategy analysis to compare the condition for reciprocity to evolve when mistakes occur and information is imperfect with the condition for reciprocity to evolve when mistakes occur and information is perfect. Our study revealed that when payoffs are not linear, imperfect information can facilitate the evolution of reciprocity when mistakes occur, whereas when payoffs are linear, imperfect information disturbs the evolution of reciprocity even when mistakes occur. Imperfect information can thus encourage the evolution of cooperation. Copyright © 2016 Elsevier Inc. All rights reserved.
The effect of terrain slope on firefighter safety zone effectiveness
Bret Butler; J. Forthofer; K. Shannon; D. Jimenez; D. Frankman
2010-01-01
The current safety zone guidelines used in the US were developed based on the assumption that the fire and safety zone were located on flat terrain. The minimum safe distance for a firefighter to be from a flame was calculated as that corresponding to a radiant incident energy flux of 7.0 kW m-2. Current firefighter safety guidelines are based on the assumption...
Time in School: The Case of the Prudent Patron.
ERIC Educational Resources Information Center
Johnson, Thomas
1978-01-01
Explores the properties of a life cycle model of human capital accumulation under the assumptions that the individual cannot borrow to finance his schooling, but may receive an allowance while specializing. (Author/IRT)
Goldstein, Alisa M; Dondon, Marie-Gabrielle; Andrieu, Nadine
2006-08-01
A design combining both related and unrelated controls, named the case-combined-control design, was recently proposed to increase the power for detecting gene-environment (GxE) interaction. Under a conditional analytic approach, the case-combined-control design appeared to be more efficient and feasible than a classical case-control study for detecting interaction involving rare events. We now propose an unconditional analytic strategy to further increase the power for detecting GxE interactions. This strategy allows the estimation of GxE interaction and exposure (E) main effects under certain assumptions (e.g., no correlation in E between siblings and the same exposure frequency in both control groups). Only the genetic (G) main effect cannot be estimated because it is biased. Using simulations, we show that unconditional logistic regression analysis is often more efficient than conditional analysis for detecting GxE interaction, particularly for a rare gene and strong effects. The unconditional analysis is also at least as efficient as the conditional analysis when the gene is common and the main and joint effects of E and G are small. Under the required assumptions, the unconditional analysis retains more information than the conditional analysis, for which only discordant case-control pairs are informative, leading to more precise estimates of the odds ratios.
ERIC Educational Resources Information Center
Olive, John; Vomvoridi, Eugenia
2006-01-01
This paper critically examines the discrepancies among the pre-requisite fractional concepts assumed by a curricular unit on operations with fractions, the teacher's assumptions about those concepts and a particular student's understanding of fractions. The paper focuses on the case of one student (Tim) in the teacher's 6th grade class who was…
ERIC Educational Resources Information Center
Stott, Angela; Hobden, Paul A.
2016-01-01
This article describes a case study of a gifted high achiever in learning science. This learner was selected on the assumption that drawing attention to the characteristics of a successful learner may improve learning effectiveness of less successful learners. The first author taught the gifted learner and collected data through participant…
A Case for Transforming the Criterion of a Predictive Validity Study
ERIC Educational Resources Information Center
Patterson, Brian F.; Kobrin, Jennifer L.
2011-01-01
This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
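A minimal sketch of the kind of criterion transformation described above, using scipy's Box-Cox routine on simulated, skewed positive outcomes; the data and variable names are made up and are not the cohort data analyzed in the study.

# Hypothetical sketch: Box-Cox transform of a positive criterion before
# fitting a predictive-validity regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
predictor = rng.normal(500, 100, 1000)                             # e.g., a test score
criterion = np.exp(0.002 * predictor + rng.normal(0, 0.3, 1000))   # skewed, positive outcome

y_bc, lam = stats.boxcox(criterion)               # lambda chosen by maximum likelihood
slope, intercept, r, p, se = stats.linregress(predictor, y_bc)
print(f"estimated lambda: {lam:.2f}, R^2 on transformed criterion: {r**2:.3f}")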
Required sample size for monitoring stand dynamics in strict forest reserves: a case study
Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust
2000-01-01
Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...
ERIC Educational Resources Information Center
Ross, John Robert
This analysis of underlying syntactic structure is based on the assumption that the parts of speech called "verbs" and "adjectives" are two subcategories of one major lexical category, "predicate." From this assumption, the hypothesis is advanced that, in languages exhibiting the copula, the deep structure of sentences containing predicate…
THE MODELING OF THE FATE AND TRANSPORT OF ENVIRONMENTAL POLLUTANTS
Current models that predict the fate of organic compounds released to the environment are based on the assumption that these compounds exist exclusively as neutral species. This assumption is untrue under many environmental conditions, as some molecules can exist as cations, anio...
Sensitivity of Rooftop PV Projections in the SunShot Vision Study to Market Assumptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drury, E.; Denholm, P.; Margolis, R.
2013-01-01
The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The SolarDS model was used to simulate rooftop PV demand for this study, based on several PV market assumptions--future electricity rates, customer access to financing, and others--in addition to the SunShot PV price projections. This paper finds that modeled PV demand is highly sensitive to several non-price market assumptions, particularly PV financing parameters.
NASA Technical Reports Server (NTRS)
Gordon, Diana F.
1992-01-01
Selecting a good bias prior to concept learning can be difficult. Therefore, dynamic bias adjustment is becoming increasingly popular. Current dynamic bias adjustment systems, however, are limited in their ability to identify erroneous assumptions about the relationship between the bias and the target concept. Without proper diagnosis, it is difficult to identify and then remedy faulty assumptions. We have developed an approach that makes these assumptions explicit, actively tests them with queries to an oracle, and adjusts the bias based on the test results.
Trujillo, Caleb; Cooper, Melanie M; Klymkowsky, Michael W
2012-01-01
Biological systems, from the molecular to the ecological, involve dynamic interaction networks. To examine student thinking about networks we used graphical responses, since they are relatively easy to evaluate for implied, but unarticulated, assumptions. Senior college-level molecular biology students were presented with simple molecular-level scenarios; surprisingly, most students failed to articulate the basic assumptions needed to generate reasonable graphical representations, and their graphs often contradicted their explicit assumptions. We then developed a tiered Socratic tutorial based on leading questions (prompts) designed to provoke metacognitive reflection. When applied in a group or individual setting, there was clear improvement in targeted areas. Our results highlight the promise of using graphical responses and Socratic prompts in a tutorial context as both a formative assessment for students and an informative feedback system for instructors. Copyright © 2011 Wiley Periodicals, Inc.
Ouwens, Mario J N M; Littlewood, Kavi J; Sauboin, Christophe; Téhard, Bertrand; Denis, François; Boëlle, Pierre-Yves; Alain, Sophie
2015-04-01
Varicella has a high incidence, affecting the vast majority of the population in France, and can lead to severe complications. Almost every individual infected by varicella becomes susceptible to herpes zoster later in life due to reactivation of the latent virus. Zoster is characterized by pain that can be long-lasting in some cases and has no satisfactory treatment. Routine varicella vaccination can prevent varicella. The vaccination strategy of replacing both doses of measles, mumps, and rubella (MMR) vaccine with a combined MMR and varicella (MMRV) vaccine is a means of reaching high vaccination coverage for varicella immunization. The objective of this analysis was to assess the impact of routine varicella vaccination, with MMRV in place of MMR, on the incidence of varicella and zoster diseases in France, and to assess the impact of exogenous boosting of zoster incidence, the age shift in varicella cases, and other possible indirect effects. A dynamic transmission population-based model was developed using epidemiological data for France to determine the force of infection, as well as an empirically derived contact matrix to reduce the assumptions underlying these key drivers of dynamic models. Scenario analyses tested assumptions regarding exogenous boosting, vaccine waning, vaccination coverage, risk of complications, and contact matrices. The model provides a good estimate of the incidence before varicella vaccination implementation in France. When routine varicella vaccination is introduced with current French coverage levels, varicella incidence is predicted to decrease by 57%, and related complications are expected to decrease by 76% over time. After vaccination, exogenous boosting is observed to be the main driver of change in zoster incidence. When exogenous boosting is assumed, there is a temporary increase in zoster incidence before it gradually decreases, whereas without exogenous boosting, varicella vaccination leads to a gradual decrease in zoster incidence. Varying the vaccine efficacy waning and coverage assumptions is still predicted to result in overall benefits from varicella vaccination. In conclusion, the model predicted that MMRV vaccination can significantly reduce varicella incidence. With suboptimal coverage, a limited age shift of varicella cases is predicted to occur post-vaccination with MMRV; however, it does not result in an increase in the number of complications. GSK study identifier: HO-12-6924. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
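For readers unfamiliar with dynamic transmission models, the following is a deliberately oversimplified, unstructured SIR-type sketch with routine vaccination at birth; it omits the age structure, empirical contact matrix, waning, and zoster reactivation of the model described above, and every parameter value is an assumed placeholder.

# Highly simplified, hypothetical dynamic transmission sketch (not the study model).
import numpy as np

beta, gamma = 0.6, 1.0 / 7.0      # assumed transmission and recovery rates (per day)
mu = 1.0 / (80 * 365)             # assumed birth and death rate (per day)
coverage, efficacy = 0.90, 0.95   # assumed vaccination coverage and vaccine efficacy
dt, years = 0.5, 50

S, V, I, R = 0.05, 0.0, 0.001, 0.949   # initial proportions of the population
for _ in range(int(years * 365 / dt)):
    new_inf = beta * S * I
    dS = mu * (1 - coverage * efficacy) - new_inf - mu * S   # unvaccinated births
    dV = mu * coverage * efficacy - mu * V                   # effectively vaccinated births
    dI = new_inf - gamma * I - mu * I
    dR = gamma * I - mu * R
    S, V, I, R = S + dS * dt, V + dV * dt, I + dI * dt, R + dR * dt

print(f"annual varicella incidence after {years} years: {beta * S * I * 365:.5f} per person")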
Zhang, T; Yang, M; Xiao, X; Feng, Z; Li, C; Zhou, Z; Ren, Q; Li, X
2014-03-01
Many infectious diseases exhibit repetitive or regular behaviour over time. Time-domain approaches, such as the seasonal autoregressive integrated moving average model, are often utilized to examine the cyclical behaviour of such diseases. The limitations of time-domain approaches include over-differencing and over-fitting; furthermore, these approaches are inappropriate when the assumption of linearity may not hold. In this study, we implemented a simple and efficient procedure based on the fast Fourier transformation (FFT) approach to evaluate the epidemic dynamics of scarlet fever incidence (2004-2010) in China. This method demonstrated good internal and external validity and overcame some shortcomings of time-domain approaches. The procedure also elucidated the cycling behaviour in terms of environmental factors. We concluded that, under appropriate circumstances of data structure, spectral analysis based on the FFT approach may be applicable for the study of oscillating diseases.
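A minimal sketch of the FFT-based periodogram idea described above, applied to a synthetic monthly incidence series rather than the actual scarlet fever data:

# Illustrative sketch: identify the dominant cycle in a (synthetic) monthly series.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(84)                                  # 7 years of monthly counts
series = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 84)

detrended = series - series.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2             # periodogram via the FFT
freqs = np.fft.rfftfreq(len(detrended), d=1.0)          # cycles per month

dominant = freqs[np.argmax(power[1:]) + 1]              # skip the zero frequency
print(f"dominant period: {1 / dominant:.1f} months")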
Integrated layout based Monte-Carlo simulation for design arc optimization
NASA Astrophysics Data System (ADS)
Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James
2016-03-01
Design rules are created by considering a wafer fail mechanism together with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. Within a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to the SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533.
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
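The core arithmetic of such a simplified cost-effectiveness model is an incremental cost-effectiveness ratio with discounting. The sketch below uses placeholder per-person cost and QALY streams and an assumed 3% discount rate, not the published model inputs:

# Hedged, hypothetical sketch of the incremental cost per QALY calculation.
def discounted(values, rate=0.03):
    """Present value of a stream indexed by year (year 0, 1, 2, ...)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

# Per-person streams, illustrative placeholder numbers only.
cost_vacc = [360, 5, 5, 5]          # vaccination program plus follow-up costs
cost_novacc = [0, 40, 60, 80]       # treatment costs of HPV-related disease
qaly_vacc = [0.95, 0.95, 0.95, 0.95]
qaly_novacc = [0.95, 0.94, 0.93, 0.92]

delta_cost = discounted(cost_vacc) - discounted(cost_novacc)
delta_qaly = discounted(qaly_vacc) - discounted(qaly_novacc)
icer = delta_cost / delta_qaly
print(f"incremental cost-effectiveness ratio: ${icer:,.0f} per QALY gained")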
Paths to destruction: the lives and crimes of two serial killers.
Wolf, Barbara C; Lavezzi, Wendy A
2007-01-01
Although research into the phenomenon of serial murder has revealed that serial killers frequently do not fit the initially described paradigm in terms of their physical and psychological profiles, backgrounds, and motives to kill, the media continues to sensationalize the figures of such killers and the investigators who attempt to analyze them on the basis of aspects of their crimes. Although the so-called "typical" profile of the serial murderer has proven accurate in some instances, in many other cases the demographics and behaviors of these killers have deviated widely from the generalized assumptions. This report details two unusual cases in which five and eight murders were committed in upstate New York. The lives and crimes of these offenders illustrate the wide spectrum of variations in the backgrounds, demographics, motivations, and actions witnessed among serial murderers, and highlight the limitations and dangers of profiling based on generalities.
[„Kids’ Skills” by Ben Furman – Description and Research Review].
Perband, Anke; Haupts, Nadja; Rogner, Josef
2016-01-01
The article describes the programme "Kids' Skills" by the Finnish psychiatrist Ben Furman. "Kids' Skills" was developed to address behavioural issues in children. It is based on the assumption that children's behavioural problems should not be pathologized, but can instead be corrected by learning a corresponding skill. The programme is characterised by its focus on strengths and its humorous and playful approach. The 15 steps of "Kids' Skills" are intended to identify the specific skill, help generate a learning process and continue motivating the child. The authors describe the steps of the programme using a case study. They also address the limited number of existing studies, which have included a telephone and an online survey of practitioners using the programme, as well as case studies. The results of these studies are discussed with regard to their basis in evidence and practical relevance. Continuing research is recommended and possible implementations are suggested.
Solomon, Benjamin George
2014-07-01
A wide variety of effect sizes (ESs) has been used in the single-case design literature. Several researchers have "stress tested" these ESs by subjecting them to various degrees of problem data (e.g., autocorrelation, slope), establishing the conditions under which different ESs can be considered valid. However, on the back end, few researchers have considered how prevalent and severe these problems are in extant data and, as a result, how concerned applied researchers should be. The current study extracted and aggregated indicators of violations of normality and independence across four domains of educational study. Significant violations were found in total and across fields, including low levels of autocorrelation and moderate levels of absolute trend. These violations affect the selection and interpretation of ESs at the individual study level and for meta-analysis. Implications and recommendations are discussed. © The Author(s) 2013.
Applicability of the Effective-Medium Approximation to Heterogeneous Aerosol Particles.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Liu, Li
2016-01-01
The effective-medium approximation (EMA) is based on the assumption that a heterogeneous particle can have a homogeneous counterpart possessing similar scattering and absorption properties. We analyze the numerical accuracy of the EMA by comparing superposition T-matrix computations for spherical aerosol particles filled with numerous randomly distributed small inclusions and Lorenz-Mie computations based on the Maxwell-Garnett mixing rule. We verify numerically that the EMA can indeed be realized for inclusion size parameters smaller than a threshold value. The threshold size parameter depends on the refractive-index contrast between the host and inclusion materials and quite often does not exceed several tenths, especially in calculations of the scattering matrix and the absorption cross section. As the inclusion size parameter approaches the threshold value, the scattering-matrix errors of the EMA start to grow with increasing host size parameter and/or number of inclusions. We confirm, in particular, the existence of the effective-medium regime in the important case of dust aerosols with hematite or air-bubble inclusions, but then the large refractive-index contrast necessitates inclusion size parameters of the order of a few tenths. Irrespective of the highly restricted conditions of applicability of the EMA, our results provide further evidence that the effective-medium regime must be a direct corollary of the macroscopic Maxwell equations under specific assumptions.
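For reference, the Maxwell-Garnett mixing rule mentioned above can be written down in a few lines; the refractive indices and volume fraction below are illustrative values, not those used in the T-matrix comparison.

# Sketch of the Maxwell-Garnett mixing rule for a host with a small volume
# fraction of inclusions; all numerical values are illustrative assumptions.
import numpy as np

def maxwell_garnett(m_host, m_incl, f):
    """Effective refractive index for inclusion volume fraction f."""
    eps_h, eps_i = m_host**2, m_incl**2
    eps_eff = eps_h * (eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)) \
                    / (eps_i + 2 * eps_h - f * (eps_i - eps_h))
    return np.sqrt(eps_eff)

m_dust = 1.55 + 0.001j      # host material (e.g., mineral dust), assumed value
m_hematite = 2.6 + 0.5j     # inclusion material (e.g., hematite), assumed value
print("effective refractive index:", maxwell_garnett(m_dust, m_hematite, f=0.05))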
Immunity to proactive interference is not a property of the focus of attention in working memory.
Ralph, Alicia; Walters, Jade N; Stevens, Alison; Fitzgerald, Kirra J; Tehan, Gerald; Surprenant, Aimee M; Neath, Ian; Turcotte, Josée
2011-02-01
The Focus of Attention (FOA) is the latest incarnation of a limited-capacity store in which a small number of items, in this case four, are deemed to be readily accessible and do not need to be retrieved. A corollary of these ideas is that items in the FOA are always immune to proactive interference (PI). While there is empirical support for instances of immunity to PI in short-term retention tasks that involve memory for four-item lists, there are also many instances in which PI is observed with four-item lists, as well as instances where PI and immunity to PI can be shown in the same experiment. In contrast to the FOA assumptions, an alternative cue-based account predicts both the presence of PI and immunity to PI as a function of the relation between the cues available and the particular test. Three experiments contrasted the FOA assumptions and the cue-based approach in a short-term cued recall task in which PI is manipulated by testing whether the presentation of previous, similar items would interfere with immediate recall of three list items. The results indicated that even with very short lists, both PI and immunity to PI could be observed. The PI effects observed in our experiments are at odds with the FOA approach and are more readily explained by the cueing account.
Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.
Spiess, Martin; Jordan, Pascal; Wendt, Mike
2018-05-07
In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests, where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switch experiment based on an experimental within-subjects design with 32 cells and 33 participants.
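As a toy illustration of the naive percentile bootstrap mentioned above (not the authors' estimator or data), one can resample participants to obtain a confidence interval for a within-subject condition difference in an unbalanced design; the simulated reaction times and sample sizes below are hypothetical.

# Minimal sketch: naive percentile bootstrap CI for a within-subject difference.
import numpy as np

rng = np.random.default_rng(3)
n_subj = 33
# Unbalanced design: each subject contributes a different number of trials per cell.
effect = 25.0
cond_a = [rng.normal(600, 50, rng.integers(3, 8)) for _ in range(n_subj)]
cond_b = [rng.normal(600 + effect, 50, rng.integers(3, 8)) for _ in range(n_subj)]
diffs = np.array([b.mean() - a.mean() for a, b in zip(cond_a, cond_b)])

# Resample participants with replacement and recompute the mean difference.
boot = np.array([rng.choice(diffs, size=n_subj, replace=True).mean()
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated effect: {diffs.mean():.1f} ms, 95% bootstrap CI: [{lo:.1f}, {hi:.1f}]")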
Node-Based Learning of Multiple Gaussian Graphical Models
Mohan, Karthik; London, Palma; Fazel, Maryam; Witten, Daniela; Lee, Su-In
2014-01-01
We consider the problem of estimating high-dimensional Gaussian graphical models corresponding to a single set of variables under several distinct conditions. This problem is motivated by the task of recovering transcriptional regulatory networks on the basis of gene expression data containing heterogeneous samples, such as different disease states, multiple species, or different developmental stages. We assume that most aspects of the conditional dependence networks are shared, but that there are some structured differences between them. Rather than assuming that similarities and differences between networks are driven by individual edges, we take a node-based approach, which in many cases provides a more intuitive interpretation of the network differences. We consider estimation under two distinct assumptions: (1) differences between the K networks are due to individual nodes that are perturbed across conditions, or (2) similarities among the K networks are due to the presence of common hub nodes that are shared across all K networks. Using a row-column overlap norm penalty function, we formulate two convex optimization problems that correspond to these two assumptions. We solve these problems using an alternating direction method of multipliers algorithm, and we derive a set of necessary and sufficient conditions that allows us to decompose the problem into independent subproblems so that our algorithm can be scaled to high-dimensional settings. Our proposal is illustrated on synthetic data, a webpage data set, and a brain cancer gene expression data set. PMID:25309137
NASA Astrophysics Data System (ADS)
Girinoto, Sadik, Kusman; Indahwati
2017-03-01
The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). Estimation for unplanned domains (sub-districts and villages) is limited in its ability to yield reliable direct estimates. One possible solution to this problem is to employ small area estimation techniques. The popular approach to small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. As an alternative, the M-quantile regression approach to small area estimation models area-specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It yields outlier-robust estimation through an influence function of the M-estimator type and does not require strong distributional assumptions. In this paper, the aim is to estimate a poverty indicator at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor District. We also compare the results with direct estimates. The results suggest that the framework may be preferable when the direct estimate indicates no incidence of poverty at all in a small area.
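As a simplified stand-in for the robust-estimation idea behind M-quantile models (not the full M-quantile small area estimator), the sketch below contrasts ordinary least squares with a Huber-type M-regression on made-up data using statsmodels:

# Simplified stand-in: Huber-type robust M-regression vs OLS on toy data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.normal(0, 1, n)                          # auxiliary covariate
y = 2.0 + 1.5 * x + rng.normal(0, 1, n)
y[:10] += 15                                     # a few outlying observations

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print("OLS slope:", round(ols.params[1], 2), " robust M-slope:", round(rlm.params[1], 2))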
Ma, Dinglong; Liu, Jing; Qi, Jinyi; Marcu, Laura
2017-02-21
In this response we underscore that the instrumentation described in the original publication (Liu et al 2012 Phys. Med. Biol. 57 843-65) was based on a pulse-sampling technique, while the comment by Zhang et al is based on the assumption that time-correlated single photon counting (TCSPC) instrumentation was used. Therefore the arguments made in the comment are not applicable to the noise model reported by Liu et al. As reported in the literature (Lakowicz 2006 Principles of Fluorescence Spectroscopy (New York: Springer)), while in TCSPC the experimental noise can be estimated from Poisson statistics, such an assumption is not valid for pulse-sampling (transient recording) techniques. To further clarify this aspect, we present here a comprehensive noise model describing the signal and noise propagation of pulse-sampling time-resolved fluorescence detection. Experimental data recorded under various conditions are analyzed as a case study to demonstrate the noise model of our instrumental system. In addition, regarding the suggestion to correct equation (3) in Liu et al (2012 Phys. Med. Biol. 57 843-65), the notation of the discrete-time Laguerre function in the original publication was clear and consistent with literature conventions (Marmarelis 1993 Ann. Biomed. Eng. 21 573-89, Westwick and Kearney 2003 Identification of Nonlinear Physiological Systems (Hoboken, NJ: Wiley)). Thus, it does not require revision.
Physiologically Based Pharmacokinetic (PBPK) Modeling of ...
Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, inter-individual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. Objectives: To evaluate the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and one hybrid mouse strains to calibrate and extend existing physiologically-based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). A Bayesian population analysis of inter-strain variability was used to quantify variability in TCE metabolism. Results: Concentration-time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than that through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than DCA production (5-fold range), although uncertainty bounds for DCA exceeded the predicted variability. Conclusions:
Cluster detection methods applied to the Upper Cape Cod cancer data.
Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann
2005-09-15
A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three different latency assumptions produced three different spatial patterns of cases and controls. For the 20-year latency assumption, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
A comment on the use of flushing time, residence time, and age as transport time scales
Monsen, N.E.; Cloern, J.E.; Lucas, L.V.; Monismith, Stephen G.
2002-01-01
Applications of transport time scales are pervasive in biological, hydrologic, and geochemical studies, yet these time scales are not consistently defined and applied with rigor in the literature. We compare three transport time scales (flushing time, age, and residence time) commonly used to measure the retention of water or scalar quantities transported with water. We identify the underlying assumptions associated with each time scale, describe procedures for computing these time scales in idealized cases, and identify pitfalls when real-world systems deviate from these idealizations. We then apply the time scale definitions to a shallow 378 ha tidal lake to illustrate how deviations between real water bodies and the idealized examples can result from: (1) non-steady flow; (2) spatial variability in bathymetry, circulation, and transport time scales; and (3) tides that introduce complexities not accounted for in the idealized cases. These examples illustrate that no single transport time scale is valid for all time periods, locations, and constituents, and no one time scale describes all transport processes. We encourage aquatic scientists to rigorously define the transport time scale when it is applied, identify the underlying assumptions in the application of that concept, and ask if those assumptions are valid in the application of that approach for computing transport time scales in real systems.
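The simplest of the three time scales, the flushing time of a well-mixed basin, reduces to T_f = V/Q. The sketch below uses an assumed mean depth and through-flow for a 378 ha basin purely for illustration; age and residence time generally require tracking individual water parcels in a spatially resolved model and will differ from T_f.

# Toy flushing-time calculation; depth and outflow are assumed placeholder values.
volume_m3 = 378 * 1e4 * 2.0        # 378 ha surface area x ~2 m assumed mean depth
outflow_m3_per_s = 25.0            # assumed net through-flow

flushing_time_days = volume_m3 / outflow_m3_per_s / 86400
print(f"flushing time: {flushing_time_days:.1f} days")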
Probabilistic Fracture Mechanics Analysis of the Orbiter's LH2 Feedline Flowliner
NASA Technical Reports Server (NTRS)
Bonacuse, Peter J. (Technical Monitor); Hudak, Stephen J., Jr.; Huyse, Luc; Chell, Graham; Lee, Yi-Der; Riha, David S.; Thacker, Ben; McClung, Craig; Gardner, Brian; Leverant, Gerald R.;
2005-01-01
Work performed by Southwest Research Institute (SwRI) as part of an Independent Technical Assessment (ITA) for the NASA Engineering and Safety Center (NESC) is summarized. The ITA goal was to establish a flight rationale in light of a history of fatigue cracking due to flow-induced vibrations in the feedline flowliners that supply liquid hydrogen to the space shuttle main engines. Prior deterministic analyses using worst-case assumptions predicted failure in a single flight. The current work formulated statistical models for dynamic loading and cryogenic fatigue crack growth properties, instead of using worst-case assumptions. Weight function solutions for bivariant stressing were developed to determine accurate crack "driving forces". Monte Carlo simulations showed that low flowliner probabilities of failure (POF = 0.001 to 0.0001) are achievable, provided pre-flight inspections for cracks are performed with adequate probability of detection (POD)-specifically, 20/75 mils with 50%/99% POD. Measurements to confirm the assumed POD curves are recommended. Since the computed POFs are very sensitive to the cyclic loads/stresses and the analysis of strain gage data revealed inconsistencies with the previous assumption of a single dominant vibration mode, further work to reconcile this difference is recommended. It is possible that the unaccounted-for vibrational modes in the flight spectra could increase the computed POFs.
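A generic Monte Carlo probability-of-failure calculation with an inspection screen has the structure sketched below; every distribution, the POD curve, and the critical crack size are placeholder assumptions, not the flowliner inputs.

# Hedged sketch of a Monte Carlo probability-of-failure estimate with inspection.
import numpy as np

rng = np.random.default_rng(5)
n = 500_000

# Placeholder distributions (inches); not the flowliner analysis inputs.
initial_crack = rng.lognormal(np.log(0.008), 0.7, n)               # pre-existing crack sizes
pod = 1.0 / (1.0 + np.exp(-(initial_crack - 0.020) / 0.004))        # assumed inspection POD curve
missed = rng.random(n) > pod                                        # cracks that escape inspection

growth = rng.lognormal(np.log(0.005), 0.8, n)                       # assumed crack growth per flight
critical = rng.normal(0.075, 0.010, n)                              # assumed critical crack size

pof = np.mean(missed & (initial_crack + growth > critical))
print(f"estimated probability of failure: {pof:.1e}")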
NASA Astrophysics Data System (ADS)
Galloway, A. W. E.; Eisenlord, M. E.; Brett, M. T.
2016-02-01
Stable isotope (SI) based mixing models are the most common approach used to infer resource pathways in consumers. However, SI based analyses are often underdetermined, and consumer SI fractionation is usually unknown. The use of fatty acid (FA) tracers in mixing models offers an alternative approach that can resolve the underdetermined constraint. A limitation of both methods is the considerable uncertainty about consumer 'trophic modification' (TM) of dietary FA or SI, which occurs as consumers transform dietary resources into tissues. We tested the utility of SI and FA approaches for inferring the diets of the marine benthic isopod Idotea wosnesenskii fed various marine macroalgae in controlled feeding trials. Our analyses quantified how the accuracy and precision of Bayesian mixing models were influenced by the choice of algorithm (SIAR vs MixSIR), fractionation (assumed or known), and whether the model was under- or overdetermined (seven sources and two vs 26 tracers) for cases where isopods were fed an exclusive diet of one of the seven different macroalgae. Using the conventional approach (i.e., 2 SI with assumed TM) resulted in average model outputs, i.e., the contribution from the exclusive resource = 0.20 ± 0.23 (0.00-0.79), mean ± SD (95% credible interval), that differed only slightly from the prior assumption. Using the FA based approach with known TM greatly improved model performance, i.e., the contribution from the exclusive resource = 0.91 ± 0.10 (0.58-0.99). The choice of algorithm only made a difference when fractionation was known and the model was overdetermined (FA approach). In this case SIAR and MixSIR had outputs of 0.86 ± 0.11 (0.48-0.96) and 0.96 ± 0.05 (0.79-1.00), respectively. This analysis shows that the choice of dietary tracers and the assumption of consumer trophic modification greatly influence the performance of mixing model dietary reconstructions, and ultimately our understanding of what resources actually support aquatic consumers.
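As a minimal illustration of why a two-tracer isotope mixing model becomes underdetermined with many sources (the motivation for the many-tracer FA approach above): two tracer balances plus the sum-to-one constraint determine at most three source proportions exactly. The tracer values below are invented, with an assumed fractionation already applied.

# Toy deterministic mixing calculation with two tracers and three sources.
import numpy as np

# Rows: tracer 1 (e.g., delta13C), tracer 2 (e.g., delta15N), sum-to-one constraint.
# Columns: three hypothetical diet sources.
sources = np.array([[-18.0, -21.0, -30.0],
                    [  9.0,   6.0,   4.0],
                    [  1.0,   1.0,   1.0]])
consumer = np.array([-21.5, 7.0, 1.0])        # consumer tracer values (assumed)

proportions = np.linalg.solve(sources, consumer)
print("estimated diet proportions:", np.round(proportions, 2))

With seven sources and only two tracers, this linear system has infinitely many solutions, which is why the Bayesian outputs in that case stay close to the prior.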
ERIC Educational Resources Information Center
Penman, Kenneth A.; Niccolai, Frances R.
1985-01-01
The first of a series of articles explains the legal principles of tort liability, waiver of liability, comparative negligence, assumption of risk, and contributory negligence. Summarizes the kinds of cases going to court involving sport facility design and operation. (MLF)
The Central Registry for Child Abuse Cases: Rethinking Basic Assumptions
ERIC Educational Resources Information Center
Whiting, Leila
1977-01-01
Class data pools on abused and neglected children and their families are found desirable for program planning, but identification by name is of questionable value and possibly a dangerous invasion of civil liberties. (MS)
Fore! Forward on the Course of Diversity (Focus on Teaching).
ERIC Educational Resources Information Center
Pomerenke, Paula J.
1994-01-01
Presents a case study and writing assignment used in a business communication class that help students uncover assumptions that may disadvantage both females and males when diversity within and between gender groups is ignored. (SR)
Unpacking Assumptions in Research Synthesis: A Critical Construct Synthesis Approach
ERIC Educational Resources Information Center
Wolgemuth, Jennifer R.; Hicks, Tyler; Agosto, Vonzell
2017-01-01
Research syntheses in education, particularly meta-analyses and best-evidence syntheses, identify evidence-based practices by combining findings across studies whose constructs are similar enough to warrant comparison. Yet constructs come preloaded with social, historical, political, and cultural assumptions that anticipate how research problems…
Principal Score Methods: Assumptions, Extensions, and Practical Considerations
ERIC Educational Resources Information Center
Feller, Avi; Mealli, Fabrizia; Miratrix, Luke
2017-01-01
Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the "principal score," the conditional probability of belonging…
NASA Astrophysics Data System (ADS)
Imandi, Venkataramana; Nair, Nisanth N.
2016-09-01
The absence of isotope scrambling observed by Henry and coworkers in the Wacker oxidation of deuterated allylic alcohol was used by them as support for the inner-sphere hydroxypalladation mechanism. One of the assumptions used to interpret their experimental data was that allyl alcohol oxidation takes place through non-cyclic intermediate routes, as in the case of ethene. Here we verify this assumption through ab initio metadynamics simulations of the Wacker oxidation of allyl alcohol in explicit solvent. The importance of our results in interpreting the isotope-scrambling experiments is discussed.
Wealth distribution on complex networks
NASA Astrophysics Data System (ADS)
Ichinomiya, Takashi
2012-12-01
We study the wealth distribution of the Bouchaud-Mézard model on complex networks. It is known from numerical simulations that this distribution depends on the topology of the network; however, no one has succeeded in explaining it. Using “adiabatic” and “independent” assumptions along with the central-limit theorem, we derive equations that determine the probability distribution function. The results are compared to those of simulations for various networks. We find good agreement between our theory and the simulations, except for the case of Watts-Strogatz networks with a low rewiring rate, due to the breakdown of the “independent” assumption.
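A stochastic simulation of the Bouchaud-Mézard dynamics on an arbitrary network can be sketched as below; the random-graph topology, parameters, and time step are illustrative choices, not those of the study.

# Sketch of a Bouchaud-Mezard wealth-exchange simulation on a network.
import numpy as np

rng = np.random.default_rng(6)
N, J, sigma, dt, steps = 500, 0.1, 0.3, 0.01, 5000   # assumed parameters

# Example topology: an Erdos-Renyi-like random graph (symmetric, no self-loops).
A = (rng.random((N, N)) < 8 / N).astype(float)
A = np.triu(A, 1)
A = A + A.T

w = np.ones(N)
for _ in range(steps):
    exchange = J * (A @ w - A.sum(axis=1) * w)       # sum_j A_ij (w_j - w_i)
    noise = sigma * w * rng.normal(0, np.sqrt(dt), N)  # multiplicative noise (Euler-Maruyama)
    w = np.maximum(w + exchange * dt + noise, 1e-12)   # clamp guards against discretization error
    w /= w.mean()                                      # work with wealth normalized by its mean

print("top 1% wealth share:", np.sort(w)[-N // 100:].sum() / w.sum())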
Successful lithium treatment of transvestism associated with manic-depression.
Ward, N G
1975-09-01
A case of transvestism in a 24-year-old manic-depressive man is described. The behavior had been maintained for 2 years and disappeared soon after lithium treatment was begun. It has not returned during the first year on lithium. Dynamic and behavioral explanations for this unusual therapeutic response are considered. The dynamic explanation involves the assumption that the transvestism was perpetuated by mood-dependent motives that were eliminated by lithium. The behavioral explanation involves the assumption that the manic state itself became an intermittent reinforcer for the transvestism, and the lithium, by eliminating the mania, created a relatively permanent extinction period.
Infrastructure Analysis Tools: A Focus on Cash Flow Analysis (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, M.; Penev, M.
2012-09-01
NREL has developed and maintains a variety of infrastructure analysis models for the U.S. Department of Energy. Business case analysis has recently been added to this tool set. This presentation focuses on cash flow analysis. Cash flows depend upon infrastructure costs, optimized spatially and temporally, and assumptions about financing and revenue. NREL has incorporated detailed metrics on financing and incentives into the models. Next steps in modeling include continuing to collect feedback on regional/local infrastructure development activities and 'roadmap' dynamics, and incorporating consumer preference assumptions on infrastructure to provide direct feedback between vehicles and station rollout.
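The cash-flow metrics referred to above boil down to discounted cash-flow arithmetic such as net present value and payback time. The sketch below uses placeholder station costs, revenues, and discount rate, not NREL model inputs.

# Hypothetical cash-flow sketch for a single fueling station (placeholder numbers).
capital = 1_500_000                 # assumed upfront station cost, $
annual_revenue = 420_000            # assumed annual revenue, $
annual_om = 180_000                 # assumed annual operations and maintenance, $
discount_rate = 0.08
years = 15

cash_flows = [-capital] + [annual_revenue - annual_om] * years
npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

cumulative, payback = -capital, None
for year in range(1, years + 1):
    cumulative += annual_revenue - annual_om
    if payback is None and cumulative >= 0:
        payback = year

print(f"NPV: ${npv:,.0f}, simple payback: {payback} years")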
A new class of asymptotically non-chaotic vacuum singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinger, Paul, E-mail: paul.klinger@univie.ac.at
2015-12-15
The BKL conjecture, stated in the 1960s and early 1970s by Belinski, Khalatnikov and Lifschitz, proposes a detailed description of the generic asymptotic dynamics of spacetimes as they approach a spacelike singularity. It predicts complicated chaotic behaviour in the generic case, but a simpler non-chaotic one in cases with symmetry assumptions or certain kinds of matter fields. Here we construct a new class of four-dimensional vacuum spacetimes containing spacelike singularities which show non-chaotic behaviour. In contrast with previous constructions, no symmetry assumptions are made. Rather, the metric is decomposed in Iwasawa variables and conditions on the asymptotic evolution of some of them are imposed. The constructed solutions contain five free functions of all space coordinates, two of which are constrained by inequalities. We investigate continuous and discrete isometries and compare the solutions to previous constructions. Finally, we give the asymptotic behaviour of the metric components and curvature.
Anisotropic magnetotail equilibrium and convection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hau, L.N.
This paper reports on self-consistent two-dimensional equilibria with anisotropic plasma pressure for the Earth's magnetotail. These configurations are obtained by numerically solving the generalized Grad-Shafranov equation, describing anisotropic plasmas with p‖ ≠ p⊥, including the Earth's dipolar field. Consistency between these new equilibria and the assumption of steady-state, sunward convection, described by the double-adiabatic laws, is examined. As for the case of isotropic pressure [Erickson and Wolf, 1980], there exists a discrepancy between typical quiet-time magnetic field models and the assumption of steady-state double-adiabatic lossless plasma sheet convection. However, unlike that case, this inconsistency cannot be removed by the presence of a weak equatorial normal magnetic field strength in the near-tail region: magnetic field configurations of this type produce unreasonably large pressure anisotropies, p‖ > p⊥, in the plasma sheet. 16 refs., 5 figs.
NASA Astrophysics Data System (ADS)
Yang, X.; Xiao, C.; Chen, Y.; Xu, T.; Yu, Y.; Xu, M.; Wang, L.; Wang, X.; Lin, C.
2018-03-01
Recently, a new diagnostic method, the Laser-driven Ion-beam Trace Probe (LITP), has been proposed to reconstruct 2D profiles of the poloidal magnetic field (Bp) and radial electric field (Er) in tokamak devices. A linear assumption and a test particle model were used in those reconstructions. In some toroidal devices, such as the spherical tokamak and the Reversed Field Pinch (RFP), Bp is not small enough to meet the linear assumption. In those cases, the reconstruction error increases quickly when Bp is larger than 10% of the toroidal magnetic field (Bt), and the previous test particle model may cause large errors in the tomography process. Here a nonlinear reconstruction method is proposed for those cases. Preliminary numerical results show that LITP could be applied not only in tokamak devices, but also in other toroidal devices, such as the spherical tokamak, RFP, etc.