Sample records for simplified risk model

  1. A Simplified Approach to Risk Assessment Based on System Dynamics: An Industrial Case Study.

    PubMed

    Garbolino, Emmanuel; Chery, Jean-Pierre; Guarnieri, Franck

    2016-01-01

    Seveso plants are complex sociotechnical systems, which makes it appropriate to support any risk assessment with a model of the system. However, more often than not, this step is only partially addressed, simplified, or avoided in safety reports. At the same time, investigations have shown that the complexity of industrial systems is frequently a factor in accidents, due to interactions between their technical, human, and organizational dimensions. In order to handle both this complexity and changes in the system over time, this article proposes an original and simplified qualitative risk evaluation method based on the system dynamics theory developed by Forrester in the early 1960s. The methodology supports the development of a dynamic risk assessment framework dedicated to industrial activities. It consists of 10 complementary steps grouped into two main activities: system dynamics modeling of the sociotechnical system and risk analysis. This system dynamics risk analysis is applied to a case study of a chemical plant and provides a way to assess the technological and organizational components of safety. © 2016 Society for Risk Analysis.

  2. Impact of Missing Physiologic Data on Performance of the Simplified Acute Physiology Score 3 Risk-Prediction Model.

    PubMed

    Engerström, Lars; Nolin, Thomas; Mårdh, Caroline; Sjöberg, Folke; Karlström, Göran; Fredrikson, Mats; Walther, Sten M

    2017-12-01

    The Simplified Acute Physiology 3 outcome prediction model has a narrow time window for recording physiologic measurements. Our objective was to examine the prevalence and impact of missing physiologic data on the Simplified Acute Physiology 3 model's performance. Retrospective analysis of prospectively collected data. Sixty-three ICUs in the Swedish Intensive Care Registry. Patients admitted during 2011-2014 (n = 107,310). None. Model performance was analyzed using the area under the receiver operating curve, scaled Brier's score, and standardized mortality rate. We used a recalibrated Simplified Acute Physiology 3 model and examined model performance in the original dataset and in a dataset of complete records where missing data were generated (simulated dataset). One or more data were missing in 40.9% of the admissions, more common in survivors and low-risk admissions than in nonsurvivors and high-risk admissions. Discrimination did not decrease with one to two missing variables, but accuracy was highest with no missing data. Calibration was best in the original dataset with a mix of full records and records with some missing values (area under the receiver operating curve was 0.85, scaled Brier 27%, and standardized mortality rate 0.99). With zero, one, and two data missing, the scaled Brier was 31%, 26%, and 21%; area under the receiver operating curve was 0.84, 0.87, and 0.89; and standardized mortality rate was 0.92, 1.05 and 1.10, respectively. Datasets where the missing data were simulated for oxygenation or oxygenation and hydrogen ion concentration together performed worse than datasets with these data originally missing. There is a coupling between missing physiologic data, admission type, low risk, and survival. Increased loss of physiologic data reduced model performance and will deflate mortality risk, resulting in falsely high standardized mortality rates.
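    The two accuracy metrics quoted in this record can be computed from first principles. A minimal sketch of the standardized mortality rate and the scaled (normalized) Brier score, using their standard definitions rather than the study's exact implementation (function names are ours):

```python
def scaled_brier(y_true, y_prob):
    """Scaled Brier score: 1 - Brier / reference Brier, where the
    reference model predicts the cohort mortality rate for everyone.
    Higher is better; 1.0 would be a perfect model."""
    n = len(y_true)
    brier = sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / n
    p_bar = sum(y_true) / n  # observed mortality rate
    return 1 - brier / (p_bar * (1 - p_bar))

def standardized_mortality_rate(y_true, y_prob):
    """Observed deaths divided by the model-predicted number of deaths."""
    return sum(y_true) / sum(y_prob)
```

    An SMR above 1.0 means more deaths were observed than the model predicted, which is consistent with the abstract's finding that losing physiologic data deflates predicted risk and inflates the SMR.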

  3. How to Decide on Modeling Details: Risk and Benefit Assessment.

    PubMed

    Özilgen, Mustafa

    Mathematical models based on thermodynamic, kinetic, heat, and mass transfer analysis are central to this chapter. Models of microbial growth, death, and enzyme inactivation are also needed for this task, as is the modeling of material properties, including those pertinent to conduction and convection heating, to mass transfer (such as diffusion and convective mass transfer), and to thermodynamic properties (such as specific heat, enthalpy, Gibbs free energy of formation, and specific chemical exergy). The origins, simplifying assumptions, and uses of model equations are discussed in this chapter, together with their benefits. The simplified forms of these models are sometimes referred to as "laws," such as "the first law of thermodynamics" or "Fick's second law." Starting a modeling study with such "laws" without considering the conditions under which they are valid runs the risk of ending up with erroneous conclusions. On the other hand, models that start from fundamental concepts and are simplified with appropriate considerations may offer explanations for phenomena that cannot be obtained from measurements or unprocessed experimental data alone. The discussion presented here is strengthened with case studies and references to the literature.

  4. Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.

    This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from industry compilations of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study on alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance, based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional risk metrics with alternate risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on the safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.

  5. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.

  6. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    PubMed

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10% risk, 10-20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" risk (<10% risk of a major coronary event in the next 10 years), 22% as having "moderately high" (10-20%) risk, and 7% as having "high" (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassifications varied significantly by gender, age, and underlying CHD risk. 
Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
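    The ATP-III grouping that both Framingham variants feed into is a simple banding of the continuous 10-year risk estimate. A sketch using the cut-points quoted in the abstract (the function name and labels are ours; the thresholds are from the record):

```python
def atp_iii_risk_group(ten_year_risk_pct):
    """Band a 10-year coronary risk estimate (percent) into the
    ATP-III treatment groups described in the abstract."""
    if ten_year_risk_pct < 10:
        return "moderate (<10%)"
    elif ten_year_risk_pct <= 20:
        return "moderately high (10-20%)"
    return "high (>20%)"
```

    Because the bands are hard thresholds, even a small difference between the original and point-based risk estimates near a cut-point moves a patient into a different treatment group, which is the misclassification mechanism the study quantifies.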

  7. Simple new risk score model for adult cardiac extracorporeal membrane oxygenation: simple cardiac ECMO score.

    PubMed

    Peigh, Graham; Cavarocchi, Nicholas; Keith, Scott W; Hirose, Hitoshi

    2015-10-01

    Although the use of cardiac extracorporeal membrane oxygenation (ECMO) is increasing in adult patients, the field lacks understanding of the associated risk factors. While standard intensive care unit risk scores such as SAPS II (simplified acute physiology score II), SOFA (sequential organ failure assessment), and APACHE II (acute physiology and chronic health evaluation II), or disease-specific scores such as MELD (model for end-stage liver disease) and RIFLE (kidney risk, injury, failure, loss of function, ESRD) exist, they may not apply to adult cardiac ECMO patients, as their risk factors differ from the variables used in these scores. Between 2010 and 2014, 73 ECMO runs were performed for cardiac support at our institution. Patient demographics and survival were retrospectively analyzed. A new, easily calculated score for predicting ECMO mortality was created using risk factors identified from univariate and multivariate analyses, and model discrimination was compared with other scoring systems. Cardiac ECMO was performed on 73 patients (47 males and 26 females) with a mean age of 48 ± 14 y. Sixty-four percent of patients (47/73) survived ECMO support. Pre-ECMO SAPS II, SOFA, APACHE II, MELD, RIFLE, PRESERVE, and ECMOnet scores were not correlated with survival. Univariate analysis of pre-ECMO risk factors demonstrated that increased lactate, renal dysfunction, and postcardiotomy cardiogenic shock were risk factors for death. Applying these data to a new simplified cardiac ECMO score (minimum risk = 0, maximum = 5) predicted patient survival. Survivors had a lower risk score (1.8 ± 1.2) than nonsurvivors (3.0 ± 0.99), P < 0.0001. Common intensive care unit or disease-specific risk scores calculated for cardiac ECMO patients did not correlate with ECMO survival, whereas the new simplified cardiac ECMO score provided survival predictability. Copyright © 2015 Elsevier Inc. All rights reserved.
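    The additive structure of such a 0-5 score can be sketched as follows. Note: the abstract reports the three risk factors (elevated lactate, renal dysfunction, postcardiotomy cardiogenic shock) and the 0-5 range, but not the published point assignments, so the weights below are purely hypothetical placeholders, not the authors' score:

```python
def cardiac_ecmo_score(elevated_lactate, renal_dysfunction, postcardiotomy_shock):
    """Illustrative additive risk score (0 = minimal risk, 5 = maximal).
    The point weights here are HYPOTHETICAL; the source abstract does
    not state how the three factors map onto the 0-5 scale."""
    score = 0
    if elevated_lactate:
        score += 2  # hypothetical weight
    if renal_dysfunction:
        score += 2  # hypothetical weight
    if postcardiotomy_shock:
        score += 1  # hypothetical weight
    return score
```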

  8. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to modeling geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  9. Cost effective management of space venture risks

    NASA Technical Reports Server (NTRS)

    Giuntini, Ronald E.; Storm, Richard E.

    1986-01-01

    The development of a model for the cost-effective management of space venture risks is discussed. The risk assessment and control program of insurance companies is examined. A simplified system development cycle which consists of a conceptual design phase, a preliminary design phase, a final design phase, a construction phase, and a system operations and maintenance phase is described. The model incorporates insurance safety risk methods and reliability engineering, and testing practices used in the development of large aerospace and defense systems.

  10. Risk-Screening Environmental Indicators (RSEI)

    EPA Pesticide Factsheets

    EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to be compared only to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
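    The factsheet's point that RSEI scores are only meaningful relative to other RSEI scores can be illustrated with a toy ranking. This is not the real RSEI model (which includes detailed fate-and-transport modeling); it only mimics the multiplicative structure of the factors the factsheet lists, with names and inputs of our own invention:

```python
def screening_score(pounds_released, toxicity_weight, exposed_population):
    """Toy multiplicative screening score: release amount x relative
    toxicity x potentially exposed population. Unitless; only useful
    for comparison against other scores from the same function."""
    return pounds_released * toxicity_weight * exposed_population

def rank_facilities(facilities):
    """facilities: {name: (pounds, toxicity_weight, population)}.
    Returns names ordered from highest to lowest screening score."""
    return sorted(facilities,
                  key=lambda name: screening_score(*facilities[name]),
                  reverse=True)
```

    In this sketch a small release of a highly toxic chemical can outrank a much larger release of a mildly toxic one, which is the kind of prioritization signal a screening-level model is meant to surface.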

  11. Failure mode and effects analysis: a comparison of two common risk prioritisation methods.

    PubMed

    McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L

    2016-05-01

    Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method, and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures determined to be critical by the traditional method (RPN≥300), and then calculating the per cent congruence with those failures designated critical by the simplified methods (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low risk', 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CI (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not result in the same degree of discrimination in the ranking of failures offered by the traditional method. Published by the BMJ Publishing Group Limited. 
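    The traditional scoring this record compares against multiplies three ordinal 1-10 ratings into a risk priority number (RPN) and two of them into a criticality index; the congruence statistic is the share of RPN ≥ 300 failures that the simplified method also labels high risk. A minimal sketch (the 1-10 scales and the ≥300 cut-point follow the abstract; function names are ours):

```python
def rpn(severity, occurrence, detection):
    """Traditional FMEA risk priority number: each factor rated 1-10,
    so the RPN ranges from 1 to 1000."""
    return severity * occurrence * detection

def criticality(severity, occurrence):
    """Criticality index: severity x occurrence (detection excluded)."""
    return severity * occurrence

def percent_congruence(failures):
    """failures: list of (rpn_value, simplified_label) pairs.
    Returns the percentage of traditionally critical failures
    (RPN >= 300) that the simplified method also labels 'high'."""
    critical = [f for f in failures if f[0] >= 300]
    if not critical:
        return 100.0
    agree = sum(1 for f in critical if f[1] == "high")
    return 100.0 * agree / len(critical)
```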

  12. CYP2E1 MEDIATED EXTRAHEPATIC METABOLISM IN PBPK MODELING OF LIPOPHILIC VOLATILE ORGANIC COMPOUNDS

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) models increasingly are available for environmental chemicals and applied in risk assessments. Often a simplified representation of a real biological system is used in order to reduce uncertainties in the PBPK predictions caused by unc...

  13. Comprehensive Numerical Simulation of Filling and Solidification of Steel Ingots

    PubMed Central

    Pola, Annalisa; Gelfi, Marcello; La Vecchia, Giovina Marina

    2016-01-01

    In this paper, a complete three-dimensional numerical model of mold filling and solidification of steel ingots is presented. The risk of powder entrapment and defect formation during filling is analyzed in detail, demonstrating the importance of using a comprehensive geometry, with trumpet and runner, compared to conventional simplified models. Using a case study, it was shown that the simplified model significantly underestimates the sources of defects, reducing the utility of simulations in supporting mold and process design. An experimental test was also performed on an instrumented mold, and the measurements were compared to the calculation results. The good agreement between calculation and trial validated the simulation. PMID:28773890

  14. Mortality Probability Model III and Simplified Acute Physiology Score II

    PubMed Central

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

    Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210

  15. A Risk Score Model for Evaluation and Management of Patients with Thyroid Nodules.

    PubMed

    Zhang, Yongwen; Meng, Fanrong; Hong, Lianqing; Chu, Lanfang

    2018-06-12

    The study aimed to establish a simplified and practical tool for analyzing thyroid nodules. A novel risk score model was designed; risk factors including patient history, patient characteristics, physical examination, symptoms of compression, thyroid function, and ultrasonography (US) of the thyroid and cervical lymph nodes were evaluated and classified into high, intermediate, and low risk factors. A total of 243 thyroid nodules in 162 patients were assessed with the risk score system and the Thyroid Imaging-Reporting and Data System (TI-RADS). The diagnostic performance of the risk score system and TI-RADS was compared. The accuracy in the diagnosis of thyroid nodules was 89.3% for the risk score system and 74.9% for TI-RADS, respectively. The specificity, accuracy, and positive predictive value (PPV) of the risk score system were significantly higher than those of the TI-RADS system (χ2=26.287, 17.151, 11.983; p<0.05); statistically significant differences were not observed in the sensitivity and negative predictive value (NPV) between the risk score system and TI-RADS (χ2=1.276, 0.290; p>0.05). The area under the curve (AUC) for the risk score diagnosis system was 0.963 (standard error 0.014, 95% confidence interval [CI]=0.934-0.991); the AUC for the TI-RADS diagnosis system was 0.912 (standard error 0.021, 95% CI=0.871-0.953); the AUC for the risk score system was significantly different from that of TI-RADS (Z=2.02; p<0.05). The risk score model is a reliable, simplified, and cost-effective diagnostic tool for use in the diagnosis of thyroid cancer. The higher the score, the higher the risk of malignancy. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Experimental investigation of the flow in a simplified model of water lubricated axial thrust bearing

    NASA Astrophysics Data System (ADS)

    Kirschner, O.; Ruprecht, A.; Riedelbauch, S.

    2014-03-01

    In hydropower plants, the axial thrust bearing takes up the hydraulic axial thrust of the runner and, in the case of vertical shafts, the entire weight of all rotating masses. The use of water-lubricated bearings can eliminate the risk of oil leakage contaminating the environment. A complex flow is generated by the smaller film thickness that results from the lower viscosity of water compared with oil. Measurements on a simplified hydrostatic axial thrust bearing model were performed to validate CFD analysis of water-lubricated bearings. In this simplified model, fixed pads are implemented and the width of the gap was enlarged to create a higher spatial resolution for the measurements. Most parts of the model were manufactured from acrylic glass to provide optical access for measurement with PIV. The focus of these measurements is on the flow within the space between two pads. In addition to the PIV measurements, the pressure on the wall of the rotating disk is captured by pressure transducers. The model bearing measurement results are presented for varied operating conditions.

  17. Systemic risk in banking ecosystems.

    PubMed

    Haldane, Andrew G; May, Robert M

    2011-01-20

    In the run-up to the recent financial crisis, an increasingly elaborate set of financial instruments emerged, intended to optimize returns to individual institutions with seemingly minimal risk. Essentially no attention was given to their possible effects on the stability of the system as a whole. Drawing analogies with the dynamics of ecological food webs and with networks within which infectious diseases spread, we explore the interplay between complexity and stability in deliberately simplified models of financial networks. We suggest some policy lessons that can be drawn from such models, with the explicit aim of minimizing systemic risk.

  18. Anthropometric measurements of general and central obesity and the prediction of cardiovascular disease risk in women: a cross-sectional study

    PubMed Central

    Goh, Louise G H; Dhaliwal, Satvinder S; Welborn, Timothy A; Lee, Andy H; Della, Phillip R

    2014-01-01

    Objectives It is important to ascertain which anthropometric measurements of obesity, general or central, are better predictors of cardiovascular disease (CVD) risk in women. 10-year CVD risk was calculated from the Framingham risk score model, SCORE risk chart for high-risk regions, general CVD and simplified general CVD risk score models. Increase in CVD risk associated with 1 SD increment in each anthropometric measurement above the mean was calculated, and the diagnostic utility of obesity measures in identifying participants with increased likelihood of being above the treatment threshold was assessed. Design Cross-sectional data from the National Heart Foundation Risk Factor Prevalence Study. Setting Population-based survey in Australia. Participants 4487 women aged 20–69 years without heart disease, diabetes or stroke. Outcome measures Anthropometric obesity measures that demonstrated the greatest increase in CVD risk as a result of incremental change, 1 SD above the mean, and obesity measures that had the greatest diagnostic utility in identifying participants above the respective treatment thresholds of various risk score models. Results Waist circumference (WC), waist-to-hip ratio (WHR) and waist-to-stature ratio had larger effects on increased CVD risk compared with body mass index (BMI). These central obesity measures also had higher sensitivity and specificity in identifying women above and below the 20% treatment threshold than BMI. Central obesity measures also recorded better correlations with CVD risk compared with general obesity measures. WC and WHR were found to be significant and independent predictors of CVD risk, as indicated by the high area under the receiver operating characteristic curves (>0.76), after controlling for BMI in the simplified general CVD risk score model. Conclusions Central obesity measures are better predictors of CVD risk compared with general obesity measures in women. 
It is equally important to maintain a healthy weight and to prevent central obesity concurrently. PMID:24503301

  19. Anthropometric measurements of general and central obesity and the prediction of cardiovascular disease risk in women: a cross-sectional study.

    PubMed

    Goh, Louise G H; Dhaliwal, Satvinder S; Welborn, Timothy A; Lee, Andy H; Della, Phillip R

    2014-02-06

    It is important to ascertain which anthropometric measurements of obesity, general or central, are better predictors of cardiovascular disease (CVD) risk in women. 10-year CVD risk was calculated from the Framingham risk score model, SCORE risk chart for high-risk regions, general CVD and simplified general CVD risk score models. Increase in CVD risk associated with 1 SD increment in each anthropometric measurement above the mean was calculated, and the diagnostic utility of obesity measures in identifying participants with increased likelihood of being above the treatment threshold was assessed. Cross-sectional data from the National Heart Foundation Risk Factor Prevalence Study. Population-based survey in Australia. 4487 women aged 20-69 years without heart disease, diabetes or stroke. Anthropometric obesity measures that demonstrated the greatest increase in CVD risk as a result of incremental change, 1 SD above the mean, and obesity measures that had the greatest diagnostic utility in identifying participants above the respective treatment thresholds of various risk score models. Waist circumference (WC), waist-to-hip ratio (WHR) and waist-to-stature ratio had larger effects on increased CVD risk compared with body mass index (BMI). These central obesity measures also had higher sensitivity and specificity in identifying women above and below the 20% treatment threshold than BMI. Central obesity measures also recorded better correlations with CVD risk compared with general obesity measures. WC and WHR were found to be significant and independent predictors of CVD risk, as indicated by the high area under the receiver operating characteristic curves (>0.76), after controlling for BMI in the simplified general CVD risk score model. Central obesity measures are better predictors of CVD risk compared with general obesity measures in women. It is equally important to maintain a healthy weight and to prevent central obesity concurrently.

  20. Chronic lymphocytic leukemia: A prognostic model comprising only two biomarkers (IGHV mutational status and FISH cytogenetics) separates patients with different outcome and simplifies the CLL-IPI.

    PubMed

    Delgado, Julio; Doubek, Michael; Baumann, Tycho; Kotaskova, Jana; Molica, Stefano; Mozas, Pablo; Rivas-Delgado, Alfredo; Morabito, Fortunato; Pospisilova, Sarka; Montserrat, Emili

    2017-04-01

    Rai and Binet staging systems are important to predict the outcome of patients with chronic lymphocytic leukemia (CLL) but do not reflect the biologic diversity of the disease nor predict response to therapy, which ultimately shape patients' outcome. We devised a biomarkers-only CLL prognostic system based on the two most important prognostic parameters in CLL (i.e., IGHV mutational status and fluorescence in situ hybridization [FISH] cytogenetics), separating three different risk groups: (1) low-risk (mutated IGHV + no adverse FISH cytogenetics [del(17p), del(11q)]); (2) intermediate-risk (either unmutated IGHV or adverse FISH cytogenetics) and (3) high-risk (unmutated IGHV + adverse FISH cytogenetics). In 524 unselected subjects with CLL, the 10-year overall survival was 82% (95% CI 76%-88%), 52% (45%-62%), and 27% (17%-42%) for the low-, intermediate-, and high-risk groups, respectively. Patients with low-risk comprised around 50% of the series and had a life expectancy comparable to the general population. The prognostic model was fully validated in two independent cohorts, including 417 patients representative of general CLL population and 337 patients with Binet stage A CLL. The model had a similar discriminatory value as the CLL-IPI. Moreover, it applied to all patients with CLL independently of age, and separated patients with different risk within Rai or Binet clinical stages. The biomarkers-only CLL prognostic system presented here simplifies the CLL-IPI and could be useful in daily practice and to stratify patients in clinical trials. © 2017 Wiley Periodicals, Inc.
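    The two-biomarker grouping described above is a simple boolean rule; a sketch using exactly the definitions in the abstract (mutated IGHV with no adverse FISH cytogenetics is low risk, exactly one adverse feature is intermediate, both is high; the function name is ours):

```python
def cll_risk_group(ighv_mutated, adverse_fish):
    """Biomarkers-only CLL risk group from the two parameters in the
    abstract: IGHV mutational status and the presence of adverse FISH
    cytogenetics (del(17p) or del(11q))."""
    if ighv_mutated and not adverse_fish:
        return "low"           # 10-year OS 82% in the study cohort
    if not ighv_mutated and adverse_fish:
        return "high"          # 10-year OS 27%
    return "intermediate"      # 10-year OS 52%
```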

  1. Validated Risk Score for Predicting 6-Month Mortality in Infective Endocarditis.

    PubMed

    Park, Lawrence P; Chu, Vivian H; Peterson, Gail; Skoutelis, Athanasios; Lejko-Zupa, Tatjana; Bouza, Emilio; Tattevin, Pierre; Habib, Gilbert; Tan, Ren; Gonzalez, Javier; Altclas, Javier; Edathodu, Jameela; Fortes, Claudio Querido; Siciliano, Rinaldo Focaccia; Pachirat, Orathai; Kanj, Souha; Wang, Andrew

    2016-04-18

    Host factors and complications have been associated with higher mortality in infective endocarditis (IE). We sought to develop and validate a model of clinical characteristics to predict 6-month mortality in IE. Using a large multinational prospective registry of definite IE (International Collaboration on Endocarditis [ICE]-Prospective Cohort Study [PCS], 2000-2006, n=4049), a model to predict 6-month survival was developed by Cox proportional hazards modeling with inverse probability weighting for surgery treatment and was internally validated by the bootstrapping method. This model was externally validated in an independent prospective registry (ICE-PLUS, 2008-2012, n=1197). The 6-month mortality was 971 of 4049 (24.0%) in the ICE-PCS cohort and 342 of 1197 (28.6%) in the ICE-PLUS cohort. Surgery during the index hospitalization was performed in 48.1% and 54.0% of the cohorts, respectively. In the derivation model, variables related to host factors (age, dialysis), IE characteristics (prosthetic or nosocomial IE, causative organism, left-sided valve vegetation), and IE complications (severe heart failure, stroke, paravalvular complication, and persistent bacteremia) were independently associated with 6-month mortality, and surgery was associated with a lower risk of mortality (Harrell's C statistic 0.715). In the validation model, these variables had similar hazard ratios (Harrell's C statistic 0.682), with a similar, independent benefit of surgery (hazard ratio 0.74, 95% CI 0.62-0.89). A simplified risk model was developed by weight adjustment of these variables. Six-month mortality after IE is ≈25% and is predicted by host factors, IE characteristics, and IE complications. Surgery during the index hospitalization is associated with lower mortality but is performed less frequently in the highest risk patients. A simplified risk model may be used to identify specific risk subgroups in IE. © 2016 The Authors. 
Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  2. Multimedia-modeling integration development environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelton, Mitchell A.; Hoopes, Bonnie L.

    2002-09-02

    There are many framework systems available; however, the purpose of the framework presented here is to capitalize on the successes of the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) and Multi-media Multi-pathway Multi-receptor Risk Assessment (3MRA) methodology as applied to the Hazardous Waste Identification Rule (HWIR), while focusing on the development of software tools to simplify the module developer's effort of integrating a module into the framework.

  3. Controlling Inventory: Real-World Mathematical Modeling

    ERIC Educational Resources Information Center

    Edwards, Thomas G.; Özgün-Koca, S. Asli; Chelst, Kenneth R.

    2013-01-01

    Amazon, Walmart, and other large-scale retailers owe their success partly to efficient inventory management. For such firms, holding too little inventory risks losing sales, whereas holding idle inventory wastes money. Therefore profits hinge on the inventory level chosen. In this activity, students investigate a simplified inventory-control…
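    The stocking trade-off the activity describes (lost sales from too little inventory vs. money tied up in idle stock) is the classic newsvendor problem; a hedged sketch with hypothetical demand and costs, not the activity's actual numbers:

```python
from statistics import NormalDist

# Hypothetical inputs: normally distributed demand, a per-unit cost of
# being short (lost profit) and of being over (holding/waste).
mean_demand, sd_demand = 100.0, 20.0
underage_cost, overage_cost = 5.0, 2.0

# The profit-maximising order quantity places the critical fractile
# Cu / (Cu + Co) of the demand distribution below the stock level.
fractile = underage_cost / (underage_cost + overage_cost)
q_star = NormalDist(mean_demand, sd_demand).inv_cdf(fractile)
print(round(q_star, 1))
```

Because shortage costs more per unit than excess here, the optimal order sits above mean demand.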

  4. Risk stratification in middle-aged patients with congestive heart failure: prospective comparison of the Heart Failure Survival Score (HFSS) and a simplified two-variable model.

    PubMed

    Zugck, C; Krüger, C; Kell, R; Körber, S; Schellberg, D; Kübler, W; Haass, M

    2001-10-01

    The performance of a US-American scoring system (Heart Failure Survival Score, HFSS) was prospectively evaluated in a sample of ambulatory patients with congestive heart failure (CHF). Additionally, it was investigated whether the HFSS might be simplified by assessment of the distance ambulated during a 6-min walk test (6'WT) instead of determination of peak oxygen uptake (peak VO2). In 208 middle-aged CHF patients (age 54±10 years, 82% male, NYHA class 2.3±0.7; follow-up 28±14 months) the seven variables of the HFSS: CHF aetiology; heart rate; mean arterial pressure; serum sodium concentration; intraventricular conduction time; left ventricular ejection fraction (LVEF); and peak VO2, were determined. Additionally, a 6'WT was performed. The HFSS allowed discrimination between patients at low, medium and high risk, with mortality rates of 16, 39 and 50%, respectively. However, the prognostic power of the HFSS was not superior to a two-variable model consisting only of LVEF and peak VO2. The areas under the receiver operating curves (AUC) for prediction of 1-year survival were even higher for the two-variable model (0.84 vs. 0.74, P<0.05). Replacing peak VO2 with 6'WT resulted in a similar AUC (0.83). The HFSS continued to predict survival when applied to this patient sample. However, the HFSS was inferior to a two-variable model containing only LVEF and either peak VO2 or 6'WT. As the 6'WT requires no sophisticated equipment, a simplified two-variable model containing only LVEF and 6'WT may be more widely applicable, and is therefore recommended.

  5. Fractal Risk Assessment of ISS Propulsion Module in Meteoroid and Orbital Debris Environments

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2001-01-01

    A unique and innovative risk assessment of the International Space Station (ISS) Propulsion Module is conducted using fractal modeling of the Module's response to the meteoroid and orbital debris environments. Both the environment models and structural failure modes due to the resultant hypervelocity impact phenomenology, as well as Module geometry, are investigated for fractal applicability. The fractal risk assessment methodology could produce a greatly simplified alternative to current methodologies, such as BUMPER analyses, while maintaining or increasing the number of complex scenarios that can be assessed. As a minimum, this innovative fractal approach will provide an independent assessment of existing methodologies in a unique way.

  6. Can the FIGO 2000 scoring system for gestational trophoblastic neoplasia be simplified? A new retrospective analysis from a nationwide dataset.

    PubMed

    Eysbouts, Y K; Ottevanger, P B; Massuger, L F A G; IntHout, J; Short, D; Harvey, R; Kaur, B; Sebire, N J; Sarwar, N; Sweep, F C G J; Seckl, M J

    2017-08-01

    Worldwide introduction of the International Federation of Gynaecology and Obstetrics (FIGO) 2000 scoring system has provided an effective means to stratify patients with gestational trophoblastic neoplasia to single- or multi-agent chemotherapy. However, the system is quite elaborate, with an extensive set of risk factors. In this study, we re-evaluate all prognostic risk factors involved in the FIGO 2000 scoring system and examine whether simplification is feasible. Between January 2003 and December 2012, 813 patients diagnosed with gestational trophoblastic neoplasia were identified at the Trophoblastic Disease Centre in London and scored using the FIGO 2000. Multivariable analysis and stepwise logistic regression were carried out to evaluate whether the FIGO 2000 scoring system could be simplified. Of the eight FIGO risk factors, only pre-treatment serum human chorionic gonadotropin (hCG) levels exceeding 10 000 IU/l (OR = 5.0; 95% CI 2.5-10.4) and 100 000 IU/l (OR = 14.3; 95% CI 4.7-44.1), interval exceeding 7 months since antecedent pregnancy (OR = 4.1; 95% CI 1.0-16.2), and tumor size of over 5 cm (OR = 2.2; 95% CI 1.3-3.6) were identified as independently predictive for single-agent resistance. In addition, increased risk was apparent for antecedent term pregnancy (OR = 3.4; 95% CI 0.9-12.7) and the presence of five or more metastases (OR = 3.5; 95% CI 0.4-30.4), but patient numbers in these categories were relatively small. Stepwise logistic regression identified a simplified risk scoring model comprising age, pretreatment serum hCG, number of metastases, antecedent pregnancy, and interval, but omitting tumor size, previous failed chemotherapy, and site of metastases. With this model, only 1 of 725 patients was classified differently from the FIGO 2000 system. Our simplified alternative using only five of the FIGO prognostic factors appears to be an accurate system for discriminating patients requiring single-agent as opposed to multi-agent chemotherapy. 
Further work is urgently needed to validate these findings. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Development of a screening tool using electronic health records for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose detection in the Slovenian population.

    PubMed

    Štiglic, G; Kocbek, P; Cilar, L; Fijačko, N; Stožer, A; Zaletel, J; Sheikh, A; Povalej Bržan, P

    2018-05-01

    To develop and validate a simplified screening test for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose for the Slovenian population (SloRisk) to be used in the general population. Data on 11 391 people were collected from the electronic health records of comprehensive medical examinations in five Slovenian healthcare centres. Fasting plasma glucose as well as information related to the Finnish Diabetes Risk Score questionnaire, FINDRISC, were collected for 2073 people to build predictive models. Bootstrapping-based evaluation was used to estimate the area under the receiver-operating characteristic curve performance metric of two proposed logistic regression models as well as the Finnish Diabetes Risk Score model both at recommended and at alternative cut-off values. The final model contained five questions for undiagnosed Type 2 diabetes prediction and achieved an area under the receiver-operating characteristic curve of 0.851 (95% CI 0.850-0.853). The impaired fasting glucose prediction model included six questions and achieved an area under the receiver-operating characteristic curve of 0.840 (95% CI 0.839-0.840). There were four questions that were included in both models (age, sex, waist circumference and blood sugar history), with physical activity selected only for undiagnosed Type 2 diabetes and questions on family history and hypertension drug use selected only for the impaired fasting glucose prediction model. This study proposes two simplified models based on FINDRISC questions for screening of undiagnosed Type 2 diabetes and impaired fasting glucose in the Slovenian population. A significant improvement in performance was achieved compared with the original FINDRISC questionnaire. Both models include waist circumference instead of BMI. © 2018 Diabetes UK.
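    The five-question screening model is a logistic regression over the listed items; a sketch with entirely hypothetical coefficients (the published SloRisk weights are not reproduced in the abstract):

```python
import math

# Hypothetical coefficients for a five-question logistic screening model
# (age, sex, waist circumference, blood-sugar history, physical activity).
COEF = {"intercept": -7.0, "age_per_year": 0.05, "male": 0.4,
        "waist_per_cm": 0.04, "glucose_history": 1.2, "inactive": 0.3}

def t2dm_risk(age, male, waist_cm, glucose_history, inactive):
    """Predicted probability of undiagnosed Type 2 diabetes (illustrative)."""
    z = (COEF["intercept"] + COEF["age_per_year"] * age + COEF["male"] * male
         + COEF["waist_per_cm"] * waist_cm
         + COEF["glucose_history"] * glucose_history
         + COEF["inactive"] * inactive)
    return 1.0 / (1.0 + math.exp(-z))

print(round(t2dm_risk(60, 1, 102, 1, 1), 3))
```

A screening cut-off on this probability would then trade sensitivity against specificity, which is what the reported ROC analysis evaluates.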

  8. Development and Validation of a Disease Severity Scoring Model for Pediatric Sepsis.

    PubMed

    Hu, Li; Zhu, Yimin; Chen, Mengshi; Li, Xun; Lu, Xiulan; Liang, Ying; Tan, Hongzhuan

    2016-07-01

    Multiple severity scoring systems have been devised and evaluated in adult sepsis, but a simplified scoring model for pediatric sepsis has not yet been developed. This study aimed to develop and validate a new scoring model to stratify the severity of pediatric sepsis, thus assisting the treatment of sepsis in children. Data from 634 consecutive patients who presented with sepsis at the Children's Hospital of Hunan Province in China in 2011-2013 were analyzed, with 476 patients placed in the training group and 158 patients in the validation group. Stepwise discriminant analysis was used to develop the accurate discriminant model. A simplified scoring model was generated using weightings defined by the discriminant coefficients. The discriminant ability of the model was tested by receiver operating characteristic (ROC) curves. The discriminant analysis showed that prothrombin time, D-dimer, total bilirubin, serum total protein, uric acid, PaO2/FiO2 ratio, and myoglobin were associated with severity of sepsis. These seven variables were assigned values of 4, 3, 3, 4, 3, 3, and 3, respectively, based on the standardized discriminant coefficients. Patients with higher scores had a higher risk of severe sepsis. The areas under the ROC curves (AROC) were 0.836 for the accurate discriminant model and 0.825 for the simplified scoring model in the validation group. The proposed disease severity scoring model for pediatric sepsis showed adequate discriminatory capacity and sufficient accuracy, which has important clinical significance in evaluating the severity of pediatric sepsis and predicting its progression.
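    The simplified scoring model can be illustrated with the reported weights (4, 3, 3, 4, 3, 3, 3); the abstract does not state which value of each variable counts as abnormal, so the 0/1 flags below are hypothetical:

```python
# Weights per the abstract: prothrombin time, D-dimer, total bilirubin,
# serum total protein, uric acid, PaO2/FiO2 ratio, myoglobin.
WEIGHTS = {"prothrombin_time": 4, "d_dimer": 3, "total_bilirubin": 3,
           "serum_total_protein": 4, "uric_acid": 3,
           "pao2_fio2": 3, "myoglobin": 3}

def sepsis_score(abnormal):
    """Sum the weights of the variables flagged abnormal (0/1 each);
    higher scores indicate a higher risk of severe sepsis."""
    return sum(WEIGHTS[name] * int(flag) for name, flag in abnormal.items())

flags = dict.fromkeys(WEIGHTS, 0)
flags["prothrombin_time"] = 1   # hypothetical abnormal results
flags["pao2_fio2"] = 1
print(sepsis_score(flags))  # 4 + 3 = 7
```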

  9. Validation of simplified centre of mass models during gait in individuals with chronic stroke.

    PubMed

    Huntley, Andrew H; Schinkel-Ivy, Alison; Aqui, Anthony; Mansfield, Avril

    2017-10-01

    The feasibility of using a multiple-segment (full-body) kinematic model in clinical gait assessment is limited by obstacles such as time and cost constraints. While simplified gait models have been explored in healthy individuals, no such work to date has been conducted in a stroke population. The aim of this study was to quantify the errors of simplified kinematic models for chronic stroke gait assessment. Sixteen individuals with chronic stroke (>6 months), outfitted with full-body kinematic markers, performed a series of gait trials. Three centre of mass models were computed: (i) a 13-segment whole-body model, (ii) a 3-segment head-trunk-pelvis model, and (iii) a 1-segment pelvis model. Root mean squared error differences were compared between models, along with correlations to measures of stroke severity. Error differences revealed that, while both simplified models were similar in the mediolateral direction, the head-trunk-pelvis model had less error in the anteroposterior direction and the pelvis model had less error in the vertical direction. There was some evidence that the head-trunk-pelvis model error is influenced in the mediolateral direction for individuals with more severe strokes, as a few significant correlations were observed between the head-trunk-pelvis model and measures of stroke severity. These findings demonstrate the utility and robustness of the pelvis model for clinical gait assessment in individuals with chronic stroke. Low error in the mediolateral and vertical directions is especially important when considering potential stability analyses during gait for this population, as lateral stability has been previously linked to fall risk. Copyright © 2017 Elsevier Ltd. All rights reserved.
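    Each of the models compared above is a mass-weighted average of segment centres of mass; a pure-Python sketch with made-up segment positions and mass fractions (values for illustration only, not the study's anthropometric tables):

```python
def com(positions, mass_fractions):
    """Model centre of mass: mass-weighted mean of segment COM positions."""
    total = sum(mass_fractions)
    return [sum(f * p[i] for f, p in zip(mass_fractions, positions)) / total
            for i in range(3)]

def rmse(a, b):
    """Root mean squared error between two 3D points."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Hypothetical single frame: pelvis, trunk, head segment COMs (metres)
# with approximate segment mass fractions.
segments = [[0.00, 0.95, 0.00], [0.02, 1.25, 0.00], [0.03, 1.60, 0.00]]
fractions = [0.142, 0.497, 0.081]

head_trunk_pelvis = com(segments, fractions)  # 3-segment model
pelvis_only = segments[0]                     # 1-segment model
print(round(rmse(head_trunk_pelvis, pelvis_only), 3))
```

In the study, such per-frame errors are computed against the 13-segment whole-body model across entire gait trials.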

  10. Potential of 3D City Models to assess flood vulnerability

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Bochow, Mathias; Schüttig, Martin; Nagel, Claus; Ross, Lutz; Kreibich, Heidi

    2016-04-01

    Vulnerability, as the product of exposure and susceptibility, is a key factor of the flood risk equation. Furthermore, the estimation of flood loss is very sensitive to the choice of the vulnerability model. Still, in contrast to elaborate hazard simulations, vulnerability is often considered in a simplified manner concerning the spatial resolution and geo-location of exposed objects as well as the susceptibility of these objects at risk. Usually, area specific potential flood loss is quantified on the level of aggregated land-use classes, and both hazard intensity and resistance characteristics of affected objects are represented in highly simplified terms. We investigate the potential of 3D City Models and spatial features derived from remote sensing data to improve the differentiation of vulnerability in flood risk assessment. 3D City Models are based on CityGML, an application scheme of the Geography Markup Language (GML), which represents the 3D geometry, 3D topology, semantics and appearance of objects on different levels of detail. As such, 3D City Models offer detailed spatial information which is useful to describe the exposure and to characterize the susceptibility of residential buildings at risk. This information is further consolidated with spatial features of the building stock derived from remote sensing data. Using this database a spatially detailed flood vulnerability model is developed by means of data-mining. Empirical flood damage data are used to derive and to validate flood susceptibility models for individual objects. We present first results from a prototype application in the city of Dresden, Germany. The vulnerability modeling based on 3D City Models and remote sensing data is compared i) to the generally accepted good engineering practice based on area specific loss potential and ii) to a highly detailed representation of flood vulnerability based on a building typology using urban structure types. 
Comparisons are drawn in terms of affected building area and estimated loss for a selection of inundation scenarios.

  11. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low Earth orbit objects relies mainly on ground-based radar and, owing to the limitations of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for embattling optimization is to simulate detection for all possible stations against cataloged data, make a comprehensive comparative analysis of the various simulation results with a combinatorial method, and then select an optimal result as the station layout scheme. This method is time consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and the problem cannot be solved with the traditional method, and no better solution has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of a ground-based radar is simplified and a space coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space object orbital motion. After these two simplifications, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified model. 
    In addition, the detection areas of a ground-based radar network can easily be computed with the simplified model, and the embattling of the network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
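    The coverage-projection step can be sketched with spherical-Earth geometry: for a radar with minimum elevation angle e and maximum range R, the ground-projected radius of its coverage at orbit altitude h follows from the station-target-Earth-centre triangle (a hedged illustration, not the authors' model):

```python
import math

RE = 6371.0  # mean Earth radius, km

def coverage_radius_km(h_km, el_min_deg, max_range_km):
    """Ground-projected radius of the shell at altitude h_km visible to a
    radar above elevation el_min_deg and within max_range_km."""
    e = math.radians(el_min_deg)
    # Earth-central angle to where the minimum-elevation line of sight
    # meets the shell of radius RE + h (law of sines in the triangle).
    lam = math.pi / 2 - e - math.asin(RE * math.cos(e) / (RE + h_km))
    # Slant range to that point; if it exceeds the radar range, shrink
    # the central angle so the slant range equals max_range_km.
    slant = (RE + h_km) * math.sin(lam) / math.cos(e)
    if slant > max_range_km:
        # law of cosines: R^2 = RE^2 + (RE+h)^2 - 2*RE*(RE+h)*cos(lam)
        cos_lam = ((RE**2 + (RE + h_km)**2 - max_range_km**2)
                   / (2 * RE * (RE + h_km)))
        lam = math.acos(cos_lam)
    return RE * lam

# Coverage at 500 km altitude, 10 deg minimum elevation: elevation-limited
# vs. range-limited (hypothetical radar ranges).
print(round(coverage_radius_km(500, 10, 5000)), round(coverage_radius_km(500, 10, 1000)))
```

Intersecting many such circular footprints with propagated orbits is the kind of computation the simplified model makes cheap.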

  12. Investigating outliers to improve conceptual models of bedrock aquifers

    NASA Astrophysics Data System (ADS)

    Worthington, Stephen R. H.

    2018-06-01

    Numerical models play a prominent role in hydrogeology, with simplifying assumptions being inevitable when implementing these models. However, there is a risk of oversimplification, where important processes become neglected. Such processes may be associated with outliers, and consideration of outliers can lead to an improved scientific understanding of bedrock aquifers. Using rigorous logic to investigate outliers can help to explain fundamental scientific questions such as why there are large variations in permeability between different bedrock lithologies.

  13. Making the Case for a Model-Based Definition of Engineering Materials (Postprint)

    DTIC Science & Technology

    2017-09-12

    MBE relies on digital representations, or a model-based definition (MBD), to define a product throughout design, manufacturing and sustainment...discovery through development, scale-up, product design and qualification, manufacture and sustainment have changed little over the past decades. This...testing data provided a certifiable material definition, so as to minimize risk and simplify procurement of materials during the design, manufacture, and

  14. A cluster-randomized controlled trial to evaluate the effects of a simplified cardiovascular management program in Tibet, China and Haryana, India: study design and rationale.

    PubMed

    Ajay, Vamadevan S; Tian, Maoyi; Chen, Hao; Wu, Yangfeng; Li, Xian; Dunzhu, Danzeng; Ali, Mohammed K; Tandon, Nikhil; Krishnan, Anand; Prabhakaran, Dorairaj; Yan, Lijing L

    2014-09-06

    In resource-poor areas of China and India, the cardiovascular disease burden is high, but availability of and access to quality healthcare is limited. Establishing a management scheme that utilizes the local infrastructure and builds healthcare capacity is essential for cardiovascular disease prevention and management. The study aims to develop, implement, and evaluate the feasibility and effectiveness of a simplified, evidence-based cardiovascular management program delivered by community healthcare workers in resource-constrained areas in Tibet, China and Haryana, India. This yearlong cluster-randomized controlled trial will be conducted in 20 villages in Tibet and 20 villages in Haryana. Randomization of villages to usual care or intervention will be stratified by country. High cardiovascular disease risk individuals (aged 40 years or older, with a history of heart disease, stroke, or diabetes, or a measured systolic blood pressure of 160 mmHg or higher) will be screened at baseline. Community health workers in the intervention villages will be trained to manage and follow up high-risk patients on a monthly basis following a simplified '2+2' intervention model involving two lifestyle recommendations and the appropriate prescription of two medications. A customized electronic decision support system based on the intervention strategy will be developed to assist the community health workers with patient management. Baseline and follow-up surveys will be conducted in a standardized fashion in all villages. The primary outcome will be the net between-group difference in the proportion of high-risk patients taking antihypertensive medication pre- and post-intervention. Secondary outcomes will include the proportion of patients taking aspirin and changes in blood pressure. Process and economic evaluations will also be conducted. 
To our knowledge, this will be the first study to evaluate the effect of a simplified management program, delivered by community health workers with the help of an electronic decision support system, on improving the health of patients at high cardiovascular disease risk. If effective, this intervention strategy can serve as a model that can be implemented, where applicable, in rural China, India, and other resource-constrained areas. The trial was registered in the clinicaltrials.gov database on 30 December 2011; the registration number is NCT01503814.
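    The primary outcome described in the protocol, the net between-group change in medication coverage, is a simple difference-in-differences; a sketch with hypothetical proportions (the trial's results are not reported in this record):

```python
def net_difference(pre_int, post_int, pre_ctrl, post_ctrl):
    """Net between-group difference in the proportion of high-risk
    patients on antihypertensive medication: the intervention group's
    pre-to-post change minus the control group's change."""
    return (post_int - pre_int) - (post_ctrl - pre_ctrl)

# Hypothetical proportions for illustration only.
print(round(net_difference(pre_int=0.20, post_int=0.45,
                           pre_ctrl=0.22, post_ctrl=0.25), 2))  # 0.22
```

Subtracting the control group's change removes secular trends common to both arms.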

  15. Improving comprehension and recall of information for an HIV vaccine trial among women at risk for HIV: reading level simplification and inclusion of pictures to illustrate key concepts.

    PubMed

    Murphy, D A; O'Keefe, Z H; Kaufman, A H

    1999-10-01

    A simplified version of the prototype HIV vaccine material was developed through (a) reducing reading grade level, (b) restructuring of the organization and categorization of the material, (c) adding pictures designed to emphasize key concepts, and (d) obtaining feedback on the simplified version through focus groups with the target population. Low-income women at risk for HIV (N = 141) recruited from a primary care clinic were randomly assigned to be presented the standard or the simplified version. There were no significant differences between the groups in terms of education or Vocabulary, Block Design, and Passage Comprehension scores. Women who received the simplified version had significantly higher comprehension scores immediately following presentation of the material than did women who received the standard version and were also significantly more likely to recall study benefits and risks. These findings were maintained at 3-month follow-up. Implications for informed consent are discussed.

  16. Managing Disease Risks from Trade: Strategic Behavior with Many Choices and Price Effects.

    PubMed

    Chitchumnong, Piyayut; Horan, Richard D

    2018-03-16

    An individual's infectious disease risks, and hence the individual's incentives for risk mitigation, may be influenced by others' risk management choices. If so, then there will be strategic interactions among individuals, whereby each makes his or her own risk management decisions based, at least in part, on the expected decisions of others. Prior work has shown that multiple equilibria could arise in this setting, with one equilibrium being a coordination failure in which individuals make too few investments in protection. However, these results are largely based on simplified models involving a single management choice and fixed prices that may influence risk management incentives. Relaxing these assumptions, we find strategic interactions influence, and are influenced by, choices involving multiple management options and market price effects. In particular, we find these features can reduce or eliminate concerns about multiple equilibria and coordination failure. This has important policy implications relative to simpler models.
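    The multiple-equilibria result from the simpler models can be illustrated with a symmetric two-player protection game using hypothetical payoffs: when infection risk falls only if both players protect, both "both protect" and "neither protects" can be Nash equilibria, the latter being the coordination failure described above.

```python
def payoff(my, other):
    """Payoff to a player choosing `my` protection (1 = invest, 0 = don't)
    given the other's choice. Hypothetical numbers: protection costs 2,
    infection costs 10, and risk is low only when both protect."""
    infection_prob = {(1, 1): 0.05, (1, 0): 0.65, (0, 1): 0.65, (0, 0): 0.80}
    return -2 * my - 10 * infection_prob[(my, other)]

def is_nash(a, b):
    """(a, b) is a pure-strategy Nash equilibrium if neither player
    gains by unilaterally switching strategies."""
    return payoff(a, b) >= payoff(1 - a, b) and payoff(b, a) >= payoff(1 - b, a)

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_nash(a, b)]
print(equilibria)  # [(0, 0), (1, 1)]
```

Both players are strictly better off at (1, 1), so (0, 0) is the under-investment equilibrium; the article's point is that richer choice sets and price effects can eliminate such multiplicity.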

  17. Simplified predictive models for CO 2 sequestration performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Ganesh, Priya; Schuetter, Jared

    CO2 sequestration in deep saline formations is increasingly being considered as a viable strategy for the mitigation of greenhouse gas emissions from anthropogenic sources. In this context, detailed numerical simulation based models are routinely used to understand key processes and parameters affecting pressure propagation and buoyant plume migration following CO2 injection into the subsurface. As these models are data and computation intensive, the development of computationally-efficient alternatives to conventional numerical simulators has become an active area of research. Such simplified models can be valuable assets during preliminary CO2 injection project screening, serve as a key element of probabilistic system assessment modeling tools, and assist regulators in quickly evaluating geological storage projects. We present three strategies for the development and validation of simplified modeling approaches for CO2 sequestration in deep saline formations: (1) simplified physics-based modeling, (2) statistical-learning based modeling, and (3) reduced-order method based modeling. In the first category, a set of full-physics compositional simulations is used to develop correlations for dimensionless injectivity as a function of the slope of the CO2 fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Furthermore, the dimensionless average pressure buildup after the onset of boundary effects can be correlated to dimensionless time, CO2 plume footprint, and storativity contrast between the reservoir and caprock. 
In the second category, statistical “proxy models” are developed using the simulation domain described previously with two approaches: (a) classical Box-Behnken experimental design with a quadratic response surface, and (b) maximin Latin Hypercube sampling (LHS) based design with a multidimensional kriging metamodel fit. For roughly the same number of simulations, the LHS-based metamodel yields a more robust predictive model, as verified by a k-fold cross-validation approach (with data split into training and test sets) as well as by validation with an independent dataset. In the third category, a reduced-order modeling procedure is utilized that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory piecewise linearization (TPWL) in order to represent system response at new control settings from a limited number of training runs. Significant savings in computational time are observed with reasonable accuracy from the POD-TPWL reduced-order model for both vertical and horizontal well problems, which could be important in the context of history matching, uncertainty quantification, and optimization problems. The simplified physics and statistical learning based models are also validated using an uncertainty analysis framework. Reference cumulative distribution functions (CDFs) of key model outcomes (i.e., plume radius and reservoir pressure buildup) generated using a 97-run full-physics simulation are successfully validated against the CDFs from 10,000-sample probabilistic simulations using the simplified models. The main contribution of this research project is the development and validation of a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formations.
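    The Latin hypercube design step of the statistical-learning approach can be sketched in a few lines of pure Python (a generic LHS, not the study's maximin-optimized design or the kriging fit):

```python
import random

def latin_hypercube(n, d, seed=0):
    """Latin hypercube sample of n points on [0, 1]^d: each dimension is
    cut into n equal bins, with exactly one point per bin per dimension,
    and bin order shuffled independently across dimensions."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        bins = list(range(n))
        rng.shuffle(bins)
        cols.append([(b + rng.random()) / n for b in bins])
    return [list(point) for point in zip(*cols)]

# A 10-run design over three normalised inputs (e.g. permeability
# variance, injection rate, storativity contrast -- hypothetical choices)
# of the kind a kriging metamodel would then be fitted to.
design = latin_hypercube(10, 3)
print(len(design), len(design[0]))  # 10 3
```

Stratifying every input dimension this way is what lets a small number of simulator runs cover the parameter space more evenly than simple random sampling.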

  18. Mass and power modeling of communication satellites

    NASA Technical Reports Server (NTRS)

    Price, Kent M.; Pidgeon, David; Tsao, Alex

    1991-01-01

    Analytic estimating relationships for the mass and power requirements of major satellite subsystems are described. The model for each subsystem is keyed to the performance drivers and system requirements that influence its selection and use. Guidelines are also given for choosing among alternative technologies, accounting for other significant variables such as cost, risk, schedule, operations, heritage, and life requirements. These models are intended for application to first-order systems analyses, where resources do not warrant detailed development of a communications system scenario. Given this ground rule, the models are simplified to 'smoothed' representations of reality. The user is therefore cautioned that cost, schedule, and risk may be significantly impacted where interpolations differ sufficiently from existing hardware as to warrant development of new devices.

  19. Weighing up the weighted case mix tool (WCMT): a psychometric investigation using confirmatory factor analysis.

    PubMed

    Duane, B G; Humphris, G; Richards, D; Okeefe, E J; Gordon, K; Freeman, R

    2014-12-01

    To assess the use of the WCMT in two Scottish health boards and to consider the impact of simplifying the tool to improve efficient use. A retrospective analysis of routine WCMT data (47,276 cases). Public Dental Service (PDS) within NHS Lothian and Highland. The WCMT consists of six criteria. Each criterion is measured independently on a four-point scale to assess patient complexity and the dental care required for the disabled/impaired patient. Psychometric analyses of the dataset were conducted. Conventional internal consistency coefficients were calculated. Latent variable modelling was performed to assess the 'fit' of the raw data to a pre-specified measurement model. A Confirmatory Factor Analysis (CFA) was used to test three potential changes to the existing WCMT: the removal of the oral risk factor question, the removal of the original weightings for scoring the tool, and collapsing the 4-point rating scale to three categories. The removal of the oral risk factor question had little impact on the reliability of the proposed simplified CMT to discriminate between levels of patient complexity. The removal of weighting and collapsing each item's rating scale to three categories had limited impact on the reliability of the revised tool. The CFA analysis provided strong evidence that the proposed simplified Case Mix Tool (sCMT) would operate closely to the pre-specified measurement model (the WCMT). A modified sCMT can thus provide, without reduced reliability, a useful measure of the complexity of patient care. The proposed sCMT may be implemented within primary care dentistry to record patient complexity as part of an oral health assessment.

  20. Experimental and Numerical Analysis of Narrowband Coherent Rayleigh-Brillouin Scattering in Atomic and Molecular Species (Pre Print)

    DTIC Science & Technology

    2012-02-01

    use of polar gas species. While current simplified models have adequately predicted CRS and CRBS line shapes for a wide variety of cases, multiple ... published simplified models are presented for argon, molecular nitrogen, and methane at 300 & 500 K and 1 atm. The simplified models require uncertain gas properties

  1. Research on simplified parametric finite element model of automobile frontal crash

    NASA Astrophysics Data System (ADS)

    Wu, Linan; Zhang, Xin; Yang, Changhai

    2018-05-01

    The modeling method and key technologies of a simplified parametric finite element model for automobile frontal crash are studied in this paper. By establishing the auto body topological structure, extracting and parameterizing the stiffness properties of substructures, and choosing appropriate material models for the substructures, a simplified parametric FE model of the M6 car is built. Comparison of the results indicates that the simplified parametric FE model accurately calculates the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.

  2. iGen: An automated generator of simplified models with provable error bounds.

    NASA Astrophysics Data System (ADS)

    Tang, D.; Dobbie, S.

    2009-04-01

    Climate models employ various simplifying assumptions and parameterisations in order to increase execution speed. However, in order to draw conclusions about the Earth's climate from the results of a climate simulation, it is necessary to have information about the error that these assumptions and parameterisations introduce. A novel computer program, called iGen, is being developed which automatically generates fast, simplified models by analysing the source code of a slower, high-resolution model. The resulting simplified models have provable bounds on error compared to the high-resolution model and execute at speeds that are typically orders of magnitude faster. iGen's input is a definition of the prognostic variables of the simplified model, a set of bounds on acceptable error, and the source code of a model that captures the behaviour of interest. In the case of an atmospheric model, for example, this would be a global cloud-resolving model with very high resolution. Although such a model would execute far too slowly to be used directly in a climate model, iGen never executes it. Instead, it converts the code of the resolving model into a mathematical expression which is then symbolically manipulated and approximated to form a simplified expression. This expression is then converted back into a computer program and output as a simplified model. iGen also derives and reports formal bounds on the error of the simplified model compared to the resolving model. These error bounds are always maintained below the user-specified acceptable error. Results will be presented illustrating the success of iGen's analysis of a number of example models. These extremely encouraging results have led on to work, currently underway, to analyse a cloud-resolving model and so produce an efficient parameterisation of moist convection with formally bounded error.
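    The core idea, replacing an expensive expression with a cheaper surrogate plus a formal error bound, can be sketched in a few lines. This toy is not iGen itself; the sin-to-x simplification and its Taylor-theorem bound are illustrative only:

```python
import math

# Toy illustration (not iGen): "simplify" an expensive model to a cheaper
# surrogate, and carry a provable error bound alongside it.
# Taylor's theorem gives |sin(x) - x| <= |x|**3 / 6 for all x.

def slow_model(x):
    return math.sin(x)          # stands in for the slow, high-resolution model

def simplified_model(x):
    return x                    # cheap surrogate valid for small |x|

def error_bound(x):
    return abs(x) ** 3 / 6      # formal bound from Taylor's theorem

# Check the bound empirically over a domain of interest.
xs = [i / 100 for i in range(-30, 31)]
assert all(abs(slow_model(x) - simplified_model(x)) <= error_bound(x) for x in xs)
```

    The point is that the surrogate is never trusted blindly: every use of `simplified_model` comes with a bound that can be checked against the user-specified acceptable error.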

  3. Electricity market pricing, risk hedging and modeling

    NASA Astrophysics Data System (ADS)

    Cheng, Xu

    In this dissertation, we investigate the pricing, price risk hedging/arbitrage, and simplified system modeling for a centralized LMP-based electricity market. In an LMP-based market model, the full AC power flow model and the DC power flow model are most widely used to represent the transmission system. We investigate the differences in dispatching results, congestion patterns, and LMPs for the two power flow models. An appropriate LMP decomposition scheme to quantify the marginal costs of congestion and real power losses is critical for the implementation of financial risk hedging markets. However, the traditional LMP decomposition heavily depends on the slack bus selection. In this dissertation we propose a slack-independent scheme to break the LMP down into energy, congestion, and marginal loss components by analyzing the actual marginal cost of each bus at the optimal solution point. The physical and economic meanings of the marginal effect at each bus provide accurate price information for both congestion and losses, and thus the slack-dependency of the traditional scheme is eliminated. With electricity priced at the margin instead of at the average value, the market operator typically collects more revenue from power buyers than it pays to power sellers. According to the LMP decomposition results, this revenue surplus is divided into two parts: the congestion charge surplus and the marginal loss revenue surplus. We apply the LMP decomposition results to financial tools, such as the financial transmission right (FTR) and the loss hedging right (LHR), which have been introduced to hedge against price risks associated with congestion and losses, to construct a full price risk hedging portfolio. The two-settlement market structure and the introduction of financial tools inevitably create market manipulation opportunities. 
We investigate several possible market manipulation behaviors by virtual bidding and propose a market monitor approach to identify and quantify such behavior. Finally, the complexity of the power market and size of the transmission grid make it difficult for market participants to efficiently analyze the long-term market behavior. We propose a simplified power system commercial model by simulating the PTDFs of critical transmission bottlenecks of the original system.
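    The decomposition idea can be made concrete with a toy example. The bus names, PTDF values, shadow price, and loss sensitivities below are all hypothetical, and the sign convention is one common choice; a real implementation derives these quantities from the OPF solution:

```python
# Hypothetical 3-bus illustration of splitting an LMP into energy,
# congestion, and marginal loss components. All numbers are invented.
energy_price = 30.0                       # system marginal energy cost, $/MWh
shadow_prices = {"line_1_2": 5.0}         # $/MWh for each binding line constraint
ptdf = {                                  # flow sensitivity of each line to a
    "bus1": {"line_1_2": 0.6},            # marginal injection at each bus
    "bus2": {"line_1_2": -0.2},
    "bus3": {"line_1_2": 0.1},
}
loss_factor = {"bus1": 0.00, "bus2": 0.02, "bus3": 0.03}  # marginal loss sensitivity

lmp = {}
for bus in ptdf:
    # Congestion component: binding-constraint shadow prices weighted by PTDFs.
    congestion = -sum(ptdf[bus][line] * mu for line, mu in shadow_prices.items())
    # Marginal loss component: loss sensitivity times the energy price.
    loss = loss_factor[bus] * energy_price
    lmp[bus] = energy_price + congestion + loss
```

    Summing the components back recovers each bus price, which is what makes the decomposition usable for settling FTR- and LHR-style instruments.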

  4. The role of building models in the evaluation of heat-related risks

    NASA Astrophysics Data System (ADS)

    Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix

    2016-04-01

    Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.
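    A minimal sketch of the simplest building model in such a comparison: a one-node model in which indoor temperature relaxes toward outdoor temperature with a time constant tau. All numbers are illustrative, not the study's:

```python
# One-node building model sketch: indoor temperature relaxes toward outdoor
# temperature with time constant tau (hours). Parameters are illustrative.
def indoor_series(t_out, tau=24.0, t0=20.0, dt=1.0):
    t_in, series = t0, []
    for t in t_out:
        t_in += (t - t_in) * dt / tau   # first-order relaxation step
        series.append(t_in)
    return series

heatwave = [25, 28, 32, 35, 33, 30, 27, 25]   # hourly outdoor temps, degC
indoor = indoor_series(heatwave)

# The indoor curve is damped and lagged relative to the outdoor one, which is
# the mechanism by which indoor hazard can differ from outdoor hazard.
assert max(indoor) < max(heatwave)
```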

  5. [Influence of trabecular microstructure modeling on finite element analysis of dental implant].

    PubMed

    Shen, M J; Wang, G G; Zhu, X H; Ding, X

    2016-09-01

    To analyze the influence of trabecular microstructure modeling on the biomechanical distribution of implant-bone interface with a three-dimensional finite element mandible model of trabecular structure. Dental implants were embeded in the mandibles of a beagle dog. After three months of the implant installation, the mandibles with dental implants were harvested and scaned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models, trabecular microstructure(precise model) and macrostructure(simplified model), were built. The values of stress and strain of implant-bone interface were calculated using the software of Ansys 14.0. Compared with the simplified model, the precise models' average values of the implant bone interface stress increased obviously and its maximum values did not change greatly. The maximum values of quivalent stress of the precise models were 80% and 110% of the simplified model and the average values were 170% and 290% of simplified model. The maximum and average values of equivalent strain of precise models were obviously decreased, and the maximum values of the equivalent effect strain were 17% and 26% of simplified model and the average ones were 21% and 16% of simplified model respectively. Stress and strain concentrations at implant-bone interface were obvious in the simplified model. However, the distributions of stress and strain were uniform in the precise model. The precise model has significant effect on the distribution of stress and strain at implant-bone interface.

  6. Prospective mixture risk assessment and management prioritizations for river catchments with diverse land uses

    PubMed Central

    Brown, Colin D.; de Zwart, Dick; Diamond, Jerome; Dyer, Scott D.; Holmes, Christopher M.; Marshall, Stuart; Burton, G. Allen

    2018-01-01

    Abstract Ecological risk assessment increasingly focuses on risks from chemical mixtures and multiple stressors because ecosystems are commonly exposed to a plethora of contaminants and nonchemical stressors. To simplify the task of assessing potential mixture effects, we explored 3 land use–related chemical emission scenarios. We applied a tiered methodology to judge the implications of the emissions of chemicals from agricultural practices, domestic discharges, and urban runoff in a quantitative model. The results showed land use–dependent mixture exposures, clearly discriminating downstream effects of land uses, with unique chemical “signatures” regarding composition, concentration, and temporal patterns. Associated risks were characterized in relation to the land‐use scenarios. Comparisons to measured environmental concentrations and predicted impacts showed relatively good similarity. The results suggest that the land uses imply exceedances of regulatory protective environmental quality standards, varying over time in relation to rain events and associated flow and dilution variation. Higher‐tier analyses using ecotoxicological effect criteria confirmed that species assemblages may be affected by exposures exceeding no‐effect levels and that mixture exposure could be associated with predicted species loss under certain situations. The model outcomes can inform various types of prioritization to support risk management, including a ranking across land uses as a whole, a ranking on characteristics of exposure times and frequencies, and various rankings of the relative role of individual chemicals. Though all results are based on in silico assessments, the prospective land use–based approach applied in the present study yields useful insights for simplifying and assessing potential ecological risks of chemical mixtures and can therefore be useful for catchment‐management decisions. Environ Toxicol Chem 2018;37:715–728. © 2017 The Authors. 
Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. PMID:28845901
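    A first-tier mixture screen of the kind such tiered assessments build on can be sketched by concentration addition, summing per-chemical risk quotients; the chemicals and values below are invented for illustration:

```python
# First-tier mixture screening sketch: sum each chemical's risk quotient
# (exposure / predicted no-effect concentration). Values are invented.
exposure = {"herbicide_A": 0.8, "surfactant_B": 1.5, "metal_C": 0.2}  # ug/L
pnec     = {"herbicide_A": 2.0, "surfactant_B": 5.0, "metal_C": 1.0}  # ug/L

rq = {chem: exposure[chem] / pnec[chem] for chem in exposure}
mixture_rq = sum(rq.values())
# A mixture_rq > 1 would flag potential risk even when every individual RQ < 1,
# which is why mixture assessment can differ from chemical-by-chemical review.
```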

  7. Simplifying and upscaling water resources systems models that combine natural and engineered components

    NASA Astrophysics Data System (ADS)

    McIntyre, N.; Keir, G.

    2014-12-01

    Water supply systems typically encompass components of both natural systems (e.g. catchment runoff, aquifer interception) and engineered systems (e.g. process equipment, water storages and transfers). Many physical processes of varying spatial and temporal scales are contained within these hybrid systems models. The need to aggregate and simplify system components has been recognised for reasons of parsimony and comprehensibility; and the use of probabilistic methods for modelling water-related risks also prompts the need to seek computationally efficient up-scaled conceptualisations. How to manage the up-scaling errors in such hybrid systems models has not been well-explored, compared to research in the hydrological process domain. Particular challenges include the non-linearity introduced by decision thresholds and non-linear relations between water use, water quality, and discharge strategies. Using a case study of a mining region, we explore the nature of up-scaling errors in water use, water quality and discharge, and we illustrate an approach to identification of a scale-adjusted model including an error model. Ways forward for efficient modelling of such complex, hybrid systems are discussed, including interactions with human, energy and carbon systems models.

  8. Including operational data in QMRA model: development and impact of model inputs.

    PubMed

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
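    The sampling scheme described (uniform below the detection limit, log-normal above, followed by treatment removal and dose-response) can be sketched as follows; every numerical parameter here is invented, not the study's value:

```python
import random
import math

# QMRA-style Monte Carlo sketch: sample raw-water parasite concentration from
# a mixed distribution, apply a treatment log-removal, then an exponential
# dose-response model. All parameter values are illustrative.
random.seed(1)
DL = 0.1          # detection limit, (oo)cysts/L
P_BELOW = 0.6     # fraction of samples below the detection limit
N = 100_000

def sample_concentration():
    if random.random() < P_BELOW:
        return random.uniform(0.0, DL)                 # uniform below DL
    return math.exp(random.gauss(math.log(0.5), 1.0))  # log-normal above DL

def daily_infection_risk(conc, log_removal=5.0, volume=1.0, r=0.004):
    dose = conc * 10 ** (-log_removal) * volume        # ingested organisms/day
    return 1 - math.exp(-r * dose)                     # exponential dose-response

mean_risk = sum(daily_infection_risk(sample_concentration()) for _ in range(N)) / N
```

    Swapping the treatment-performance distribution (or the CT-based removal credit) changes `log_removal`, which is exactly the sensitivity the abstract reports.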

  9. Probabilistic seismic vulnerability and risk assessment of stone masonry structures

    NASA Astrophysics Data System (ADS)

    Abo El Ezz, Ahmad

    Earthquakes represent major natural hazards that regularly impact the built environment in seismic prone areas worldwide and cause considerable social and economic losses. The high losses incurred following the past destructive earthquakes promoted the need for assessment of the seismic vulnerability and risk of the existing buildings. Many historic buildings in the old urban centers in Eastern Canada such as Old Quebec City are built of stone masonry and represent un-measurable architectural and cultural heritage. These buildings were built to resist gravity loads only and generally offer poor resistance to lateral seismic loads. Seismic vulnerability assessment of stone masonry buildings is therefore the first necessary step in developing seismic retrofitting and pre-disaster mitigation plans. The objective of this study is to develop a set of probability-based analytical tools for efficient seismic vulnerability and uncertainty analysis of stone masonry buildings. A simplified probabilistic analytical methodology for vulnerability modelling of stone masonry building with systematic treatment of uncertainties throughout the modelling process is developed in the first part of this study. Building capacity curves are developed using a simplified mechanical model. A displacement based procedure is used to develop damage state fragility functions in terms of spectral displacement response based on drift thresholds of stone masonry walls. A simplified probabilistic seismic demand analysis is proposed to capture the combined uncertainty in capacity and demand on fragility functions. In the second part, a robust analytical procedure for the development of seismic hazard compatible fragility and vulnerability functions is proposed. The results are given by sets of seismic hazard compatible vulnerability functions in terms of structure-independent intensity measure (e.g. spectral acceleration) that can be used for seismic risk analysis. 
The procedure is very efficient for conducting rapid vulnerability assessment of stone masonry buildings. With modification of input structural parameters, it can be adapted and applied to any other building class. A sensitivity analysis of the seismic vulnerability modelling is conducted to quantify the uncertainties associated with each of the input parameters. The proposed methodology was validated for a scenario-based seismic risk assessment of existing buildings in Old Quebec City. The procedure for hazard compatible vulnerability modelling was used to develop seismic fragility functions in terms of spectral acceleration representative of the inventoried buildings. A total of 1220 buildings were considered. The assessment was performed for a scenario event of magnitude 6.2 at distance 15km with a probability of exceedance of 2% in 50 years. The study showed that most of the expected damage is concentrated in the old brick and stone masonry buildings.
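    The damage-state fragility functions described are conventionally lognormal in the demand measure; a minimal sketch, with an illustrative median and dispersion beta:

```python
import math

# Lognormal damage-state fragility sketch: P(ds >= DS | Sd) =
# Phi(ln(Sd / median) / beta), the standard form in displacement-based
# assessment. The median and beta below are illustrative, not the study's.
def fragility(sd, median=2.0, beta=0.6):
    z = math.log(sd / median) / beta
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

assert abs(fragility(2.0) - 0.5) < 1e-12   # at the median demand, P = 50%
assert fragility(4.0) > fragility(1.0)     # monotone in demand
```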

  10. Quantifying the economic risks of climate change

    NASA Astrophysics Data System (ADS)

    Diaz, Delavane; Moore, Frances

    2017-11-01

    Understanding the value of reducing greenhouse-gas emissions matters for policy decisions and climate risk management, but quantification is challenging because of the complex interactions and uncertainties in the Earth and human systems, as well as normative ethical considerations. Current modelling approaches use damage functions to parameterize a simplified relationship between climate variables, such as temperature change, and economic losses. Here we review and synthesize the limitations of these damage functions and describe how incorporating impacts, adaptation and vulnerability research advances and empirical findings could substantially improve damage modelling and the robustness of social cost of carbon values produced. We discuss the opportunities and challenges associated with integrating these research advances into cost-benefit integrated assessment models, with guidance for future work.

  11. Comparison between a typical and a simplified model for blast load-induced structural response

    NASA Astrophysics Data System (ADS)

    Abd-Elhamed, A.; Mahmoud, S.

    2017-02-01

    Explosive blasts continue to cause severe damage and casualties in both civil and military environments, so there is a pressing need to understand the behavior of structural elements under such extremely short-duration dynamic loads. Due to the complexity of the typical blast pressure profile model, and in order to reduce the modelling and computational effort, the simplified triangular model of the blast load profile is used to analyze structural response. This simplified model considers only the positive phase and ignores the suction phase that characterizes the typical profile. The closed-form solution of the equation of motion, with the blast load as a forcing term modelled by either the typical or the simplified profile, has been derived. The two approaches considered here have been compared using results from a simulation response analysis of a building structure under an applied blast load, and the error of the simplified model relative to the typical one has been computed. In general, both the simplified and the typical model can reproduce the dynamic blast-induced response of building structures. However, despite its simplicity, the simplified model shows remarkably different response behavior compared to the typical one, and its predictions of the dynamic system response are not satisfactory owing to the larger errors obtained relative to the typical model.
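    The simplified (positive-phase-only) triangular load and the response of a single-degree-of-freedom structure can be sketched numerically; the peak force, duration, mass, and stiffness below are illustrative, and the paper's closed-form solution is replaced here by a central-difference integration:

```python
# SDOF response to a simplified triangular blast load (positive phase only),
# integrated with a central-difference scheme. All parameters are illustrative.
def triangular_load(t, f_peak=1000.0, t_d=0.05):
    # Linear decay from f_peak to zero over t_d seconds; no suction phase.
    return f_peak * (1 - t / t_d) if t < t_d else 0.0

def sdof_peak_displacement(m=100.0, k=4.0e5, dt=1e-4, t_end=0.5):
    x_prev, x, peak = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        f = triangular_load(i * dt)
        a = (f - k * x) / m                    # undamped equation of motion
        x_next = 2 * x - x_prev + a * dt * dt  # central-difference update
        x_prev, x = x, x_next
        peak = max(peak, abs(x))
    return peak

peak = sdof_peak_displacement()   # metres, same order as f_peak/k here
```

    Adding a negative (suction) phase to `triangular_load` is the one-line change that turns this sketch into the "typical" profile the abstract compares against.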

  12. Order Matters: Sequencing Scale-Realistic Versus Simplified Models to Improve Science Learning

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-10-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning based on multiple models, thus giving rise to questions about the order of presentation. Using disjoint individual growth modeling to examine the learning of astronomical concepts using a simulation of the solar system on tablets for 152 high school students (age 15), the authors detect both a model effect and an order effect in the use of the Orrery, a simplified model that exaggerates the scale relationships, and the True-to-scale, a proportional model that more accurately represents the realistic scale relationships. Specifically, earlier exposure to the simplified model resulted in diminution of the conceptual gain from the subsequent realistic model, but the realistic model did not impede learning from the following simplified model.

  13. Malaria Risk Assessment for the Republic of Korea Based on Models of Mosquito Distribution

    DTIC Science & Technology

    2008-06-01

    Figure 1 illustrates the concept of the mal-area as it ... the percentage of the sampled area that these parameters cover. The value for VPH could be used as a simplified index of malaria risk to compare ... combinations of the VPH variables. These statistics will consist of the percentage of cells that contain a certain value for the user-defined area

  14. The integrated effect of moderate exercise on coronary heart disease.

    PubMed

    Mathews, Marc J; Mathews, Edward H; Mathews, George E

    Moderate exercise is associated with a lower risk for coronary heart disease (CHD). A suitable integrated model of the CHD pathogenetic pathways relevant to moderate exercise may help to elucidate this association. Such a model is currently not available in the literature. An integrated model of CHD was developed and used to investigate pathogenetic pathways of importance between exercise and CHD. Using biomarker relative-risk data, the pathogenetic effects are representable as measurable effects based on changes in biomarkers. The integrated model provides insight into higher-order interactions underlying the associations between CHD and moderate exercise. A novel 'connection graph' was developed, which simplifies these interactions. It quantitatively illustrates the relationship between moderate exercise and various serological biomarkers of CHD. The connection graph of moderate exercise elucidates all the possible integrated actions through which risk reduction may occur. An integrated model of CHD provides a summary of the effects of moderate exercise on CHD. It also shows the importance of each CHD pathway that moderate exercise influences. The CHD risk-reducing effects of exercise appear to be primarily driven by decreased inflammation and altered metabolism.

  15. Mathematical model to estimate risk of calcium-containing renal stones

    NASA Technical Reports Server (NTRS)

    Pietrzyk, R. A.; Feiveson, A. H.; Whitson, P. A.

    1999-01-01

    BACKGROUND/AIMS: Astronauts exposed to microgravity during the course of spaceflight undergo physiologic changes that alter the urinary environment so as to increase the risk of renal stone formation. This study was undertaken to identify a simple method with which to evaluate the potential risk of renal stone development during spaceflight. METHOD: We used a large database of urinary risk factors obtained from 323 astronauts before and after spaceflight to generate a mathematical model with which to predict the urinary supersaturation of calcium stone forming salts. RESULT: This model, which involves the fewest possible analytical variables (urinary calcium, citrate, oxalate, phosphorus, and total volume), reliably and accurately predicted the urinary supersaturation of the calcium stone forming salts when compared to results obtained from a group of 6 astronauts who collected urine during flight. CONCLUSIONS: The use of this model will simplify both routine medical monitoring during spaceflight as well as the evaluation of countermeasures designed to minimize renal stone development. This model also can be used for Earth-based applications in which access to analytical resources is limited.
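    A purely hypothetical sketch of the kind of five-variable linear model described; the coefficients and intercept below are invented placeholders, not the published model:

```python
# Hypothetical sketch of a reduced-variable renal stone risk model: predict a
# supersaturation index from five urinary measurements. The coefficients and
# intercept are invented placeholders, not the published fit.
COEF = {"calcium": 0.004, "oxalate": 0.02, "citrate": -0.001,
        "phosphorus": 0.0005, "volume": -0.3}
INTERCEPT = 1.0

def supersaturation_index(sample):
    # Linear combination of the five analytical variables.
    return INTERCEPT + sum(COEF[v] * sample[v] for v in COEF)

flight_sample = {"calcium": 250, "oxalate": 35, "citrate": 600,
                 "phosphorus": 900, "volume": 1.2}   # mg/day and L/day
risk = supersaturation_index(flight_sample)
```

    Note the signs: higher citrate and urine volume reduce the index, which matches the protective roles those factors play in calcium stone formation.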

  16. Terry Turbopump Analytical Modeling Efforts in Fiscal Year 2016 - Progress Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Douglas; Ross, Kyle; Cardoni, Jeffrey N

    This document details the Fiscal Year 2016 modeling efforts to define the true operating limitations (margins) of the Terry turbopump systems used in the nuclear industry for Milestone 3 (full-scale component experiments) and Milestone 4 (Terry turbopump basic science experiments) experiments. The overall multinational-sponsored program creates the technical basis to: (1) reduce and defer additional utility costs, (2) simplify plant operations, and (3) provide a better understanding of the true margin which could reduce overall risk of operations.

  17. Object-Oriented Bayesian Networks (OOBN) for Aviation Accident Modeling and Technology Portfolio Impact Assessment

    NASA Technical Reports Server (NTRS)

    Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.

    2012-01-01

    The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. The paper focuses on the reasoning behind selecting Object-Oriented Bayesian Networks (OOBN) as the technique and commercial software for the accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF), constructed as an influence diagram, is presented. An OOBN approach not only simplifies the construction and maintenance of complex causal networks for the modelers, but also offers a well-organized hierarchical network that makes it easier for decision makers to use the model to examine the effectiveness of risk mitigation strategies through technology insertions.

  18. Simplified models for dark matter searches at the LHC

    NASA Astrophysics Data System (ADS)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre; Ashkenazi, Adi; Belyaev, Alexander; Berger, Joshua; Boehm, Celine; Boveia, Antonio; Brennan, Amelia; Brooke, Jim; Buchmueller, Oliver; Buckley, Matthew; Busoni, Giorgio; Calibbi, Lorenzo; Chauhan, Sushil; Daci, Nadir; Davies, Gavin; De Bruyn, Isabelle; De Jong, Paul; De Roeck, Albert; de Vries, Kees; Del Re, Daniele; De Simone, Andrea; Di Simone, Andrea; Doglioni, Caterina; Dolan, Matthew; Dreiner, Herbi K.; Ellis, John; Eno, Sarah; Etzion, Erez; Fairbairn, Malcolm; Feldstein, Brian; Flaecher, Henning; Feng, Eric; Fox, Patrick; Genest, Marie-Hélène; Gouskos, Loukas; Gramling, Johanna; Haisch, Ulrich; Harnik, Roni; Hibbs, Anthony; Hoh, Siewyan; Hopkins, Walter; Ippolito, Valerio; Jacques, Thomas; Kahlhoefer, Felix; Khoze, Valentin V.; Kirk, Russell; Korn, Andreas; Kotov, Khristian; Kunori, Shuichi; Landsberg, Greg; Liem, Sebastian; Lin, Tongyan; Lowette, Steven; Lucas, Robyn; Malgeri, Luca; Malik, Sarah; McCabe, Christopher; Mete, Alaettin Serhan; Morgante, Enrico; Mrenna, Stephen; Nakahama, Yu; Newbold, Dave; Nordstrom, Karl; Pani, Priscilla; Papucci, Michele; Pataraia, Sophio; Penning, Bjoern; Pinna, Deborah; Polesello, Giacomo; Racco, Davide; Re, Emanuele; Riotto, Antonio Walter; Rizzo, Thomas; Salek, David; Sarkar, Subir; Schramm, Steven; Skubic, Patrick; Slone, Oren; Smirnov, Juri; Soreq, Yotam; Sumner, Timothy; Tait, Tim M. P.; Thomas, Marc; Tomalin, Ian; Tunnell, Christopher; Vichi, Alessandro; Volansky, Tomer; Weiner, Neal; West, Stephen M.; Wielers, Monika; Worm, Steven; Yavin, Itay; Zaldivar, Bryan; Zhou, Ning; Zurek, Kathryn

    2015-09-01

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediators are discussed, as well as realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.
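    As a concrete illustration, the interaction Lagrangian conventionally used for an s-channel spin-1 mediator Z' coupling Dirac dark matter chi to quarks takes the generic form below; the couplings g_DM and g_q are the free parameters of such a simplified model (this is the standard textbook form in this literature, not a quotation from the document):

```latex
\mathcal{L} \supset - Z'_{\mu} \left( g_{\mathrm{DM}}\, \bar{\chi} \gamma^{\mu} \chi
  \;+\; \sum_{q} g_{q}\, \bar{q} \gamma^{\mu} q \right)
```

    Together with the mediator and dark matter masses, these couplings fully specify the collider phenomenology of the simplified model.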

  19. Risk score for predicting long-term mortality after coronary artery bypass graft surgery.

    PubMed

    Wu, Chuntao; Camacho, Fabian T; Wechsler, Andrew S; Lahey, Stephen; Culliford, Alfred T; Jordan, Desmond; Gold, Jeffrey P; Higgins, Robert S D; Smith, Craig R; Hannan, Edward L

    2012-05-22

    No simplified bedside risk scores have been created to predict long-term mortality after coronary artery bypass graft surgery. The New York State Cardiac Surgery Reporting System was used to identify 8597 patients who underwent isolated coronary artery bypass graft surgery in July through December 2000. The National Death Index was used to ascertain patients' vital statuses through December 31, 2007. A Cox proportional hazards model was fit to predict death after CABG surgery using preprocedural risk factors. Then, points were assigned to significant predictors of death on the basis of the values of their regression coefficients. For each possible point total, the predicted risks of death at years 1, 3, 5, and 7 were calculated. The 7-year mortality rate was 24.2% in the study population. Significant predictors of death included age, body mass index, ejection fraction, unstable hemodynamic state or shock, left main coronary artery disease, cerebrovascular disease, peripheral arterial disease, congestive heart failure, malignant ventricular arrhythmia, chronic obstructive pulmonary disease, diabetes mellitus, renal failure, and history of open heart surgery. The points assigned to these risk factors ranged from 1 to 7; possible point totals for each patient ranged from 0 to 28. The observed and predicted risks of death at years 1, 3, 5, and 7 across patient groups stratified by point totals were highly correlated. The simplified risk score accurately predicted the risk of mortality after coronary artery bypass graft surgery and can be used for informed consent and as an aid in determining treatment choice.
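    A hedged sketch of how an additive point score maps to predicted long-term survival under a Cox model, S(t | score) = S0(t) ** exp(beta * score); the baseline survival and per-point coefficient below are invented, not the published values:

```python
import math

# Cox-model point-score sketch: survival at a horizon t is the baseline
# survival raised to the hazard ratio implied by the point total.
# S0_7YR and BETA are invented placeholders; the published score runs 0-28.
S0_7YR = 0.95     # hypothetical 7-year baseline survival
BETA = 0.18       # hypothetical log-hazard increase per point

def predicted_survival(points, s0=S0_7YR, beta=BETA):
    return s0 ** math.exp(beta * points)

low_risk, high_risk = predicted_survival(2), predicted_survival(20)
assert low_risk > high_risk   # more points means higher predicted mortality
```

    In practice the baseline survival S0(t) is tabulated at each horizon (1, 3, 5, 7 years), which is what lets a single point total yield the whole predicted survival profile.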

  20. Hypersonic Vehicle Propulsion System Simplified Model Development

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Raitano, Paul; Le, Dzu K.; Ouzts, Peter

    2007-01-01

    This document addresses the modeling task plan for the hypersonic GN&C GRC team members. The overall propulsion system modeling task plan is a multi-step process, and the task plan identified in this document addresses the first steps (short-term modeling goals). The procedures and tools produced from this effort will be useful for creating simplified dynamic models applicable to a hypersonic vehicle propulsion system. The document continues with the GRC short-term modeling goal. Next, a general description of the desired simplified model is presented along with simulations that are available to varying degrees. The simulations may be available in electronic form (FORTRAN, CFD, MatLab, ...) or in paper form in published documents. Finally, roadmaps outlining possible avenues towards realizing the simplified model are presented.

  1. Fast Flood damage estimation coupling hydraulic modeling and Multisensor Satellite data

    NASA Astrophysics Data System (ADS)

    Fiorini, M.; Rudari, R.; Delogu, F.; Candela, L.; Corina, A.; Boni, G.

    2011-12-01

    Damage estimation requires a good representation of the elements at risk and their vulnerability, knowledge of the flooded area extent, and a description of the hydraulic forcing. In this work the real-time use of a simplified two-dimensional hydraulic model constrained by satellite-retrieved flooded areas is analyzed. The main features of such a model are computational speed and simple start-up, with no need to insert complex information beyond a subset of simplified boundary and initial conditions. These characteristics allow the model to be fast enough to be used in real time for the simulation of flooding events. The model fills the gap of information left by single satellite scenes of flooded area, allowing for the estimation of the maximum flooding extent and magnitude. The static information provided by earth observation (such as the SAR-derived extent of flooded areas at a certain time) is interpreted in a dynamically consistent way, and very useful hydraulic information (e.g., water depth, water speed and the evolution of flooded areas) is provided. This information is merged with satellite identification of elements exposed to risk, characterized in terms of their vulnerability to floods, in order to obtain fast estimates of flood damages. The model has been applied to several flooding events that occurred worldwide. Amongst the activations, those in Mediterranean areas such as Veneto (IT) (October 2010), Basilicata (IT) (March 2011) and Shkoder (January 2010 and December 2010) are considered and compared with larger floods such as the Queensland event of December 2010.
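    The damage-estimation step described above — combining modelled water depth at each exposed element with a vulnerability (depth-damage) curve — can be sketched as follows. The curve shape and asset values are invented for illustration.

```python
def depth_damage_fraction(depth_m):
    """Piecewise-linear vulnerability curve (illustrative): no damage
    below 0.1 m of water, total damage above 3 m."""
    if depth_m <= 0.1:
        return 0.0
    if depth_m >= 3.0:
        return 1.0
    return (depth_m - 0.1) / (3.0 - 0.1)

# Each exposed element: (asset value in euros, modelled max depth in m).
exposed_elements = [(200_000, 0.5), (150_000, 1.8), (80_000, 0.05)]

# Expected loss = sum over elements of value x damage fraction.
total_damage = sum(value * depth_damage_fraction(depth)
                   for value, depth in exposed_elements)
```

    In the workflow of the paper, the depths would come from the simplified 2-D hydraulic model and the element list from satellite identification of exposed assets.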

  2. Simplified models for dark matter searches at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, Jalal; Araujo, Henrique; Arbey, Alexandre

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  3. Simplified Models for Dark Matter Searches at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdallah, Jalal

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  4. Simplified Models for Dark Matter Searches at the LHC

    DOE PAGES

    Abdallah, Jalal

    2015-08-11

    This document outlines a set of simplified models for dark matter and its interactions with Standard Model particles. It is intended to summarize the main characteristics that these simplified models have when applied to dark matter searches at the LHC, and to provide a number of useful expressions for reference. The list of models includes both s-channel and t-channel scenarios. For s-channel, spin-0 and spin-1 mediations are discussed, and also realizations where the Higgs particle provides a portal between the dark and visible sectors. The guiding principles underpinning the proposed simplified models are spelled out, and some suggestions for implementation are presented.

  5. Trimming a hazard logic tree with a new model-order-reduction technique

    USGS Publications Warehouse

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
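    The tornado-diagram technique mentioned above can be sketched as a one-at-a-time parameter swing: hold every parameter at its baseline, swing one parameter between its extremes, and rank parameters by the size of the output swing; weakly influential parameters can then be fixed at baseline. The loss function and parameter ranges below are invented stand-ins for the UCERF3-TD branches.

```python
def portfolio_loss(params):
    """Toy loss model standing in for a full portfolio analysis."""
    return params["mag_rate"] * 100 + params["gmpe"] * 10 + params["site"] * 1

baseline = {"mag_rate": 1.0, "gmpe": 1.0, "site": 1.0}
ranges = {"mag_rate": (0.8, 1.2), "gmpe": (0.5, 1.5), "site": (0.0, 2.0)}

# One-at-a-time swings: vary each parameter alone between its extremes.
swings = {}
for name, (lo, hi) in ranges.items():
    low = dict(baseline, **{name: lo})
    high = dict(baseline, **{name: hi})
    swings[name] = abs(portfolio_loss(high) - portfolio_loss(low))

# Parameters sorted by influence; the tail of this list is what a
# reduced-order model can afford to freeze.
ranking = sorted(swings, key=swings.get, reverse=True)
```

    The probabilistic sensitivity approach of the paper refines this idea for nominal (unordered) variables, where "low" and "high" extremes are not well defined.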

  6. Spatial spreading model and dynamics of West Nile virus in birds and mosquitoes with free boundary.

    PubMed

    Lin, Zhigui; Zhu, Huaiping

    2017-12-01

    In this paper, a reaction-diffusion system is proposed to model the spatial spreading of West Nile virus in vector mosquitoes and host birds in North America. Transmission dynamics are based on a simplified model involving mosquitoes and birds, and the free boundary is introduced to model and explore the expanding front of the infected region. The spatial-temporal risk index [Formula: see text], which involves regional characteristic and time, is defined for the simplified reaction-diffusion model with the free boundary to compare with other related threshold values, including the usual basic reproduction number [Formula: see text]. Sufficient conditions for the virus to vanish or to spread are given. Our results suggest that the virus will be in a scenario of vanishing if [Formula: see text], and will spread to the whole region if [Formula: see text] for some [Formula: see text], while if [Formula: see text], the spreading or vanishing of the virus depends on the initial number of infected individuals, the area of the infected region, the diffusion rate and other factors. Moreover, some remarks on the basic reproduction numbers and the spreading speeds are presented and compared.

  7. Design and performance evaluation of a simplified dynamic model for combined sewer overflows in pumped sewer systems

    NASA Astrophysics Data System (ADS)

    van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François

    2016-07-01

    Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with the performance generally based on the goodness of fit of the calibration. In this research the performance of three simplified models and a full hydrodynamic (FH) model for two catchments are compared based on the correct determination of CSO event occurrences and of the total volumes discharged to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
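    The "static reservoir" idea behind model M2 can be sketched very compactly: the sewer is treated as a single storage with a fixed pumping capacity, and any inflow beyond storage plus pumping spills as a CSO. Units and parameter values below are illustrative.

```python
def simulate_cso(inflow_series, storage_max, pump_capacity):
    """Static reservoir sketch: returns (cso_event_count,
    total_overflow_volume) for per-step inflow volumes (e.g. m3/step)."""
    stored = 0.0
    overflow_total = 0.0
    events = 0
    spilling = False
    for inflow in inflow_series:
        # Mass balance: inflow fills the store, the pump empties it.
        stored = max(stored + inflow - pump_capacity, 0.0)
        if stored > storage_max:
            overflow_total += stored - storage_max
            stored = storage_max
            if not spilling:       # count contiguous spill steps once
                events += 1
            spilling = True
        else:
            spilling = False
    return events, overflow_total

# A single storm: one CSO event spilling 40 volume units in total.
events, volume = simulate_cso([0, 50, 80, 30, 0, 0],
                              storage_max=60, pump_capacity=20)
```

    M3 would replace the fixed `storage_max` and `pump_capacity` with state-dependent values derived from the FH model simulations.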

  8. Influence of model reduction on uncertainty of flood inundation predictions

    NASA Astrophysics Data System (ADS)

    Romanowicz, R. J.; Kiczko, A.; Osuch, M.

    2012-04-01

    Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedance, e.g. a 100- or 500-year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of the flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions, and the estimates of model parameters, which are usually identified by solving the inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for an urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters, with all the bridges and dikes taken into account, with a reduced number, and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted to compensate for the reduction in model structure. However, the price we pay is the model's robustness. Apart from the relatively simple question of reducing model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that the uncertainties have a substantial influence on flood risk assessment. 
In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by National Science Centre of Poland (grant 2011/01/B/ST10/06866).
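    The basic pattern of propagating parametric (e.g. roughness) uncertainty to a flood estimate is Monte Carlo sampling: draw the uncertain parameter, run the model, and summarise the spread of the outputs. The power-law "rating" below is a toy stand-in for a 1-D HEC-RAS run; only the sampling-and-summarising pattern matters.

```python
import random

def stage_for_roughness(n, discharge=3000.0):
    # Toy Manning-like scaling of water stage with roughness n and
    # discharge Q (illustrative exponents, not calibrated values).
    return 2.0 * (n ** 0.6) * (discharge ** 0.3)

random.seed(1)
# Sample Manning's n uniformly over a plausible range and sort stages.
samples = sorted(stage_for_roughness(random.uniform(0.02, 0.05))
                 for _ in range(10_000))
p5, p95 = samples[500], samples[9500]   # approximate 90% uncertainty band
```

    A Global Sensitivity Analysis, as used in the paper, would extend this by sampling all uncertain inputs jointly and apportioning the output variance among them.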

  9. Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method

    NASA Astrophysics Data System (ADS)

    Sun, C. J.; Zhou, J. H.; Wu, W.

    2017-10-01

    During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness assessment is based on two main methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper, and numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate it. The results show that the simplified plastic analysis is in good agreement with the finite-element simulation, which reveals that the method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.

  10. Review of Qualitative Approaches for the Construction Industry: Designing a Risk Management Toolbox

    PubMed Central

    Spee, Ton; Gillen, Matt; Lentz, Thomas J.; Garrod, Andrew; Evans, Paul; Swuste, Paul

    2011-01-01

    Objectives This paper presents the framework and protocol design for a construction industry risk management toolbox. The construction industry needs a comprehensive, systematic approach to assess and control occupational risks. These risks span several professional health and safety disciplines, emphasized by multiple international occupational research agenda projects including: falls, electrocution, noise, silica, welding fumes, and musculoskeletal disorders. Yet, the International Social Security Association says, "whereas progress has been made in safety and health, the construction industry is still a high risk sector." Methods Small- and medium-sized enterprises (SMEs) employ about 80% of the world's construction workers. In recent years a strategy for qualitative occupational risk management, known as Control Banding (CB) has gained international attention as a simplified approach for reducing work-related risks. CB groups hazards into stratified risk 'bands', identifying commensurate controls to reduce the level of risk and promote worker health and safety. We review these qualitative solutions-based approaches and identify strengths and weaknesses toward designing a simplified CB 'toolbox' approach for use by SMEs in construction trades. Results This toolbox design proposal includes international input on multidisciplinary approaches for performing a qualitative risk assessment determining a risk 'band' for a given project. Risk bands are used to identify the appropriate level of training to oversee construction work, leading to commensurate and appropriate control methods to perform the work safely. Conclusion The Construction Toolbox presents a review-generated format to harness multiple solutions-based national programs and publications for controlling construction-related risks with simplified approaches across the occupational safety, health and hygiene professions. PMID:22953194

  11. Review of qualitative approaches for the construction industry: designing a risk management toolbox.

    PubMed

    Zalk, David M; Spee, Ton; Gillen, Matt; Lentz, Thomas J; Garrod, Andrew; Evans, Paul; Swuste, Paul

    2011-06-01

    This paper presents the framework and protocol design for a construction industry risk management toolbox. The construction industry needs a comprehensive, systematic approach to assess and control occupational risks. These risks span several professional health and safety disciplines, emphasized by multiple international occupational research agenda projects including: falls, electrocution, noise, silica, welding fumes, and musculoskeletal disorders. Yet, the International Social Security Association says, "whereas progress has been made in safety and health, the construction industry is still a high risk sector." Small- and medium-sized enterprises (SMEs) employ about 80% of the world's construction workers. In recent years a strategy for qualitative occupational risk management, known as Control Banding (CB) has gained international attention as a simplified approach for reducing work-related risks. CB groups hazards into stratified risk 'bands', identifying commensurate controls to reduce the level of risk and promote worker health and safety. We review these qualitative solutions-based approaches and identify strengths and weaknesses toward designing a simplified CB 'toolbox' approach for use by SMEs in construction trades. This toolbox design proposal includes international input on multidisciplinary approaches for performing a qualitative risk assessment determining a risk 'band' for a given project. Risk bands are used to identify the appropriate level of training to oversee construction work, leading to commensurate and appropriate control methods to perform the work safely. The Construction Toolbox presents a review-generated format to harness multiple solutions-based national programs and publications for controlling construction-related risks with simplified approaches across the occupational safety, health and hygiene professions.

  12. Simplified models for dark matter face their consistent completions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Dorival; Machado, Pedro A. N.; No, Jose Miguel

    Simplified dark matter models have been recently advocated as a powerful tool to exploit the complementarity between dark matter direct detection, indirect detection and LHC experimental probes. Focusing on pseudoscalar mediators between the dark and visible sectors, we show that the simplified dark matter model phenomenology departs significantly from that of consistent SU(2)_L x U(1)_Y gauge-invariant completions. We discuss the key physics simplified models fail to capture, and its impact on LHC searches. Notably, we show that resonant mono-Z searches provide competitive sensitivities to standard mono-jet analyses at the 13 TeV LHC.

  13. Validation of Simplified Load Equations Through Loads Measurement and Modeling of a Small Horizontal-Axis Wind Turbine Tower

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana, Scott; Van Dam, Jeroen J; Damiani, Rick R

    As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, the National Renewable Energy Laboratory (NREL) tested a small horizontal-axis wind turbine in the field at the National Wind Technology Center. The test turbine was a 2.1-kW downwind machine mounted on an 18-m multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. In particular, we compared fatigue loads as measured in the field, predicted by the aeroelastic model, and calculated using the simplified design equations. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads, and these discrepancies are discussed.

  14. Simplified aerosol modeling for variational data assimilation

    NASA Astrophysics Data System (ADS)

    Huneeus, N.; Boucher, O.; Chevallier, F.

    2009-11-01

    We have developed a simplified aerosol model together with its tangent linear and adjoint versions for the ultimate aim of optimizing global aerosol and aerosol precursor emissions using variational data assimilation. The model was derived from the general circulation model LMDz; it groups the 24 aerosol species simulated in LMDz into 4 species, namely gaseous precursors, fine mode aerosols, coarse mode desert dust and coarse mode sea salt. The emissions have been kept as in the original model. Modifications, however, were introduced in the computation of aerosol optical depth and in the processes of sedimentation, dry and wet deposition and sulphur chemistry to ensure consistency with the new set of species and their composition. The simplified model successfully reproduces the main features of the aerosol distribution in LMDz. The largest differences in aerosol load are observed for fine mode aerosols and gaseous precursors. Differences between the original and simplified models are mainly associated with the new deposition and sedimentation velocities consistent with the definition of species in the simplified model, and with the simplification of the sulphur chemistry. Furthermore, simulated aerosol optical depth remains within the variability of monthly AERONET observations for all aerosol types and all sites throughout most of the year. The largest differences are observed over sites with strong desert dust influence. In terms of daily aerosol variability, the model is less able to reproduce the observed variability of the AERONET data, with larger discrepancies at stations affected by industrial aerosols. The simplified model, however, closely follows the daily simulation from LMDz. Sensitivity analyses with the tangent linear version show that the simplified sulphur chemistry is the dominant process responsible for the strong non-linearity of the model.

  15. On the coverage of the pMSSM by simplified model results

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Waltenberger, Wolfgang

    2018-03-01

    We investigate to what extent the SUSY search results published by ATLAS and CMS in the context of simplified models actually cover the more realistic scenarios of a full model. Concretely, we work within the phenomenological MSSM (pMSSM) with 19 free parameters and compare the constraints obtained from SModelS v1.1.1 with those from the ATLAS pMSSM study in arXiv:1508.06608. We find that about 40-45% of the points excluded by ATLAS escape the currently available simplified model constraints. For these points we identify the most relevant topologies which are not tested by the current simplified model results. In particular, we find that topologies with asymmetric branches, including 3-jet signatures from gluino-squark associated production, could be important for improving the current constraining power of simplified model results. Furthermore, for a better coverage of light stops and sbottoms, constraints for decays via heavier neutralinos and charginos, which subsequently decay visibly to the lightest neutralino, are also needed.

  16. Recalibration of risk prediction models in a large multicenter cohort of admissions to adult, general critical care units in the United Kingdom.

    PubMed

    Harrison, David A; Brady, Anthony R; Parry, Gareth J; Carpenter, James R; Rowan, Kathy

    2006-05-01

    To assess the performance of published risk prediction models in common use in adult critical care in the United Kingdom and to recalibrate these models in a large representative database of critical care admissions. Prospective cohort study. A total of 163 adult general critical care units in England, Wales, and Northern Ireland, during the period of December 1995 to August 2003. A total of 231,930 admissions, of which 141,106 met inclusion criteria and had sufficient data recorded for all risk prediction models. None. The published versions of the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE II UK, APACHE III, Simplified Acute Physiology Score (SAPS) II, and Mortality Probability Models (MPM) II were evaluated for discrimination and calibration by means of a combination of appropriate statistical measures recommended by an expert steering committee. All models showed good discrimination (the c index varied from 0.803 to 0.832) but imperfect calibration. Recalibration of the models, which was performed by both the Cox method and re-estimating coefficients, led to improved discrimination and calibration, although all models still showed significant departures from perfect calibration. Risk prediction models developed in another country require validation and recalibration before being used to provide risk-adjusted outcomes within a new country setting. Periodic reassessment is beneficial to ensure calibration is maintained.
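    Recalibration "by the Cox method", as used above, keeps a model's discrimination but refits an intercept and slope on the logit of the original predicted risk: p_new = sigmoid(a + b * logit(p_old)). The sketch below uses illustrative values for a and b; in practice they are estimated by logistic regression of observed outcomes on logit(p_old) in the new cohort.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def recalibrate(p_old, intercept, slope):
    """Cox-style linear recalibration on the logit scale."""
    return sigmoid(intercept + slope * logit(p_old))

# A model that systematically over-predicts risk in the new setting can
# be pulled down with a negative intercept (values are illustrative).
p_new = recalibrate(0.30, intercept=-0.5, slope=1.0)
```

    Because the transformation is monotone, the c index (discrimination) is unchanged; only calibration improves, which matches the pattern reported in the study.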

  17. Multivariate hydrological frequency analysis for extreme events using Archimedean copula. Case study: Lower Tunjuelo River basin (Colombia)

    NASA Astrophysics Data System (ADS)

    Gómez, Wilmar

    2017-04-01

    By analyzing the spatial and temporal variability of extreme precipitation events, the associated threat and risk can be reduced. Many water resources projects require joint probability distributions of random variables such as precipitation intensity and duration, which cannot be assumed independent of each other. The problem of defining a probability model for observations of several dependent variables is greatly simplified by expressing the joint distribution in terms of the marginals through copulas. This document presents a general framework for bivariate and multivariate frequency analysis using Archimedean copulas for extreme events of a hydroclimatological nature such as severe storms. This analysis was conducted in the lower Tunjuelo River basin in Colombia for precipitation events. The results obtained show that for a joint study of intensity-duration-frequency, IDF curves can be obtained through copulas, yielding more accurate and reliable information on design storms and associated risks. They also show how the use of copulas greatly simplifies the study of multivariate distributions and supports the concept of the joint return period, used to properly represent the needs of hydrological designs in frequency analysis.
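    The joint-return-period idea can be sketched with a Gumbel-Hougaard copula, a standard member of the Archimedean family: given marginal non-exceedance probabilities u and v for intensity and duration, the "AND" return period is driven by P(U > u and V > v). The dependence parameter theta below is illustrative, not fitted to the Tunjuelo data.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1, theta = 1 gives
    independence (C = u * v)."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND' return period (years, for mu events/year) of exceeding
    both thresholds: T = mu / P(U > u and V > v)."""
    p_exceed_both = 1.0 - u - v + gumbel_copula(u, v, theta)
    return mu / p_exceed_both

# Both marginals at their 100-year level, with positive dependence:
T_dep = joint_return_period_and(u=0.99, v=0.99, theta=2.0)
```

    With theta = 1 (independence) the same thresholds give a far longer joint return period, which is exactly why ignoring intensity-duration dependence can badly misstate design risk.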

  18. Medical Decision-Making Among Elderly People in Long Term Care.

    ERIC Educational Resources Information Center

    Tymchuk, Alexander J.; And Others

    1988-01-01

    Presented informed consent information on high and low risk medical procedures to elderly persons in long term care facility in standard, simplified, or storybook format. Comprehension was significantly better for simplified and storybook formats. Ratings of decision-making ability approximated comprehension test results. Comprehension test…

  19. Simplified method for numerical modeling of fiber lasers.

    PubMed

    Shtyrina, O V; Yarutkina, I A; Fedoruk, M P

    2014-12-29

    A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
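    The iteration idea can be sketched as a fixed-point search: treat one round trip of the cavity as a map on the pulse parameters and iterate it until they reproduce themselves, which is the periodic (steady-state) solution. The round-trip map below is a toy saturable-gain-plus-loss stand-in, not the ODE system of the paper.

```python
def round_trip(energy, gain=4.0, saturation=1.0, loss=0.5):
    # Toy cavity map: saturable amplification followed by a fixed
    # output-coupling loss (all parameters illustrative).
    return loss * gain * energy / (1.0 + energy / saturation)

def find_periodic_solution(e0=0.1, tol=1e-12, max_iter=1000):
    """Iterate the round-trip map to a fixed point."""
    e = e0
    for _ in range(max_iter):
        e_next = round_trip(e)
        if abs(e_next - e) < tol:
            return e_next
        e = e_next
    raise RuntimeError("iteration did not converge")

e_star = find_periodic_solution()
```

    In the paper the "energy" is replaced by the full set of dissipative soliton characteristics, and the map by integrating the intra-cavity ODEs over one period.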

  20. Uncertainty Analysis of Air Radiation for Lunar Return Shock Layers

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Johnston, Christopher O.

    2008-01-01

    By leveraging a new uncertainty markup technique, two risk analysis methods are used to compute the uncertainty of lunar-return shock layer radiation predicted by the High temperature Aerothermodynamic Radiation Algorithm (HARA). The effects of epistemic uncertainty, or uncertainty due to a lack of knowledge, are considered for the following modeling parameters: atomic line oscillator strengths, atomic line Stark broadening widths, atomic photoionization cross sections, negative ion photodetachment cross sections, molecular band oscillator strengths, and electron impact excitation rates. First, a simplified shock layer problem consisting of two constant-property equilibrium layers is considered. The results of this simplified problem show that the atomic nitrogen oscillator strengths and Stark broadening widths in both the vacuum ultraviolet and infrared spectral regions, along with the negative ion continuum, are the dominant uncertainty contributors. Next, three variable-property stagnation-line shock layer cases are analyzed: a typical lunar return case and two Fire II cases. For the near-equilibrium lunar return and Fire II 1643-second cases, the resulting uncertainties are very similar to the simplified case. Conversely, the relatively nonequilibrium 1636-second case shows significantly larger influence from electron impact excitation rates of both atoms and molecules. For all cases, the total uncertainty in radiative heat flux to the wall due to epistemic uncertainty in modeling parameters is 30%, as opposed to the erroneously small uncertainty levels (plus or minus 6%) found when treating model parameter uncertainties as aleatory (due to chance) instead of epistemic (due to lack of knowledge).

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA's new phase change model. This memo briefly describes the model, implementation, and validation.

  2. Source-term development for a contaminant plume for use by multimedia risk assessment models

    NASA Astrophysics Data System (ADS)

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    2000-02-01

    Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.

  3. Measuring the Risk of Shortfalls in Air Force Capabilities

    DTIC Science & Technology

    2004-03-01

    quantifying risk and simplifying that quantification in a risk measure is to order different risks and, ultimately, to choose between them. The...the analytic goal of understanding and quantifying risk. The growth in information technology, and the amount of data collected on, for example

  4. Estimates of present and future flood risk in the conterminous United States

    NASA Astrophysics Data System (ADS)

    Wing, Oliver E. J.; Bates, Paul D.; Smith, Andrew M.; Sampson, Christopher C.; Johnson, Kris A.; Fargione, Joseph; Morefield, Philip

    2018-03-01

    Past attempts to estimate rainfall-driven flood risk across the US either have incomplete coverage, coarse resolution or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US with a 2D representation of flood physics to produce estimates of flood hazard, which match to within 90% accuracy the skill of local models built with detailed data. These flood depths are combined with exposure datasets of commensurate resolution to calculate current and future flood risk. Our data show that the total US population exposed to serious flooding is 2.6-3.1 times higher than previous estimates, and that nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps). We find that population and GDP growth alone are expected to lead to significant future increases in exposure, and this change may be exacerbated in the future by climate change.

  5. Probabilistic framework for product design optimization and risk management

    NASA Astrophysics Data System (ADS)

    Keski-Rahkonen, J. K.

    2018-05-01

    Probabilistic methods have gradually gained ground within engineering practices but currently it is still the industry standard to use deterministic safety margin approaches to dimensioning components and qualitative methods to manage product risks. These methods are suitable for baseline design work but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and furthermore to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. Recommended process applies Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it adds on existing literature by introducing a practical framework to use probabilistic models in quantitative risk management and product life cycle costs optimization. The main focus is on mechanical failure modes due to the well-developed methods used to predict these types of failures. However, the same framework can be applied on any type of failure mode as long as predictive models can be developed.
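    The recommended Monte Carlo-on-load-resistance step can be sketched in a few lines. This is a minimal illustration under assumed distributions, not the paper's implementation: load and resistance are taken as independent normals with invented parameters, and the closed-form Phi(-beta) result is included only as a cross-check valid for that special case.

```python
import math
import random

def mc_failure_probability(mu_r, sig_r, mu_s, sig_s, n=200_000, seed=1):
    """Monte Carlo estimate of P(load S exceeds resistance R),
    with R and S sampled as independent normals (assumption)."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(mu_s, sig_s) > rng.gauss(mu_r, sig_r)
    )
    return failures / n

def exact_failure_probability(mu_r, sig_r, mu_s, sig_s):
    """Closed-form check for independent normal R and S."""
    beta = (mu_r - mu_s) / math.hypot(sig_r, sig_s)   # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2))        # Phi(-beta)

p_mc = mc_failure_probability(30.0, 3.0, 20.0, 2.0)
p_ex = exact_failure_probability(30.0, 3.0, 20.0, 2.0)
```

    In the quantitative risk-management framework the paper proposes, the estimated failure probability would then feed a life-cycle cost model rather than stand alone.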

  6. A simplified dynamic model of the T700 turboshaft engine

    NASA Technical Reports Server (NTRS)

    Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.

    1992-01-01

    A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real time diagnostic scheme for fault detection and diagnostics, as well as for open loop engine dynamics studies and closed loop control analysis utilizing a user generated control law.
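    The linking of point linear models can be illustrated by interpolating identified state matrices between operating points against a scheduling parameter. The matrices below are invented placeholders, not identified T700 dynamics; the sketch shows only the blending mechanism.

```python
# Hypothetical sketch: blend two locally identified linear models
# x' = A x + B u by linear interpolation on a normalized operating-point
# parameter w in [0, 1] (e.g., power setting). Matrix values are invented.

def lerp_matrix(M0, M1, w):
    """Elementwise linear interpolation between two matrices."""
    return [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
            for r0, r1 in zip(M0, M1)]

A_low  = [[-1.0, 0.2], [0.0, -2.0]]   # dynamics near low power (assumed)
A_high = [[-1.5, 0.1], [0.0, -3.0]]   # dynamics near high power (assumed)

A_mid = lerp_matrix(A_low, A_high, 0.5)  # model halfway between the points
```

    A real gain-scheduled model would interpolate the B, C, and D matrices and the trim point the same way, and select w from a measured engine variable.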

  7. Improvement on a simplified model for protein folding simulation.

    PubMed

    Zhang, Ming; Chen, Changjun; He, Yi; Xiao, Yi

    2005-11-01

Improvements were made on a simplified protein model--the Ramachandran model--to achieve better computer simulation of protein folding. To check the validity of such improvements, we chose the ultrafast folding protein Engrailed Homeodomain as an example and explored several aspects of its folding. The Engrailed Homeodomain is a mainly alpha-helical protein of 61 residues from Drosophila melanogaster. We found that the simplified model of Engrailed Homeodomain can fold into a global minimum state with a tertiary structure in good agreement with its native structure.

  8. Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods

    NASA Technical Reports Server (NTRS)

    Adams, G. F.

    1980-01-01

    The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.
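    The fall-off behaviour these simplified models describe can be sketched with the Lindemann form multiplied by a broadening factor F; the one-parameter expression for F below is the widely quoted simplified Troe form, with an illustrative Fc = 0.6 and arbitrary rate-constant values rather than data for any particular reaction.

```python
import math

def falloff_rate(k0, kinf, M, Fc=0.6):
    """Unimolecular rate constant between the low- and high-pressure
    limits: Lindemann expression times the simplified Troe broadening
    factor F. Parameter values here are illustrative only."""
    Pr = k0 * M / kinf                       # reduced pressure
    lindemann = kinf * Pr / (1.0 + Pr)
    log_F = math.log10(Fc) / (1.0 + math.log10(Pr) ** 2)
    return lindemann * 10.0 ** log_F

# Limiting behaviour: k -> k0*[M] at low pressure, k -> kinf at high
# pressure, and maximal broadening (F = Fc) at the centre, Pr = 1.
k_centre = falloff_rate(1.0, 100.0, 100.0)   # Pr = 1 -> 50 * 0.6 = 30.0
```

    The comparison the abstract describes amounts to evaluating such simplified expressions against full RRKM calculations over the same pressure range.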

  9. TMDL RUSLE MODEL

    EPA Science Inventory

    We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...

  10. Validation of Simplified Load Equations through Loads Measurement and Modeling of a Small Horizontal-Axis Wind Turbine Tower; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana, S.; Damiani, R.; vanDam, J.

As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, NREL tested a small horizontal axis wind turbine in the field at the National Wind Technology Center (NWTC). The test turbine was a 2.1-kW downwind machine mounted on an 18-meter multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads. In this project, we compared fatigue loads as measured in the field, as predicted by the aeroelastic model, and as calculated using the simplified design equations.

  11. Experimental validation of finite element modelling of a modular metal-on-polyethylene total hip replacement.

    PubMed

    Hua, Xijin; Wang, Ling; Al-Hajjar, Mazen; Jin, Zhongmin; Wilcox, Ruth K; Fisher, John

    2014-07-01

    Finite element models are becoming increasingly useful tools to conduct parametric analysis, design optimisation and pre-clinical testing for hip joint replacements. However, the verification of the finite element model is critically important. The purposes of this study were to develop a three-dimensional anatomic finite element model for a modular metal-on-polyethylene total hip replacement for predicting its contact mechanics and to conduct experimental validation for a simple finite element model which was simplified from the anatomic finite element model. An anatomic modular metal-on-polyethylene total hip replacement model (anatomic model) was first developed and then simplified with reasonable accuracy to a simple modular total hip replacement model (simplified model) for validation. The contact areas on the articulating surface of three polyethylene liners of modular metal-on-polyethylene total hip replacement bearings with different clearances were measured experimentally in the Leeds ProSim hip joint simulator under a series of loading conditions and different cup inclination angles. The contact areas predicted from the simplified model were then compared with that measured experimentally under the same conditions. The results showed that the simplification made for the anatomic model did not change the predictions of contact mechanics of the modular metal-on-polyethylene total hip replacement substantially (less than 12% for contact stresses and contact areas). Good agreements of contact areas between the finite element predictions from the simplified model and experimental measurements were obtained, with maximum difference of 14% across all conditions considered. This indicated that the simplification and assumptions made in the anatomic model were reasonable and the finite element predictions from the simplified model were valid. © IMechE 2014.

  12. The cardiovascular event reduction tool (CERT)--a simplified cardiac risk prediction model developed from the West of Scotland Coronary Prevention Study (WOSCOPS).

    PubMed

    L'Italien, G; Ford, I; Norrie, J; LaPuerta, P; Ehreth, J; Jackson, J; Shepherd, J

    2000-03-15

The clinical decision to treat hypercholesterolemia is premised on an awareness of patient risk, and cardiac risk prediction models offer a practical means of determining such risk. However, these models are based on observational cohorts where estimates of the treatment benefit are largely inferred. The West of Scotland Coronary Prevention Study (WOSCOPS) provides an opportunity to develop a risk-benefit prediction model from the actual observed primary event reduction seen in the trial. Five-year Cox model risk estimates were derived from all WOSCOPS subjects (n = 6,595 men, aged 45 to 64 years at baseline) using factors previously shown to be predictive of definite fatal coronary heart disease or nonfatal myocardial infarction. Model risk factors included age, diastolic blood pressure, total cholesterol/high-density lipoprotein ratio (TC/HDL), current smoking, diabetes, family history of fatal coronary heart disease, nitrate use or angina, and treatment (placebo/40-mg pravastatin). All risk factors were expressed as categorical variables to facilitate risk assessment. Risk estimates were incorporated into a simple, hand-held slide rule or risk tool. Risk estimates were identified for 5-year age bands (45 to 65 years), 4 categories of TC/HDL ratio (<5.5, 5.5 to <6.5, 6.5 to <7.5, > or = 7.5), 2 levels of diastolic blood pressure (<90, > or = 90 mm Hg), from 0 to 3 additional risk factors (current smoking, diabetes, family history of premature fatal coronary heart disease, nitrate use or angina), and pravastatin treatment. Five-year risk estimates ranged from 2% in very low-risk subjects to 61% in the very high-risk subjects. Risk reduction due to pravastatin treatment averaged 31%. Thus, the Cardiovascular Event Reduction Tool (CERT) is a risk prediction model derived from the WOSCOPS trial. Its use will help physicians identify patients who will benefit from cholesterol reduction.
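    The mechanics of a categorical slide-rule score like CERT can be sketched as a point lookup over risk-factor categories. The point values, categories, and risk bands below are invented for the example; the actual WOSCOPS-derived estimates live in the published tool, not here.

```python
# Illustrative sketch of a categorical risk-score lookup in the spirit
# of CERT. All point values below are invented, NOT the WOSCOPS numbers.

POINTS = {
    "age_band":  {"45-49": 0, "50-54": 1, "55-59": 2, "60-64": 3},
    "tc_hdl":    {"<5.5": 0, "5.5-6.5": 1, "6.5-7.5": 2, ">=7.5": 3},
    "dbp_ge_90": {False: 0, True: 1},
}

def score(profile, extra_risk_factors=0):
    """Sum categorical points, plus one point per additional risk factor
    (smoking, diabetes, family history, nitrate use/angina)."""
    total = sum(POINTS[factor][category] for factor, category in profile.items())
    return total + extra_risk_factors

s = score({"age_band": "60-64", "tc_hdl": ">=7.5", "dbp_ge_90": True},
          extra_risk_factors=2)   # -> 9
```

    A final table would then map the total score (with and without treatment) to a 5-year event probability, which is exactly what makes the tool usable without a calculator.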

  13. Simplified Models for the Study of Postbuckled Hat-Stiffened Composite Panels

    NASA Technical Reports Server (NTRS)

    Vescovini, Riccardo; Davila, Carlos G.; Bisagni, Chiara

    2012-01-01

    The postbuckling response and failure of multistringer stiffened panels is analyzed using models with three levels of approximation. The first model uses a relatively coarse mesh to capture the global postbuckling response of a five-stringer panel. The second model can predict the nonlinear response as well as the debonding and crippling failure mechanisms in a single stringer compression specimen (SSCS). The third model consists of a simplified version of the SSCS that is designed to minimize the computational effort. The simplified model is well-suited to perform sensitivity analyses for studying the phenomena that lead to structural collapse. In particular, the simplified model is used to obtain a deeper understanding of the role played by geometric and material modeling parameters such as mesh size, inter-laminar strength, fracture toughness, and fracture mode mixity. Finally, a global/local damage analysis method is proposed in which a detailed local model is used to scan the global model to identify the locations that are most critical for damage tolerance.

  14. Less-simplified models of dark matter for direct detection and the LHC

    NASA Astrophysics Data System (ADS)

    Choudhury, Arghya; Kowalska, Kamila; Roszkowski, Leszek; Sessolo, Enrico Maria; Williams, Andrew J.

    2016-04-01

    We construct models of dark matter with suppressed spin-independent scattering cross section utilizing the existing simplified model framework. Even simple combinations of simplified models can exhibit interference effects that cause the tree level contribution to the scattering cross section to vanish, thus demonstrating that direct detection limits on simplified models are not robust when embedded in a more complicated and realistic framework. In general for fermionic WIMP masses ≳ 10 GeV direct detection limits on the spin-independent scattering cross section are much stronger than those coming from the LHC. However these model combinations, which we call less-simplified models, represent situations where LHC searches become more competitive than direct detection experiments even for moderate dark matter mass. We show that a complementary use of several searches at the LHC can strongly constrain the direct detection blind spots by setting limits on the coupling constants and mediators' mass. We derive the strongest limits for combinations of vector + scalar, vector + "squark", and "squark" + scalar mediator, and present the corresponding projections for the LHC 14 TeV for a number of searches: mono-jet, jets + missing energy, and searches for heavy vector resonances.

  15. Can Pediatric Hypertension Criteria Be Simplified? A Prediction Analysis of Subclinical Cardiovascular Outcomes From the Bogalusa Heart Study.

    PubMed

    Xi, Bo; Zhang, Tao; Li, Shengxu; Harville, Emily; Bazzano, Lydia; He, Jiang; Chen, Wei

    2017-04-01

Prehypertension and hypertension in childhood are defined by sex-, age-, and height-specific 90th (or ≥120/80 mm Hg) and 95th percentiles of blood pressure, respectively, by the 2004 Fourth Report. However, these cutoffs are complex and cumbersome to use. This study assessed the performance of a simplified blood pressure definition to predict adult hypertension and subclinical cardiovascular disease. The cohort consisted of 1225 adults (530 men; aged 26.3-47.7 years) from the Bogalusa Heart Study with 27.1-year follow-up since childhood. We used 110/70 and 120/80 mm Hg for children (age, 6-11 years), and 120/80 and 130/85 mm Hg for adolescents (age, 12-17 years) as the simplified definition of childhood prehypertension and hypertension, respectively, to compare with the 2004 Fourth Report (the complex definition). Adult carotid intima-media thickness, pulse wave velocity, and left ventricular mass were measured using digital ultrasound instruments. Compared with normal blood pressure, childhood hypertensives diagnosed by the simplified definition and the complex definition were both at higher risk of adult hypertension with hazard ratio of 3.1 (95% confidence interval, 1.8-5.3) by the simplified definition and 3.2 (2.0-5.0) by the complex definition, high pulse wave velocity with 3.5 (1.7-7.1) and 2.2 (1.2-4.1), high carotid intima-media thickness with 3.1 (1.7-5.6) and 2.0 (1.2-3.6), and left ventricular hypertrophy with 3.4 (1.7-6.8) and 3.0 (1.6-5.6). The results were confirmed by reclassification or receiver operating curve analyses. The simplified childhood blood pressure definition predicts the risk of adult hypertension and subclinical cardiovascular disease as well as the complex definition does, which could be useful for screening hypertensive children to reduce risk of adult cardiovascular disease. © 2017 American Heart Association, Inc.
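    The simplified cutoffs stated above are simple enough to express directly in code, which is exactly the point of the simplification. A sketch using the thresholds from the abstract (110/70 and 120/80 mm Hg for ages 6-11; 120/80 and 130/85 mm Hg for ages 12-17):

```python
# Sketch of the simplified pediatric blood-pressure definition from the
# abstract: fixed cutoffs by age group, replacing sex/age/height percentiles.

def classify_bp(age, systolic, diastolic):
    """Classify a child's blood pressure using the simplified cutoffs."""
    if 6 <= age <= 11:
        pre, hyp = (110, 70), (120, 80)
    elif 12 <= age <= 17:
        pre, hyp = (120, 80), (130, 85)
    else:
        raise ValueError("simplified definition covers ages 6-17 only")
    if systolic >= hyp[0] or diastolic >= hyp[1]:
        return "hypertension"
    if systolic >= pre[0] or diastolic >= pre[1]:
        return "prehypertension"
    return "normal"
```

    By contrast, the complex 2004 Fourth Report definition requires percentile tables indexed by sex, age, and height, which is the usability gap the study quantifies.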

  16. Simplification of a scoring system maintained overall accuracy but decreased the proportion classified as low risk.

    PubMed

    Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul

    2016-01-01

Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors, or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods--giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores. Copyright © 2016 Elsevier Inc. All rights reserved.
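    Simplification method (1), replacing per-predictor weights with equal weights, can be sketched as follows. The predictors and point values below are invented stand-ins, not the actual EDACS items or weights:

```python
# Illustrative sketch of score simplification by equal weighting.
# Predictor names and weights are invented, NOT the real EDACS content.

WEIGHTED = {"age_band": 4, "male_sex": 6, "diaphoresis": 3, "pain_radiation": 5}

def weighted_score(flags):
    """Original-style score: sum the weight of each positive predictor."""
    return sum(w for name, w in WEIGHTED.items() if flags.get(name))

def equal_weight_score(flags):
    """Simplified score: one point per positive predictor."""
    return sum(1 for name in WEIGHTED if flags.get(name))

flags = {"male_sex": True, "diaphoresis": True}
w = weighted_score(flags)        # 6 + 3 = 9
e = equal_weight_score(flags)    # 2
```

    The study's finding is that such equal-weight variants preserve overall discrimination (AUC) but shift the score distribution, so the cut-off that keeps 99% sensitivity classifies fewer patients as low risk.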

  17. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.

    This paper presents the current status of simplified wind turbine models used for power system stability analysis. This work is based on the ongoing work being developed in IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, is focused on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models to conduct power system simulations.

  18. Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage

    DOE PAGES

    Zheng, Liange; Carroll, Susan; Bianchi, Marco; ...

    2014-12-31

A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by the leakage of CO₂ and brine from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highly efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar, but simplified flow and transport conditions, and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts to groundwater aquifers, the basic risk metric is taken as the aquifer volume in which the water quality of the aquifer may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume taking into account uncertainties in flow, transport and chemical conditions. These two ROMs can be linked in a comprehensive system level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts to assess the collective risk of CO₂ storage projects.

  19. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
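    The core idea, fit a pool of candidate terms and drop those that do not earn their keep, can be sketched with ordinary least squares. The stopping criterion below (change in training RMSE) is a deliberately crude stand-in for the statistical quality requirements the real algorithm checks; the data are invented.

```python
# Minimal sketch of regression-model term reduction: fit candidate basis
# terms by least squares, then compare the residual error with and without
# a term. The RMSE criterion is a stand-in for proper statistical tests.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_rmse(terms, xs, ys):
    """Least-squares fit of ys on the given basis terms; return RMSE."""
    X = [[t(x) for t in terms] for x in xs]
    n = len(terms)
    A = [[sum(row[j] * row[k] for row in X) for k in range(n)] for j in range(n)]
    b = [sum(row[j] * y for row, y in zip(X, ys)) for j in range(n)]
    c = solve(A, b)
    res = [y - sum(ci * xi for ci, xi in zip(c, row)) for row, y in zip(X, ys)]
    return (sum(r * r for r in res) / len(xs)) ** 0.5

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.9, 8.1, 10.9, 14.1]            # roughly y = 2 + 3x (invented)
terms = [lambda x: 1.0, lambda x: x, lambda x: x * x]
full  = fit_rmse(terms, xs, ys)             # constant + x + x^2
no_sq = fit_rmse(terms[:2], xs, ys)         # drop x^2: error barely changes
no_x  = fit_rmse([terms[0]], xs, ys)        # drop x too: error inflates
```

    Dropping the x² term leaves the fit essentially unchanged, so a search algorithm would discard it to prevent overfitting, while dropping x clearly degrades the model and the term is retained.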

  20. 29 CFR 2520.104-48 - Alternative method of compliance for model simplified employee pensions-IRS Form 5305-SEP.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...

  1. Combining exposure and effect modeling into an integrated probabilistic environmental risk assessment for nanoparticles.

    PubMed

    Jacobs, Rianne; Meesters, Johannes A J; Ter Braak, Cajo J F; van de Meent, Dik; van der Voet, Hilko

    2016-12-01

There is a growing need for good environmental risk assessment of engineered nanoparticles (ENPs). Environmental risk assessment of ENPs has been hampered by lack of data and knowledge about ENPs, their environmental fate, and their toxicity. This leads to uncertainty in the risk assessment. To deal with uncertainty in the risk assessment effectively, probabilistic methods are advantageous. In the present study, the authors developed a method to model both the variability and the uncertainty in environmental risk assessment of ENPs. This method is based on the concentration ratio, that is, the ratio of the exposure concentration to the critical effect concentration, with both concentrations treated as random. In this method, variability and uncertainty are modeled separately so as to allow the user to see which part of the total variation in the concentration ratio is attributable to uncertainty and which part is attributable to variability. The authors illustrate the use of the method with a simplified aquatic risk assessment of nano-titanium dioxide. The authors' method allows a more transparent risk assessment and can also direct further environmental and toxicological research to the areas in which it is most needed. Environ Toxicol Chem 2016;35:2958-2967. © 2016 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
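    The separation of uncertainty from variability is commonly implemented as a two-dimensional Monte Carlo: an outer loop samples the uncertain parameters, an inner loop samples the variable ones, and the result is a distribution of risk values rather than a single number. A hedged sketch with entirely invented lognormal-style distributions, not the paper's nano-TiO₂ inputs:

```python
import random
import statistics

# Two-dimensional Monte Carlo sketch: outer loop = uncertainty (imperfect
# knowledge of distribution parameters), inner loop = variability (real
# spread across environments). All distributions below are invented.

def risk_distribution(n_unc=200, n_var=500, seed=7):
    rng = random.Random(seed)
    risks = []
    for _ in range(n_unc):
        # uncertainty: the true mean log-exposure / log-effect are unknown
        mu_exp = rng.gauss(-1.0, 0.3)
        mu_eff = rng.gauss(0.5, 0.3)
        # variability: spread around those means across environments
        exceed = sum(
            1 for _ in range(n_var)
            if rng.gauss(mu_exp, 0.5) > rng.gauss(mu_eff, 0.5)
        )
        risks.append(exceed / n_var)   # P(concentration ratio > 1)
    return risks

risks = risk_distribution()
median_risk = statistics.median(risks)
```

    The spread of `risks` across the outer loop shows how much of the total variation is attributable to uncertainty, which is the diagnostic the method is designed to expose.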

  2. Statistical Issues for Calculating Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark; Bacon, John

    2016-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering object to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations how to improve the accuracy of these calculations in the future.
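    The baseline calculation these tools perform reduces, in its simplest form, to an expected-casualty sum: each surviving fragment contributes its casualty area times the population density under its footprint. The fragment areas and density below are illustrative values, not results from any reentry analysis.

```python
# Simple sketch of the standard casualty-expectation calculation the paper
# re-examines: E[casualties] = sum over fragments of (casualty area in m^2)
# x (population density in people/m^2 under the footprint). Values invented.

def expected_casualties(casualty_areas_m2, pop_density_per_m2):
    """Expected number of casualties for one reentry event."""
    return sum(a * pop_density_per_m2 for a in casualty_areas_m2)

# three surviving fragments over a suburban density of ~3e-4 people/m^2
ec = expected_casualties([8.0, 2.5, 1.5], 3e-4)   # -> 0.0036
```

    The assumptions the paper scrutinizes enter through exactly these inputs: how the footprint is distributed over latitude/longitude, and how population density is averaged along the ground track.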

  3. A simplified parsimonious higher order multivariate Markov chain model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.

  4. Predictors of incident heart failure in patients after an acute coronary syndrome: The LIPID heart failure risk-prediction model.

    PubMed

    Driscoll, Andrea; Barnes, Elizabeth H; Blankenberg, Stefan; Colquhoun, David M; Hunt, David; Nestel, Paul J; Stewart, Ralph A; West, Malcolm J; White, Harvey D; Simes, John; Tonkin, Andrew

    2017-12-01

Coronary heart disease is a major cause of heart failure. Availability of risk-prediction models that include both clinical parameters and biomarkers is limited. We aimed to develop such a model for prediction of incident heart failure. A multivariable risk-factor model was developed for prediction of first occurrence of heart failure death or hospitalization. A simplified risk score was derived that enabled subjects to be grouped into categories of 5-year risk varying from <5% to >20%. Among 7101 patients from the LIPID study (84% male), with median age 61 years (interquartile range 55-67 years), 558 (8%) died or were hospitalized because of heart failure. Older age, history of claudication or diabetes mellitus, body mass index >30 kg/m², LDL-cholesterol >2.5 mmol/L, heart rate >70 beats/min, white blood cell count, and the nature of the qualifying acute coronary syndrome (myocardial infarction or unstable angina) were associated with an increase in heart failure events. Coronary revascularization was associated with a lower event rate. Incident heart failure increased with higher concentrations of B-type natriuretic peptide >50 ng/L, cystatin C >0.93 nmol/L, D-dimer >273 nmol/L, high-sensitivity C-reactive protein >4.8 nmol/L, and sensitive troponin I >0.018 μg/L. Addition of biomarkers to the clinical risk model improved the model's C statistic from 0.73 to 0.77. The net reclassification improvement incorporating biomarkers into the clinical model using categories of 5-year risk was 23%. Adding a multibiomarker panel to conventional parameters markedly improved discrimination and risk classification for future heart failure events. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  5. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  6. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 1. lithium concentration estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

To guarantee the safety, high efficiency and long lifetime of a lithium-ion battery, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computational burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function remains computationally light and preserves physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and particle radius. The simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with 0.8% and 0.24% modeling error respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, this simplified model yields a significantly reduced computational burden, which benefits its real-time application.
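    The reason Padé approximation suits this kind of model reduction can be shown on a generic example rather than the battery's actual transcendental transfer function: matched to the same Taylor coefficients, a low-order rational function tracks the true response over a wider range than the polynomial truncation of the same order. Here the [1/1] Padé approximant of e^x is used purely as an illustration.

```python
import math

# Why a rational (Pade) approximant beats a same-order Taylor truncation:
# e^x ~ (1 + x/2) / (1 - x/2) matches the series through x^2, while the
# first-order Taylor polynomial 1 + x matches only through x^1.

def pade_11(x):
    """[1/1] Pade approximant of exp(x) (valid for x != 2)."""
    return (1 + x / 2) / (1 - x / 2)

x = 0.8
err_pade = abs(pade_11(x) - math.exp(x))     # ~0.11
err_taylor = abs((1 + x) - math.exp(x))      # ~0.43
```

    In the battery model, the same coefficient-matching step is applied to the transcendental impedance solution, yielding a low-order rational transfer function that is cheap to evaluate in real time.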

  7. Development and validation of risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team.

    PubMed

    Harrison, David A; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B; Gwinnutt, Carl; Nolan, Jerry P; Rowan, Kathryn M

    2014-08-01

    The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Risk models for two outcomes-return of spontaneous circulation (ROSC) for greater than 20min and survival to hospital discharge-were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration and accuracy. 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7791 validation). The final risk model for ROSC>20min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC>20min (c index 0.81 versus 0.72). Validated risk models for ROSC>20min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  8. Development and validation of risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team☆

    PubMed Central

    Harrison, David A.; Patel, Krishna; Nixon, Edel; Soar, Jasmeet; Smith, Gary B.; Gwinnutt, Carl; Nolan, Jerry P.; Rowan, Kathryn M.

    2014-01-01

    Aim The National Cardiac Arrest Audit (NCAA) is the UK national clinical audit for in-hospital cardiac arrest. To make fair comparisons among health care providers, clinical indicators require case mix adjustment using a validated risk model. The aim of this study was to develop and validate risk models to predict outcomes following in-hospital cardiac arrest attended by a hospital-based resuscitation team in UK hospitals. Methods Risk models for two outcomes—return of spontaneous circulation (ROSC) for greater than 20 min and survival to hospital discharge—were developed and validated using data for in-hospital cardiac arrests between April 2011 and March 2013. For each outcome, a full model was fitted and then simplified by testing for non-linearity, combining categories and stepwise reduction. Finally, interactions between predictors were considered. Models were assessed for discrimination, calibration and accuracy. Results 22,479 in-hospital cardiac arrests in 143 hospitals were included (14,688 development, 7791 validation). The final risk model for ROSC > 20 min included: age (non-linear), sex, prior length of stay in hospital, reason for attendance, location of arrest, presenting rhythm, and interactions between presenting rhythm and location of arrest. The model for hospital survival included the same predictors, excluding sex. Both models had acceptable performance across the range of measures, although discrimination for hospital mortality exceeded that for ROSC > 20 min (c index 0.81 versus 0.72). Conclusions Validated risk models for ROSC > 20 min and hospital survival following in-hospital cardiac arrest have been developed. These models will strengthen comparative reporting in NCAA and support local quality improvement. PMID:24830872

  9. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

A state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them, together with 8 environmental parameters, to build growth state observers. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracy of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified forms, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565

  10. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

A state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them, together with 8 environmental parameters, to build growth state observers. Support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracy of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA. The current study enables state observers to reflect crop requirements and makes them feasible for applications in simplified forms, which is significant for developing intelligent greenhouse control systems for modern greenhouse production.

  11. A novel simplified model for torsional vibration analysis of a series-parallel hybrid electric vehicle

    NASA Astrophysics Data System (ADS)

    Tang, Xiaolin; Yang, Wei; Hu, Xiaosong; Zhang, Dejiu

    2017-02-01

In this study, based on our previous work, a novel simplified torsional vibration dynamic model is established to study the torsional vibration characteristics of a compound planetary hybrid propulsion system. The main frequencies of the hybrid driveline are determined. In contrast to the previous 16-degree-of-freedom model, the simplified model accurately describes the low-frequency vibration behaviour of this hybrid powertrain. This study provides a basis for further vibration control of the hybrid powertrain during the process of engine start/stop.

  12. Equivalent model optimization with cyclic correction approximation method considering parasitic effect for thermoelectric coolers.

    PubMed

    Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi

    2017-11-21

As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Q_c is equal to 80% of the maximum achievable absorbed heat power Q_max. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
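
    For orientation, the conventional model the authors compare against is typically built on the textbook single-stage TEC heat balance. The sketch below uses that standard relation with invented parameter values; it is not the paper's cyclic correction method:

```python
# Textbook single-stage TEC heat balance (not the paper's corrected model):
#   Qc = S*Tc*I - 0.5*I^2*R - K*(Th - Tc)
S, R, K = 0.05, 2.0, 0.5   # Seebeck (V/K), resistance (ohm), conductance (W/K); illustrative
Th, I = 300.0, 3.0         # hot-side temperature (K), drive current (A); illustrative

def cold_side_temp(Q_load, S=S, R=R, K=K, Th=Th, I=I):
    """Solve Qc = Q_load for Tc (linear in Tc at fixed current)."""
    # Q_load = S*Tc*I - 0.5*I^2*R - K*(Th - Tc)
    #   =>  Tc * (S*I + K) = Q_load + 0.5*I^2*R + K*Th
    return (Q_load + 0.5 * I**2 * R + K * Th) / (S * I + K)

for q in (0.0, 5.0, 10.0):
    # the achievable temperature difference shrinks as the heat load grows
    print(q, round(Th - cold_side_temp(q), 1))
```

    The paper's contribution is, in effect, correcting the parameters of such a balance for inter-layer parasitic conductance.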

  13. A simplified approach to the pooled analysis of calibration of clinical prediction rules for systematic reviews of validation studies

    PubMed Central

    Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom

    2015-01-01

Objective Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published nor accessible, or when no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: (a) the ABCD2 rule for prediction of 7-day stroke; and (b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed from CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The same approach was also applied to the CRB-65 rule. Results Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs = 0.73–0.91, 95% CIs 0.41–1.48) could be corrected by intercept adjustment to account for differences in incidence. An improvement in both heterogeneity and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90–1.06), with narrower 95% CIs (0.57–1.41), were achieved. Conclusion Our results have an immediate clinical implication in situations where predicted outcomes in CPR validation studies are lacking or deficient, by showing how such predictions can be obtained from the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
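
    The pooled predicted:observed RRs described here follow standard inverse-variance meta-analysis on the log scale. A minimal fixed-effect sketch with invented per-study values (not the review's data):

```python
import math

# Inverse-variance fixed-effect pooling of predicted:observed risk ratios.
# The (RR, CI_low, CI_high) triples below are made-up stand-ins.
studies = [(0.80, 0.60, 1.07), (0.91, 0.70, 1.18), (0.73, 0.41, 1.30)]

w_sum, wx_sum = 0.0, 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log-RR from CI width
    w = 1.0 / se**2                                  # inverse-variance weight
    w_sum += w
    wx_sum += w * math.log(rr)

log_pooled = wx_sum / w_sum
pooled = math.exp(log_pooled)
ci_half = 1.96 / math.sqrt(w_sum)
print(round(pooled, 2),
      round(math.exp(log_pooled - ci_half), 2),
      round(math.exp(log_pooled + ci_half), 2))
```

    A random-effects version would additionally estimate between-study variance (e.g. DerSimonian-Laird) and inflate the weights' denominators accordingly.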

  14. 12 CFR 217.43 - Simplified supervisory formula approach (SSFA) and the gross-up approach.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 2 2014-01-01 2014-01-01 false Simplified supervisory formula approach (SSFA... risk weight of 100 percent represents a value of KG equal to 0.08). (2) Parameter W is expressed as a... underlying exposures of the securitization that meet any of the criteria as set forth in paragraphs (b)(2)(i...

  15. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.

    PubMed

    Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel

    2004-06-21

Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3% for the 18 and 14 mm helmets, and 10% for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.

  16. Influence of mass transfer resistance on overall nitrate removal rate in upflow sludge bed reactors.

    PubMed

    Ting, Wen-Huei; Huang, Ju-Sheng

    2006-09-01

A kinetic model with intrinsic reaction kinetics and a simplified model with apparent reaction kinetics for denitrification in upflow sludge bed (USB) reactors were proposed. USB-reactor performance data with and without sludge wasting were also obtained for model verification. An independent batch study showed that the apparent kinetic constant k′ did not differ from the intrinsic k, but the apparent Ks′ was significantly larger than the intrinsic Ks, suggesting that the intra-granule mass transfer resistance can be modeled by changes in Ks. Calculations of the overall effectiveness factor, Thiele modulus, and Biot number, combined with parametric sensitivity analysis, showed that the influence of internal mass transfer resistance on the overall nitrate removal rate in USB reactors is more significant than that of external mass transfer resistance. The simulated residual nitrate concentrations using the simplified model were in good agreement with the experimental data; the simulated results using the simplified model were also close to those using the kinetic model. Accordingly, the simplified model adequately described the overall nitrate removal rate and can be used for process design.
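
    The Thiele modulus / effectiveness factor reasoning in this abstract can be illustrated with the textbook first-order result for a spherical granule. The parameter values below are illustrative, not the study's:

```python
import math

def effectiveness_factor(phi):
    """First-order effectiveness factor for a sphere, with the Thiele
    modulus phi defined on the R/3 characteristic length (textbook form)."""
    if phi < 1e-6:
        return 1.0  # no diffusion limitation
    return (1.0 / phi) * (1.0 / math.tanh(3 * phi) - 1.0 / (3 * phi))

R = 1.0e-3   # granule radius, m (illustrative)
k = 5.0e-2   # apparent first-order rate constant, 1/s (illustrative)
De = 1.0e-9  # effective diffusivity, m^2/s (illustrative)

phi = (R / 3.0) * math.sqrt(k / De)
eta = effectiveness_factor(phi)
print(round(phi, 2), round(eta, 3))  # eta << 1: strong internal resistance
```

    An effectiveness factor well below 1, as here, is exactly the regime in which an apparent Ks′ absorbs the intra-granule diffusion resistance.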

  17. Performance and customization of 4 prognostic models for postoperative onset of nausea and vomiting in ear, nose, and throat surgery.

    PubMed

    Engel, Jörg M; Junger, Axel; Hartmann, Bernd; Little, Simon; Schnöbel, Rose; Mann, Valesco; Jost, Andreas; Welters, Ingeborg D; Hempelmann, Gunter

    2006-06-01

To evaluate the performance of 4 published prognostic models for postoperative onset of nausea and vomiting (PONV) by means of discrimination and calibration, and the possible impact of customization on these models. Prospective, observational study. Tertiary care university hospital. 748 adult patients (>18 years old) were enrolled in this study. Severe obesity (weight > 150 kg or body mass index > 40 kg/m²) was an exclusion criterion. All perioperative data were recorded with an anesthesia information management system. A standardized patient interview was performed on the postoperative morning and afternoon. Individual PONV risk was calculated using the 4 original regression equations by Koivuranta et al, Apfel et al, Sinclair et al, and Junger et al. Discrimination was assessed using receiver operating characteristic (ROC) curves. Calibration was tested using Hosmer-Lemeshow goodness-of-fit statistics. New predictive equations for the 4 models were derived by means of logistic regression (customization). The prognostic performance of the customized models was validated using the “leaving-one-out” technique. Postoperative onset of nausea and vomiting was observed in 11.2% of the specialized patient population. Discrimination was demonstrated by areas under the ROC curve of 0.62 for the Koivuranta et al model, 0.63 for the Apfel et al model, 0.70 for the Sinclair et al model, and 0.70 for the Junger et al model. Calibration was poor for all 4 original models, indicated by a P value lower than 0.01 in the C and H statistics. Customization improved the accuracy of the prediction for all 4 models. However, the simplified risk scores of the Koivuranta et al model and the Apfel et al model did not show the same efficiency as those of the Sinclair et al model and the Junger et al model, possibly a result of having relatively few patients at high risk for PONV in combination with an information loss caused by too few dichotomous variables in the simplified scores. The original models were not well validated in our study; antiemetic therapy based on the results of these scores therefore seems unsatisfactory. Customization improved the accuracy of the prediction in our specialized patient population, more so for the Sinclair et al model and the Junger et al model than for the Koivuranta et al model and the Apfel et al model.
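
    The Hosmer-Lemeshow C statistic used for the calibration tests above groups patients by deciles of predicted risk and compares observed with expected event counts. A sketch with synthetic risks and outcomes (requires NumPy and SciPy; not the study's data):

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y, p, groups=10):
    """HL C statistic: chi-square over groups formed from sorted predicted risk."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    chi2 = 0.0
    for gy, gp in zip(np.array_split(y, groups), np.array_split(p, groups)):
        obs, exp, n = gy.sum(), gp.sum(), len(gy)
        if 0 < exp < n:
            chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n))
    return chi2, stats.chi2.sf(chi2, groups - 2)   # p-value with g-2 dof

rng = np.random.default_rng(1)
p = rng.uniform(0.02, 0.4, size=1000)            # synthetic predicted PONV risks
y = (rng.uniform(size=1000) < p).astype(int)     # outcomes drawn from those risks
chi2, pval = hosmer_lemeshow(y, p)
print(round(chi2, 1), round(pval, 3))  # calibrated data: p-value usually not small
```

    A small p-value (as in the study's original models, P < 0.01) indicates predicted risks that systematically diverge from observed rates.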

  18. Assessing the seismic risk potential of South America

    USGS Publications Warehouse

    Jaiswal, Kishor; Petersen, Mark D.; Harmsen, Stephen; Smoczyk, Gregory M.

    2016-01-01

We present here a simplified approach to quantifying regional seismic risk. The seismic risk for a given region can be expressed as average annual loss (AAL), the long-term expected value of earthquake losses in any one year arising from long-term seismic hazard. AAL is commonly measured in terms of earthquake shaking-induced deaths, direct economic impacts, or indirect losses due to loss of functionality. In the context of the South American subcontinent, the analysis makes use of readily available public data on seismicity, population exposure, and the hazard and vulnerability models for the region. The seismic hazard model was derived using available seismic catalogs, fault databases, and hazard methodologies analogous to the U.S. Geological Survey’s national seismic hazard mapping process. The Prompt Assessment of Global Earthquakes for Response (PAGER) system’s direct empirical vulnerability functions, in terms of fatality and economic impact, were used for the exposure and risk analyses. The broad findings presented and the risk maps produced herein are preliminary, yet they offer important insights into the underlying zones of high and low seismic risk in the South American subcontinent. A more detailed analysis of risk may be warranted by engaging local experts, especially in some of the high-risk zones identified through the present investigation.
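
    AAL, in the sense used above, is the expected loss per year obtained by integrating a loss exceedance curve. A minimal sketch with invented numbers (not the study's estimates):

```python
# Average annual loss (AAL) from a loss exceedance curve:
# sum over hazard bands of (frequency of occurrence of the band) * (loss).
# Pairs are (annual exceedance frequency, loss in $M), rarest event last.
curve = [(0.10, 1.0), (0.02, 20.0), (0.004, 150.0), (0.0004, 900.0)]

aal = 0.0
for i, (freq, loss) in enumerate(curve):
    # occurrence frequency of this band = its exceedance frequency
    # minus the exceedance frequency of the next rarer band
    next_freq = curve[i + 1][0] if i + 1 < len(curve) else 0.0
    aal += (freq - next_freq) * loss

print(round(aal, 2))   # expected loss per year, $M
```

    Rare, large events can dominate the AAL even though they contribute nothing in most individual years, which is why AAL is a long-term average rather than a typical-year loss.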

  19. SUITABILITY OF USING IN VITRO AND COMPUTATIONALLY ESTIMATED PARAMETERS IN SIMPLIFIED PHARMACOKINETIC MODELS

    EPA Science Inventory

    A challenge in PBPK model development is estimating the parameters for absorption, distribution, metabolism, and excretion of the parent compound and metabolites of interest. One approach to reduce the number of parameters has been to simplify pharmacokinetic models by lumping p...

  20. A Regularized Deep Learning Approach for Clinical Risk Prediction of Acute Coronary Syndrome Using Electronic Health Records.

    PubMed

    Huang, Zhengxing; Dong, Wei; Duan, Huilong; Liu, Jiquan

    2018-05-01

Acute coronary syndrome (ACS), as a common and severe cardiovascular disease, is a leading cause of death and the principal cause of serious long-term disability globally. Clinical risk prediction of ACS is important for early intervention and treatment. Existing ACS risk scoring models are based mainly on a small set of hand-picked risk factors and often dichotomize predictive variables to simplify the score calculation. This study develops a regularized stacked denoising autoencoder (SDAE) model to stratify clinical risks of ACS patients from a large volume of electronic health records (EHR). To capture characteristics of patients at similar risk levels, and to preserve the discriminating information across different risk levels, two constraints are added to the SDAE to make the reconstructed feature representations contain more risk information about patients, which contributes to a better clinical risk prediction result. We validate our approach on a real clinical dataset consisting of 3464 ACS patient samples. The performance of our approach for predicting ACS risk remains robust, reaching 0.868 and 0.73 in terms of AUC and accuracy, respectively. The obtained results show that the proposed approach achieves a competitive performance compared to state-of-the-art models in dealing with the clinical risk prediction problem. In addition, our approach can extract informative risk factors of ACS via a reconstructive learning strategy. Some of these extracted risk factors are not only consistent with existing medical domain knowledge, but also contain suggestive hypotheses that could be validated by further investigations in the medical domain.
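
    The core training signal an SDAE stacks and regularizes is reconstruction of clean inputs from corrupted ones. A toy single-layer denoising autoencoder in pure NumPy (no stacking, no risk-level constraints; synthetic data, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 200, 16, 8                       # samples, input dim, hidden dim
X = rng.normal(size=(n, d))                # stand-in for EHR feature vectors

W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, d)); b2 = np.zeros(d)
lr = 0.05

def forward(Xn):
    H = 1.0 / (1.0 + np.exp(-(Xn @ W1 + b1)))   # sigmoid encoder
    return H, H @ W2 + b2                        # linear decoder

losses = []
for step in range(300):
    Xn = X + rng.normal(scale=0.3, size=X.shape)   # corrupt the input
    H, Rec = forward(Xn)
    err = Rec - X                                  # reconstruct the CLEAN input
    losses.append(float(np.mean(err ** 2)))
    # manual backprop of the mean-squared reconstruction error
    dR = 2 * err / n
    dW2, db2 = H.T @ dR, dR.sum(axis=0)
    dH = dR @ W2.T * H * (1 - H)
    dW1, db1 = Xn.T @ dH, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(round(losses[0], 3), round(losses[-1], 3))   # reconstruction loss drops
```

    The paper's model additionally stacks several such layers and adds two risk-level constraints on the learned representations; those are not reproduced here.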

  1. Introducing Risk Management Techniques Within Project Based Software Engineering Courses

    NASA Astrophysics Data System (ADS)

    Port, Daniel; Boehm, Barry

    2002-03-01

In 1996, USC switched its core two-semester software engineering course from a hypothetical-project, homework-and-exam format based on the Bloom taxonomy of educational objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation) to a real-client team-project course based on the CRESST model of learning objectives (content understanding, problem solving, collaboration, communication, and self-regulation). We used the CRESST cognitive demands analysis to determine the student skills required for software risk management and the other major project activities, and have been refining the approach over the last 5 years of experience, including revised versions for one-semester undergraduate and graduate project courses at Columbia. This paper summarizes our experiences in evolving the risk management aspects of the project course. These have helped us mature more general techniques such as risk-driven specifications, domain-specific simplifier and complicator lists, and the schedule-as-an-independent-variable (SAIV) process model. The largely positive results in terms of pass/fail rates, client evaluations, product adoption rates, and hiring-manager feedback are summarized as well.

  2. Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Richard Yorg

    2011-03-01

The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling; this was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario, so dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which are discussed in the full paper.

  3. Revisiting the direct detection of dark matter in simplified models

    NASA Astrophysics Data System (ADS)

    Li, Tong

    2018-07-01

    In this work we numerically re-examine the loop-induced WIMP-nucleon scattering cross section for the simplified dark matter models and the constraint set by the latest direct detection experiment. We consider a fermion, scalar or vector dark matter component from five simplified models with leptophobic spin-0 mediators coupled only to Standard Model quarks and dark matter particles. The tree-level WIMP-nucleon cross sections in these models are all momentum-suppressed. We calculate the non-suppressed spin-independent WIMP-nucleon cross sections from loop diagrams and investigate the constrained space of dark matter mass and mediator mass by Xenon1T. The constraints from indirect detection and collider search are also discussed.

  4. The influence of a wind tunnel on helicopter rotational noise: Formulation of analysis

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    An analytical model is discussed that can be used to examine the effects of wind tunnel walls on helicopter rotational noise. A complete physical model of an acoustic source in a wind tunnel is described and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. The simplified physical model is then modeled as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. Details of generating a suitable Green's function and integral equation are included and the equation is discussed and also given for a two-dimensional case.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majda, Andrew J.; Xing, Yulong; Mohammadian, Majid

Determining the finite-amplitude preconditioned states in the hurricane embryo, which lead to tropical cyclogenesis, is a central issue in contemporary meteorology. In the embryo there is competition between different preconditioning mechanisms involving hydrodynamics and moist thermodynamics, which can lead to cyclogenesis. Here systematic asymptotic methods from applied mathematics are utilized to develop new simplified moist multi-scale models starting from the moist anelastic equations. Three interesting multi-scale models emerge in the analysis. The balanced mesoscale vortex (BMV) dynamics and the microscale balanced hot tower (BHT) dynamics involve simplified balanced equations without gravity waves for vertical vorticity amplification due to moist heat sources and incorporate nonlinear advective fluxes across scales. The BMV model is the central one for tropical cyclogenesis in the embryo. The moist mesoscale wave (MMW) dynamics involves simplified equations for mesoscale moisture fluctuations, as well as linear hydrostatic waves driven by heat sources from moisture and eddy flux divergences. A simplified cloud physics model for deep convection is introduced here and used to study moist axisymmetric plumes in the BHT model. A simple application in periodic geometry involving the effects of mesoscale vertical shear and moist microscale hot towers on vortex amplification is developed here to illustrate features of the coupled multi-scale models. These results illustrate the use of these models in isolating key mechanisms in the embryo in a simplified context.

  6. Simplifying the interaction between cognitive models and task environments with the JSON Network Interface.

    PubMed

    Hope, Ryan M; Schoelles, Michael J; Gray, Wayne D

    2014-12-01

    Process models of cognition, written in architectures such as ACT-R and EPIC, should be able to interact with the same software with which human subjects interact. By eliminating the need to simulate the experiment, this approach would simplify the modeler's effort, while ensuring that all steps required of the human are also required by the model. In practice, the difficulties of allowing one software system to interact with another present a significant barrier to any modeler who is not also skilled at this type of programming. The barrier increases if the programming language used by the modeling software differs from that used by the experimental software. The JSON Network Interface simplifies this problem for ACT-R modelers, and potentially, modelers using other systems.
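
    A common way such a bridge works is newline-delimited JSON messages exchanged between the cognitive model and the task software. The sketch below illustrates that framing only; the message shapes are invented and are not the JSON Network Interface's actual protocol:

```python
import io
import json

# Newline-delimited JSON framing of the kind a model <-> task-environment
# bridge can use. Message field names here are hypothetical.
def send(stream, msg):
    stream.write(json.dumps(msg) + "\n")     # one JSON object per line

def recv_all(stream):
    return [json.loads(line) for line in stream if line.strip()]

# in practice the stream would be a TCP socket; a StringIO stands in here
wire = io.StringIO()
send(wire, {"type": "keypress", "key": "a", "time": 1.25})
send(wire, {"type": "display", "objects": [{"id": 1, "x": 40, "y": 80}]})
wire.seek(0)

for msg in recv_all(wire):
    print(msg["type"])
```

    Because both ends only need a JSON library and a socket, the model's language (e.g. Lisp for ACT-R) and the experiment's language can differ freely, which is the barrier the abstract describes removing.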

  7. A multiscale approach to modelling electrochemical processes occurring across the cell membrane with application to transmission of action potentials.

    PubMed

    Richardson, G

    2009-09-01

    By application of matched asymptotic expansions, a simplified partial differential equation (PDE) model for the dynamic electrochemical processes occurring in the vicinity of a membrane, as ions selectively permeate across it, is formally derived from the Poisson-Nernst-Planck equations of electrochemistry. It is demonstrated that this simplified model reduces itself, in the limit of a long thin axon, to the cable equation used by Hodgkin and Huxley to describe the propagation of action potentials in the unmyelinated squid giant axon. The asymptotic reduction from the simplified PDE model to the cable equation leads to insights that are not otherwise apparent; these include an explanation of why the squid giant axon attains a diameter in the region of 1 mm. The simplified PDE model has more general application than the Hodgkin-Huxley cable equation and can, e.g. be used to describe action potential propagation in myelinated axons and neuronal cell bodies.
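
    The cable equation referred to is, in standard textbook notation (symbols as commonly defined, not necessarily the paper's):

```latex
% Passive cable equation for membrane potential V(x, t):
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  = \tau_{m}\,\frac{\partial V}{\partial t} + V,
\qquad
\lambda = \sqrt{\frac{a R_{m}}{2 R_{i}}},
\qquad
\tau_{m} = R_{m} C_{m}
```

    Here a is the axon radius, R_m the specific membrane resistance, R_i the axoplasmic resistivity, and C_m the membrane capacitance per unit area; the space constant λ growing with the square root of the radius is the kind of scaling behind the squid giant axon diameter argument in the abstract.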

  8. A Simplified Model of Human Alcohol Metabolism That Integrates Biotechnology and Human Health into a Mass Balance Team Project

    ERIC Educational Resources Information Center

    Yang, Allen H. J.; Dimiduk, Kathryn; Daniel, Susan

    2011-01-01

    We present a simplified human alcohol metabolism model for a mass balance team project. Students explore aspects of engineering in biotechnology: designing/modeling biological systems, testing the design/model, evaluating new conditions, and exploring cutting-edge "lab-on-a-chip" research. This project highlights chemical engineering's impact on…


  9. Examination of simplified travel demand model. [Internal volume forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.L. Jr.; McFarlane, W.J.

    1978-01-01

A simplified travel demand model, the Internal Volume Forecasting (IVF) model proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. The calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correction of the mis-specifications leads to a simplified gravity model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
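
    The "simplified gravity model" family mentioned above distributes trips in proportion to destination attractions discounted by travel cost. A minimal singly-constrained sketch with invented zone data (not the Wisconsin study's):

```python
import numpy as np

# Singly-constrained gravity model for trip distribution:
#   T_ij = P_i * (A_j * F_ij) / sum_k (A_k * F_ik),  F_ij = c_ij ** -b
P = np.array([100.0, 200.0])           # trip productions by origin zone
A = np.array([150.0, 80.0, 70.0])      # attractions by destination zone
cost = np.array([[5.0, 10.0, 15.0],
                 [12.0, 6.0, 9.0]])    # travel times between zones (invented)
F = cost ** -2.0                       # friction factors, calibration parameter b = 2

w = A * F                              # attraction weighted by impedance
T = P[:, None] * w / w.sum(axis=1, keepdims=True)
print(T.round(1))
print(T.sum(axis=1))                   # row sums reproduce the productions P
```

    Being production-constrained, the model conserves each origin's trips by construction; only the distribution across destinations responds to cost.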

  10. 12 CFR Appendix E to Part 208 - Risk-Based Capital Guidelines; Market Risk

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 11Simplified Supervisory Formula Approach Section 12Market Risk Disclosures Section 1. Purpose, Applicability... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...

  11. 12 CFR Appendix B to Part 3 - Risk-Based Capital Guidelines; Market Risk

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Specific Risk Section 11Simplified Supervisory Formula Approach Section 12Market Risk Disclosures Section 1... respect to a company means any company that controls, is controlled by, or is under common control with... organization. Control A person or company controls a company if it: (1) Owns, controls, or holds with power to...

  12. Systems Engineering Model for ART Energy Conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendez Cruz, Carmen Margarita; Rochau, Gary E.; Wilson, Mollye C.

    The near-term objective of the EC team is to establish an operating, commercially scalable Recompression Closed Brayton Cycle (RCBC) to be constructed for the NE - STEP demonstration system (demo) with the lowest risk possible. A systems engineering approach is recommended to ensure adequate requirements gathering, documentation, and modeling that supports technology development relevant to advanced reactors while supporting crosscut interests in potential applications. A holistic systems engineering model was designed for the ART Energy Conversion program by leveraging Concurrent Engineering, Balance Model, Simplified V Model, and Project Management principles. The resulting model supports the identification and validation of lifecycle Brayton systems requirements, and allows designers to detail system-specific components relevant to the current stage in the lifecycle, while maintaining a holistic view of all system elements.

  13. Simplified Predictive Models for CO2 Sequestration Performance Assessment

    NASA Astrophysics Data System (ADS)

    Mishra, Srikanta; RaviGanesh, Priya; Schuetter, Jared; Mooney, Douglas; He, Jincong; Durlofsky, Louis

    2014-05-01

    We present results from an ongoing research project that seeks to develop and validate a portfolio of simplified modeling approaches that will enable rapid feasibility and risk assessment for CO2 sequestration in deep saline formation. The overall research goal is to provide tools for predicting: (a) injection well and formation pressure buildup, and (b) lateral and vertical CO2 plume migration. Simplified modeling approaches that are being developed in this research fall under three categories: (1) Simplified physics-based modeling (SPM), where only the most relevant physical processes are modeled, (2) Statistical-learning based modeling (SLM), where the simulator is replaced with a "response surface", and (3) Reduced-order method based modeling (RMM), where mathematical approximations reduce the computational burden. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. In the first category (SPM), we use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. In the second category (SLM), we develop statistical "proxy models" using the simulation domain described previously with two different approaches: (a) classical Box-Behnken experimental design with a quadratic response surface fit, and (b) maximin Latin Hypercube sampling (LHS) based design with a Kriging metamodel fit using a quadratic trend and Gaussian correlation structure. 
For roughly the same number of simulations, the LHS-based meta-model yields a more robust predictive model, as verified by a k-fold cross-validation approach. In the third category (RMM), we use a reduced-order modeling procedure that combines proper orthogonal decomposition (POD) for reducing problem dimensionality with trajectory-piecewise linearization (TPWL) for extrapolating system response at new control points from a limited number of trial runs ("snapshots"). We observe significant savings in computational time with very good accuracy from the POD-TPWL reduced order model - which could be important in the context of history matching, uncertainty quantification and optimization problems. The paper will present results from our ongoing investigations, and also discuss future research directions and likely outcomes. This work was supported by U.S. Department of Energy National Energy Technology Laboratory award DE-FE0009051 and Ohio Department of Development grant D-13-02.
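The SLM category above replaces the simulator with a fitted response surface built from a space-filling design. A minimal numpy-only sketch of that idea, using a Latin Hypercube design and an ordinary least-squares quadratic fit in place of the paper's Kriging metamodel; the two-input "simulator" here is a made-up smooth stand-in, not the compositional reservoir model:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_dims, rng):
    """Space-filling design: one point per equal-probability slice per dimension."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u                                    # points in [0, 1)^n_dims

def quadratic_features(x):
    """Full quadratic basis: 1, x_i, x_i * x_j for i <= j."""
    n, d = x.shape
    cols = [np.ones(n)] + [x[:, i] for i in range(d)]
    cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def simulator(x):
    # Made-up smooth two-input response standing in for the reservoir simulator
    return 1.0 + 2.0 * x[:, 0] - 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 1]

X = latin_hypercube(40, 2, rng)
beta, *_ = np.linalg.lstsq(quadratic_features(X), simulator(X), rcond=None)

X_test = rng.random((200, 2))
err = np.max(np.abs(quadratic_features(X_test) @ beta - simulator(X_test)))
print(f"max surrogate error on held-out points: {err:.2e}")
```

Because the stand-in response is itself quadratic, the fit recovers it almost exactly; for a real simulator one would check surrogate quality by k-fold cross-validation, as the abstract describes.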

  14. The development of a VBHOM-based outcome model for lower limb amputation performed for critical ischaemia.

    PubMed

    Tang, T Y; Prytherch, D R; Walsh, S R; Athanassoglou, V; Seppi, V; Sadat, U; Lees, T A; Varty, K; Boyle, J R

    2009-01-01

    VBHOM (Vascular Biochemistry and Haematology Outcome Models) adopts the approach of using a minimum data set to model outcome and has been previously shown to be feasible after index arterial operations. This study attempts to model mortality following lower limb amputation for critical limb ischaemia using the VBHOM concept. A binary logistic regression model of risk of mortality was built using National Vascular Database items that contained the complete data required by the model from 269 admissions for lower limb amputation. The subset of NVD data items used were urea, creatinine, sodium, potassium, haemoglobin, white cell count, age on admission, and mode of admission. This model was applied prospectively to a test set of data (n=269), which was not part of the original training set used to develop the predictor equation. Outcome following lower limb amputation could be described accurately using the same model. The overall mean predicted risk of mortality was 32%, predicting 86 deaths. Actual number of deaths was 86 (χ² = 8.05, 8 d.f., p=0.429; no evidence of lack of fit). The model demonstrated adequate discrimination (c-index=0.704). VBHOM provides a single unified model that allows good prediction of surgical mortality in this high-risk group of individuals. It uses a small, simple and objective clinical data set that may also simplify comparative audit within vascular surgery.
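At its core, the VBHOM approach is a binary logistic regression of mortality on routine admission data. A self-contained sketch of that model class, fitted by gradient ascent on synthetic stand-in predictors; the variable names, coefficients, and data below are illustrative, not the NVD data or the published VBHOM equation:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, lr=0.1, n_iter=5000):
    """Fit log-odds(death) = b0 + b.x by gradient ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict_risk(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic stand-ins for standardised predictors (urea, creatinine, age, ...);
# NOT real admission data.
X = rng.normal(size=(500, 3))
true_w = np.array([-1.0, 0.8, 0.5, -0.3])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(true_w[0] + X @ true_w[1:])))).astype(float)

w = fit_logistic(X, y)
risk = predict_risk(w, X)
print(f"mean predicted risk {risk.mean():.3f} vs observed death rate {y.mean():.3f}")
```

A maximum-likelihood fit with an intercept has the calibration-in-the-large property the abstract reports (predicted deaths matching observed deaths), since the intercept gradient drives the mean residual to zero.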

  15. A Practical Risk Stratification Approach for Implementing a Primary Care Chronic Disease Management Program in an Underserved Community.

    PubMed

    Xu, Junjun; Williams-Livingston, Arletha; Gaglioti, Anne; McAllister, Calvin; Rust, George

    2018-01-01

    The use of value metrics is often dependent on payer-initiated health care management incentives. There is a need for practices to define and manage their own patient panels regardless of payer to participate effectively in population health management. A key step is to define a panel of primary care patients with high comorbidity profiles. Our sample included all patients seen in an urban academic family medicine clinic over a two-year period. The simplified risk stratification was built using internal electronic health record and billing system data based on ICD-9 codes. There were 347 patients classified as high-risk out of the 5,364 patient panel. Average age was 59 years (SD 15). Hypertension (90%), hyperlipidemia (62%), and depression (55%) were the most common conditions among high-risk patients. Simplified risk stratification provides a feasible option for our team to understand and respond to the nuances of population health in our underserved community.
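A simplified risk stratification of the kind described can be as direct as counting how many chronic-condition groups a patient's diagnosis codes cover. A toy sketch with assumed ICD-9 prefixes and an assumed cut-off, neither taken from the study:

```python
# Hypothetical ICD-9 prefixes for chronic conditions like those named above
CHRONIC = {"401": "hypertension", "272": "hyperlipidemia",
           "311": "depression",   "250": "diabetes"}

def stratify(patients, min_conditions=3):
    """Flag patients whose coded diagnoses span >= min_conditions chronic groups."""
    high_risk = []
    for pid, codes in patients.items():
        groups = {name for code in codes
                  for prefix, name in CHRONIC.items() if code.startswith(prefix)}
        if len(groups) >= min_conditions:
            high_risk.append(pid)
    return high_risk

panel = {"p1": ["401.9", "272.4", "311"],            # three chronic groups
         "p2": ["401.1"],                            # one group
         "p3": ["250.00", "272.0", "401.9", "311"]}  # four groups
print(stratify(panel))
```

In practice the code lists and threshold would come from the clinic's own billing data, as in the study; the point is only that internal EHR data suffice, with no payer-supplied risk scores.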

  16. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model, based on the simplified PCNN, is proposed in this article. Some work has been done to enrich this model, such as imposing restrictions on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we implement an image segmentation algorithm for five pictures based on the proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
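The simplified PCNN iteration that schemes of this kind build on can be sketched in a few lines. The parameter values and toy image below are illustrative choices; the paper's self-adaptive parameter setting and PSO step are not reproduced:

```python
import numpy as np

def neighbor_sum(Y):
    """Sum over the 8-neighbourhood of each pixel (zero-padded)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2] +               P[1:-1, 2:] +
            P[2:, :-2]  + P[2:, 1:-1] + P[2:, 2:])

def spcnn_segment(img, beta=0.3, vE=20.0, aE=0.7, n_iter=8):
    """Simplified PCNN: feed F = normalised stimulus, linking L from fired
    neighbours, internal activity U = F(1 + beta*L); a neuron fires when U
    exceeds its decaying threshold E, which then jumps by vE (refractory)."""
    S = img.astype(float) / img.max()
    Y = np.zeros_like(S)
    E = np.ones_like(S)
    fired_at = np.zeros(S.shape, dtype=int)
    for n in range(1, n_iter + 1):
        L = neighbor_sum(Y)
        U = S * (1.0 + beta * L)
        Y = (U > E).astype(float)
        E = np.exp(-aE) * E + vE * Y
        fired_at[(Y > 0) & (fired_at == 0)] = n
    return fired_at          # iteration of first firing (0 = never fired)

# Toy image: bright square on a dark background
img = np.zeros((16, 16))
img[4:12, 4:12] = 200.0
img += 10.0
labels = spcnn_segment(img)
print("square fires at iteration", labels[8, 8], "; background fires later:", labels[0, 0])
```

Pixels of similar intensity fire in the same iteration, so the map of first-firing times acts as a segmentation; brighter regions fire before the decaying threshold reaches the dark background.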

  17. A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE

    NASA Astrophysics Data System (ADS)

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-06-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^1/2 and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.
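The "mathematical collimator" idea rests on the observation that only source directions within 3° of the beam axis matter. A small Monte Carlo sketch of sampling such a cone and propagating rays to the z = 236 mm collimator plane, assuming an idealised point source for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_cone(n, half_angle_deg, rng):
    """Unit directions uniform over the solid angle of a cone about +z."""
    cos_max = np.cos(np.radians(half_angle_deg))
    cos_t = rng.uniform(cos_max, 1.0, n)        # uniform in cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

# Propagate rays from a point source at the origin to the plane z = 236 mm
d = sample_cone(100_000, 3.0, rng)
z = 236.0
x = d[:, 0] / d[:, 2] * z
y = d[:, 1] / d[:, 2] * z
rho = np.hypot(x, y)                            # radial spot size on the plane
print(f"max rho at z = 236 mm: {rho.max():.2f} mm (bound 236*tan(3 deg) = 12.37 mm)")
```

For a point source, ρ is a deterministic function of θ and tan⁻¹(y/x) equals φ, which is the perfect-correlation limit of the correlations the abstract reports for the extended geometry.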

  18. Modeling Fecal Indicator Bacteria Like Salt in Newport Bay

    NASA Astrophysics Data System (ADS)

    Ciglar, A. M.; Rippy, M.; Grant, S. B.

    2015-12-01

    Newport Bay is a harbor and estuary located in Orange County, CA that provides many water sports and recreational activities for millions of southern California residents and tourists. The aim of this study is to quickly assess exceedances of fecal indicator bacteria (FIB) in Newport Bay, which pose a health risk to recreational users. The ability to quickly assess water quality is made possible with an advection-diffusion mass transport model that uses easily measurable parameters such as volumetric flow rate from tributaries. Current FIB assessment methods for Newport Bay take a minimum of 24 hours to evaluate health risk by either culturing for FIB or running a more complex fluid dynamics model. By this time the FIB may have already reached the ocean outlet, thus no longer posing a risk in the bay, or recreationists may have already come in close contact with contaminated waters. The advection-diffusion model can process and disseminate health risk information within a few hours of flow rate measurements, minimizing time between an FIB exceedance and public awareness about the event. Data used to calibrate and validate the model were collected from January 2006 through February 2007. Salinity data were used for calibration and FIB data for validation. Both steady-state and transient conditions were assessed to determine if dry weather patterns can be simplified to the steady-state condition.
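In one dimension, the steady-state limit of such an advection-diffusion transport model reduces to a tridiagonal linear system. A sketch with hypothetical flow speed, diffusivity, and reach length, checked against the analytic solution of u dC/dx = D d²C/dx² with fixed-concentration ends:

```python
import numpy as np

def steady_advection_diffusion(u, D, L, n, c_in=1.0, c_out=0.0):
    """Central differences for u dC/dx = D d2C/dx2 on n interior nodes of [0, L]."""
    dx = L / (n + 1)
    a_w = D / dx**2 + u / (2 * dx)   # coefficient of C[i-1]
    a_e = D / dx**2 - u / (2 * dx)   # coefficient of C[i+1]
    a_p = -2.0 * D / dx**2           # coefficient of C[i]
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = a_p
        if i > 0:
            A[i, i - 1] = a_w
        else:
            b[i] -= a_w * c_in       # inflow boundary folded into RHS
        if i < n - 1:
            A[i, i + 1] = a_e
        else:
            b[i] -= a_e * c_out      # outflow boundary folded into RHS
    return np.linalg.solve(A, b)

u, D, L = 0.5, 1.0, 10.0             # hypothetical speed, diffusivity, reach length
c = steady_advection_diffusion(u, D, L, n=199)
x = np.linspace(0.0, L, 201)[1:-1]
Pe = u * L / D
exact = (np.exp(Pe * x / L) - np.exp(Pe)) / (1.0 - np.exp(Pe))
print(f"max error vs analytic solution: {np.max(np.abs(c - exact)):.2e}")
```

The salinity calibration in the study plays the role of fixing the transport parameters (here u and D), after which the same operator is applied to FIB as a salt-like tracer.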

  19. Order Matters: Sequencing Scale-Realistic versus Simplified Models to Improve Science Learning

    ERIC Educational Resources Information Center

    Chen, Chen; Schneps, Matthew H.; Sonnert, Gerhard

    2016-01-01

    Teachers choosing between different models to facilitate students' understanding of an abstract system must decide whether to adopt a model that is simplified and striking or one that is realistic and complex. Only recently have instructional technologies enabled teachers and learners to change presentations swiftly and to provide for learning…

  20. Monojet searches for MSSM simplified models

    DOE PAGES

    Arbey, Alexandre; Battaglia, Marco; Mahmoudi, Farvah

    2016-09-12

    We explore the implications of monojet searches at hadron colliders in the minimal supersymmetric extension of the Standard Model (MSSM). To quantify the impact of monojet searches, we consider simplified MSSM scenarios with neutralino dark matter. The monojet results of the LHC Run 1 are reinterpreted in the context of several MSSM simplified scenarios, and the complementarity with direct supersymmetry search results is highlighted. We also investigate the reach of monojet searches for the Run 2, as well as for future higher energy hadron colliders.

  1. Simplified analytical model and balanced design approach for light-weight wood-based structural panel in bending

    Treesearch

    Jinghao Li; John F. Hunt; Shaoqin Gong; Zhiyong Cai

    2016-01-01

    This paper presents a simplified analytical model and balanced design approach for modeling lightweight wood-based structural panels in bending. Because finite element analysis (FEA) requires many input design parameters during the preliminary design process and optimization, an equivalent method was developed to analyze the mechanical...

  2. Preliminary Shuttle Space Suit Shielding Model. Chapter 9

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke M.; Nealy, J. E.; Qualls, G. D.; Staritz, P. J.; Wilson, J. W.; Kim, M.-H. Y.; Cucinotta, F. A.; Atwell, W.; DeAngelis, G.; Ware, J.; hide

    2003-01-01

    There are two space suits in current usage within the space program: the EMU [2] and the Orlan-M space suit. The Shuttle space suit components are discussed elsewhere [2,5,6] and serve as a guide to development of the current model. The present model is somewhat simplified in details which are considered to be second order in their effects on exposures. A more systematic approach is ongoing on a part-by-part basis, with the most important parts in terms of exposure contributions being addressed first, with detailed studies of the relatively thin space suit fabric as the first example. Additional studies to validate the model of the head coverings (bubble, helmet, visors, ...) will be undertaken in the near future. The purpose of this paper is to present the details of the model as it is now and to examine its impact on estimates of astronaut health risks. In this respect, the nonuniform distribution of mass of the space suit provides increased shielding in some directions and for some organs. These effects can be most important in terms of health risks and are especially critical to evaluation of potential early radiation effects.

  3. The influence of wind-tunnel walls on discrete frequency noise

    NASA Technical Reports Server (NTRS)

    Mosher, M.

    1984-01-01

    This paper describes an analytical model that can be used to examine the effects of wind-tunnel walls on discrete frequency noise. First, a complete physical model of an acoustic source in a wind tunnel is described, and a simplified version is then developed. This simplified model retains the important physical processes involved, yet it is more amenable to analysis. Second, the simplified physical model is formulated as a mathematical problem. An inhomogeneous partial differential equation with mixed boundary conditions is set up and then transformed into an integral equation. The integral equation has been solved with a panel program on a computer. Preliminary results from a simple model problem will be shown and compared with the approximate analytic solution.

  4. Simplified tools for evaluating domestic ventilation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maansson, L.G.; Orme, M.

    1999-07-01

    Within an International Energy Agency (IEA) project, Annex 27, experts from 8 countries (Canada, France, Italy, Japan, The Netherlands, Sweden, UK and USA) have developed simplified tools for evaluating domestic ventilation systems during the heating season. Tools for building and user aspects, thermal comfort, noise, energy, life cycle cost, reliability and indoor air quality (IAQ) have been devised. The results can be used both for dwellings at the design stage and after construction. The tools lead to immediate answers and indications about the consequences of different choices that may arise during discussion with clients. This paper presents an introduction to these tools. Example applications of the indoor air quality and energy simplified tools are also provided. The IAQ tool accounts for constant emission sources, CO{sub 2}, cooking products, tobacco smoke, condensation risks, humidity levels (i.e., for judging the risk for mould and house dust mites), and pressure difference (for identifying the risk for radon or land fill spillage entering the dwelling or problems with indoor combustion appliances). An elaborated set of design parameters was worked out that resulted in about 17,000 combinations. By using multi-variate analysis it was possible to reduce this to 174 combinations for IAQ. In addition, a sensitivity analysis was made using 990 combinations. The results from all the runs were used to develop a simplified tool, as well as quantifying equations relying on the design parameters. A computerized energy tool has also been developed within this project, which takes into account air tightness, climate, window airing pattern, outdoor air flow rate and heat exchange efficiency.

  5. 12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...

  6. 12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...

  7. When "Safe" Means "Dangerous": A Corpus Investigation of Risk Communication in the Media

    ERIC Educational Resources Information Center

    Tang, Chris; Rundblad, Gabriella

    2017-01-01

    The mass media has an important role in informing the general public about emerging health risks. Content-based studies of risk communication in the media have revealed a tendency to exaggerate risks or simplify science, but linguistic studies in this area are still scarce. This paper outlines a corpus based investigation of media reporting on the…

  8. Recommendations on presenting LHC searches for missing transverse energy signals using simplified s-channel models of dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boveia, Antonio; Buchmueller, Oliver; Busoni, Giorgio

    2016-03-14

    This document summarises the proposal of the LHC Dark Matter Working Group on how to present LHC results on s-channel simplified dark matter models and to compare them to direct and indirect detection experiments.

  9. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters of TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.

  10. [The model of perioperative risk assessment in elderly patients - interim analysis].

    PubMed

    Grabowska, Izabela; Ścisło, Lucyna; Pietruszka, Szymon; Walewska, Elzbieta; Paszko, Agata; Siarkiewicz, Benita; Richter, Piotr; Budzyński, Andrzej; Szczepanik, Antoni M

    2017-04-21

    Demographic changes in contemporary society require implementation of proper perioperative care of elderly patients due to an increased risk of perioperative complications in this group. Preoperative assessment of health status identifies risks and enables preventive interventions, improving outcomes of surgical treatment. The Comprehensive Geriatric Assessment contains numerous diagnostic tests and consultations, which is expensive and difficult to use in everyday practice. The development of a simplified model of perioperative assessment of elderly patients will help identify the group of patients who require further diagnostic workup. The aim of the study is to evaluate the usefulness of the tests used in a proposed model of perioperative risk assessment in elderly patients. In a group of 178 patients older than 64 years admitted for surgical procedures, a battery of tests was performed. The proposed model of perioperative risk assessment included: Charlson Comorbidity Index, ADL (activities of daily living), TUG test (timed "up and go" test), MNA (mini nutritional assessment), AMTS (abbreviated mental test score), spirometry, and measurement of respiratory muscle strength (PImax, PEmax). Distribution of abnormal results of each test has been analysed. The Charlson Index over 6 points was recorded in 10.1% of patients (15.1% in cancer patients). Abnormal result of the TUG test was observed in 32.1%. The risk of malnutrition in MNA test has been identified in 29.7% (39.2% in cancer patients). Abnormal test results at the level of 10-30% indicate potential diagnostic value of Charlson Comorbidity Index, TUG test and MNA in the evaluation of perioperative risk in elderly patients.

  11. Dissipation models for central difference schemes

    NASA Astrophysics Data System (ADS)

    Eliasson, Peter

    1992-12-01

    In this paper different flux limiters are used to construct dissipation models. The flux limiters are usually of Total Variation Diminishing (TVD) type and are applied to the characteristic variables for the hyperbolic Euler equations in one, two or three dimensions. A number of simplified dissipation models with a reduced number of limiters are considered to reduce the computational effort. The most simplified methods use only one limiter; the dissipation model by Jameson belongs to this class, since the Jameson pressure switch can be considered a limiter, although not a TVD one. Other one-limiter models with TVD limiters are also investigated. Models in between the most simplified one-limiter models and the full model with limiters on all the different characteristics are considered, where different dissipation models are applied to the linear and non-linear characteristics. In this paper the theory by Yee is extended to a general explicit Runge-Kutta type of scheme.
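A typical TVD limiter of the kind used to build such dissipation models is minmod, which picks the smaller-magnitude one-sided difference and returns zero at extrema and discontinuities. A minimal sketch of limited slope reconstruction (illustrative of the limiter mechanism, not Yee's or Jameson's specific scheme):

```python
import numpy as np

def minmod(a, b):
    """Classic TVD limiter: argument of smaller magnitude when a and b share
    a sign, 0 otherwise (suppresses oscillations at extrema and jumps)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(q):
    """One limited slope per interior cell from the two one-sided differences."""
    dq_l = q[1:-1] - q[:-2]     # backward difference
    dq_r = q[2:] - q[1:-1]      # forward difference
    return minmod(dq_l, dq_r)

# Ramp with a kink: slopes follow the ramp, then vanish at the kink and plateau
q = np.array([0.0, 1.0, 2.0, 3.0, 3.0, 3.0])
print(limited_slopes(q))
```

In a scheme like those discussed, the same limiter would be applied to each characteristic variable, and dropping limiters on some characteristics yields the simplified dissipation models.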

  12. Building a Values-Informed Mental Model for New Orleans Climate Risk Management.

    PubMed

    Bessette, Douglas L; Mayer, Lauren A; Cwik, Bryan; Vezér, Martin; Keller, Klaus; Lempert, Robert J; Tuana, Nancy

    2017-10-01

    Individuals use values to frame their beliefs and simplify their understanding when confronted with complex and uncertain situations. The high complexity and deep uncertainty involved in climate risk management (CRM) lead to individuals' values likely being coupled to and contributing to their understanding of specific climate risk factors and management strategies. Most mental model approaches, however, which are commonly used to inform our understanding of people's beliefs, ignore values. In response, we developed a "Values-informed Mental Model" research approach, or ViMM, to elicit individuals' values alongside their beliefs and determine which values people use to understand and assess specific climate risk factors and CRM strategies. Our results show that participants consistently used one of three values to frame their understanding of risk factors and CRM strategies in New Orleans: (1) fostering a healthy economy, wealth, and job creation, (2) protecting and promoting healthy ecosystems and biodiversity, and (3) preserving New Orleans' unique culture, traditions, and historically significant neighborhoods. While the first value frame is common in analyses of CRM strategies, the latter two are often ignored, despite their mirroring commonly accepted pillars of sustainability. Other values like distributive justice and fairness were prioritized differently depending on the risk factor or strategy being discussed. These results suggest that the ViMM method could be a critical first step in CRM decision-support processes and may encourage adoption of CRM strategies more in line with stakeholders' values. © 2017 Society for Risk Analysis.

  13. Anthropometric measures in cardiovascular disease prediction: comparison of laboratory-based versus non-laboratory-based model.

    PubMed

    Dhana, Klodian; Ikram, M Arfan; Hofman, Albert; Franco, Oscar H; Kavousi, Maryam

    2015-03-01

    Body mass index (BMI) has been used to simplify cardiovascular risk prediction models by substituting total cholesterol and high-density lipoprotein cholesterol. In the elderly, the ability of BMI as a predictor of cardiovascular disease (CVD) declines. We aimed to find the most predictive anthropometric measure for CVD risk to construct a non-laboratory-based model and to compare it with the model including laboratory measurements. The study included 2675 women and 1902 men aged 55-79 years from the prospective population-based Rotterdam Study. We used Cox proportional hazard regression analysis to evaluate the association of BMI, waist circumference, waist-to-hip ratio and a body shape index (ABSI) with CVD, including coronary heart disease and stroke. The performance of the laboratory-based and non-laboratory-based models was evaluated by studying the discrimination, calibration, correlation and risk agreement. Among men, ABSI was the most informative measure associated with CVD, therefore ABSI was used to construct the non-laboratory-based model. Discrimination of the non-laboratory-based model was not different than the laboratory-based model (c-statistic: 0.680 vs 0.683, p=0.71); both models were well calibrated (15.3% observed CVD risk vs 16.9% and 17.0% predicted CVD risks by the non-laboratory-based and laboratory-based models, respectively) and Spearman rank correlation and the agreement between non-laboratory-based and laboratory-based models were 0.89 and 91.7%, respectively. Among women, none of the anthropometric measures were independently associated with CVD. Among middle-aged and elderly where the ability of BMI to predict CVD declines, the non-laboratory-based model, based on ABSI, could predict CVD risk as accurately as the laboratory-based model among men. Published by the BMJ Publishing Group Limited.
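ABSI, the measure found most informative among men, is defined (Krakauer & Krakauer, 2012) as waist circumference divided by BMI^(2/3) · height^(1/2), with waist and height in metres and BMI in kg/m². A small worked sketch with made-up measurements:

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def absi(waist_m, weight_kg, height_m):
    """A Body Shape Index: ABSI = WC / (BMI^(2/3) * height^(1/2)),
    waist circumference (WC) and height in metres."""
    return waist_m / (bmi(weight_kg, height_m) ** (2.0 / 3.0) * height_m ** 0.5)

# Made-up example: 80 kg, 1.75 m tall, 95 cm waist
print(f"BMI  = {bmi(80, 1.75):.1f} kg/m^2")
print(f"ABSI = {absi(0.95, 80, 1.75):.4f}")
```

Normalising waist circumference by BMI and height in this way captures central adiposity independently of overall size, which is why ABSI can remain informative where BMI alone loses predictive power.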

  14. A simplified model for glass formation

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.

    1979-01-01

    A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
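The nose-of-the-TTT-curve argument gives the familiar estimate of the critical cooling rate: the melt must pass the nose before crystallisation begins, so the rate is roughly the undercooling to the nose divided by the time to reach it. A one-line worked example with hypothetical values (not data from the paper):

```python
def critical_cooling_rate(T_liquidus, T_nose, t_nose):
    """Nose method: (dT/dt)_crit ~ (T_l - T_n) / t_n, in K/s."""
    return (T_liquidus - T_nose) / t_nose

# Hypothetical oxide melt: liquidus 1500 K, TTT nose at 1200 K reached after 100 s
rate = critical_cooling_rate(1500.0, 1200.0, 100.0)
print(f"critical cooling rate ~ {rate:.1f} K/s")   # -> 3.0 K/s
```

The simplified model in the abstract supplies T_nose and t_nose from the liquidus temperature, glass transition temperature, and heat of fusion, so this quotient is all that remains to predict whether a given quench will yield a glass.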

  15. Simplified models vs. effective field theory approaches in dark matter searches

    NASA Astrophysics Data System (ADS)

    De Simone, Andrea; Jacques, Thomas

    2016-07-01

    In this review we discuss and compare the usage of simplified models and Effective Field Theory (EFT) approaches in dark matter searches. We provide a state of the art description on the subject of EFTs and simplified models, especially in the context of collider searches for dark matter, but also with implications for direct and indirect detection searches, with the aim of constituting a common language for future comparisons between different strategies. The material is presented in a form that is as self-contained as possible, so that it may serve as an introductory review for the newcomer as well as a reference guide for the practitioner.

  16. Finding simplicity in complexity: modelling post-fire hydrogeomorphic processes and risks

    NASA Astrophysics Data System (ADS)

    Sheridan, Gary; Langhans, Christoph; Lane, Patrick; Nyman, Petter

    2017-04-01

    Post-fire runoff and erosion can shape landscapes, destroy infrastructure, and result in the loss of human life. However even within seemingly similar geographic regions post-fire hydro-geomorphic responses vary from almost no response through to catastrophic flash floods and debris flows. Why is there so much variability, and how can we predict areas at risk? This presentation describes the research journey taken by the post-fire research group at The University of Melbourne to answer this question for the south-east Australian uplands. Key steps along the way have included identifying the dominant erosion processes (and their forcings), and the key system properties controlling the rates of these dominant processes. The high degree of complexity in the interactions between the forcings, the system properties, and the erosion processes necessitated the development of a simplified conceptual representation of the post-fire hydrogeomorphic system that was conducive to modelling and simulation. Spatially mappable metrics (and proxies) for key system forcings and properties were then required to parameterize and drive the model. Each step in this journey has depended on new research, as well as ongoing feedback from land and water management agencies tasked with implementing these risk models and interpreting the results. These models are now embedded within agencies and used for strategic risk assessments, for tactical response during fires, and for post-fire remediation and risk planning. Reflecting on the successes and failures along the way provides for some more general insights into the process of developing research-based models for operational use by land and water management agencies.

  17. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow one to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.

  18. Towards the next generation of simplified Dark Matter models

    NASA Astrophysics Data System (ADS)

    Albert, Andreas; Bauer, Martin; Brooke, Jim; Buchmueller, Oliver; Cerdeño, David G.; Citron, Matthew; Davies, Gavin; de Cosa, Annapaola; De Roeck, Albert; De Simone, Andrea; Du Pree, Tristan; Flaecher, Henning; Fairbairn, Malcolm; Ellis, John; Grohsjean, Alexander; Hahn, Kristian; Haisch, Ulrich; Harris, Philip C.; Khoze, Valentin V.; Landsberg, Greg; McCabe, Christopher; Penning, Bjoern; Sanz, Veronica; Schwanenberger, Christian; Scott, Pat; Wardle, Nicholas

    2017-06-01

    This White Paper is an input to the ongoing discussion about the extension and refinement of simplified Dark Matter (DM) models. It is not intended as a comprehensive review of the discussed subjects, but instead summarises ideas and concepts arising from a brainstorming workshop that can be useful when defining the next generation of simplified DM models (SDMM). In this spirit, based on two concrete examples, we show how existing SDMM can be extended to provide a more accurate and comprehensive framework to interpret and characterise collider searches. In the first example we extend the canonical SDMM with a scalar mediator to include mixing with the Higgs boson. We show that this approach not only provides a better description of the underlying kinematic properties that a complete model would possess, but also offers the option of using this more realistic class of scalar mixing models to compare and combine consistently searches based on different experimental signatures. The second example outlines how a new physics signal observed in a visible channel can be connected to DM by extending a simplified model including effective couplings. In the next part of the White Paper we outline other interesting options for SDMM that could be studied in more detail in the future. Finally, we review important aspects of supersymmetric models for DM and use them to propose how to develop more complete SDMMs. This White Paper is a summary of the brainstorming meeting "Next generation of simplified Dark Matter models" that took place at Imperial College, London on May 6, 2016, and corresponding follow-up studies on selected subjects.

  19. A Global 3D P-Velocity Model of the Earth’s Crust and Mantle for Improved Event Location

    DTIC Science & Technology

    2011-09-01

    For our starting model, we use a simplified layer crustal model derived from the NNSA Unified model in Eurasia and the Crust 2.0 model everywhere else, over a... geographic and radial dimensions... tessellation with 4° triangles to the transition zone and upper mantle, and a third tessellation with variable resolution to all crustal layers. The

  20. The proposed 'concordance-statistic for benefit' provided a useful metric when modeling heterogeneous treatment effects.

    PubMed

    van Klaveren, David; Steyerberg, Ewout W; Serruys, Patrick W; Kent, David M

    2018-02-01

    Clinical prediction models that support treatment decisions are usually evaluated for their ability to predict the risk of an outcome rather than treatment benefit, that is, the difference between outcome risk with vs. without therapy. We aimed to define performance metrics for a model's ability to predict treatment benefit. We analyzed data of the Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) trial and of three recombinant tissue plasminogen activator trials. We assessed alternative prediction models with a conventional risk concordance-statistic (c-statistic) and a novel c-statistic for benefit. We defined observed treatment benefit by the outcomes in pairs of patients matched on predicted benefit but discordant for treatment assignment. The 'c-for-benefit' represents the probability that from two randomly chosen matched patient pairs with unequal observed benefit, the pair with greater observed benefit also has a higher predicted benefit. Compared to a model without treatment interactions, the SYNTAX score II had improved ability to discriminate treatment benefit (c-for-benefit 0.590 vs. 0.552), despite having similar risk discrimination (c-statistic 0.725 vs. 0.719). However, for the simplified stroke-thrombolytic predictive instrument (TPI) vs. the original stroke-TPI, the c-for-benefit (0.584 vs. 0.578) was similar. The proposed methodology has the potential to measure a model's ability to predict treatment benefit not captured with conventional performance metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
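The c-for-benefit definition quoted above translates directly into a small computation: given matched patient pairs (one treated, one untreated), each with a predicted and an observed benefit, count concordant pairs-of-pairs among those with unequal observed benefit. The matching step is assumed done upstream, and the values below are hypothetical.

```python
# Minimal sketch of the c-for-benefit concordance over matched pairs.
from itertools import combinations

def c_for_benefit(pairs):
    """pairs: list of (predicted_benefit, observed_benefit) for matched
    patient pairs (one treated, one untreated). Returns the concordance
    over pairs-of-pairs with unequal observed benefit."""
    concordant = ties = total = 0
    for (p1, o1), (p2, o2) in combinations(pairs, 2):
        if o1 == o2:
            continue  # only pairs with unequal observed benefit count
        total += 1
        if (p1 - p2) * (o1 - o2) > 0:
            concordant += 1      # higher predicted with higher observed
        elif p1 == p2:
            ties += 1            # tied predictions count half
    return (concordant + 0.5 * ties) / total

# Hypothetical matched pairs; observed benefit coded 0 (no difference)
# or 1 (event prevented in the treated member of the pair).
pairs = [(0.02, 0), (0.05, 1), (0.08, 0), (0.10, 1), (0.12, 1)]
print(c_for_benefit(pairs))
```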

  1. Complex Adaptive System Models and the Genetic Analysis of Plasma HDL-Cholesterol Concentration

    PubMed Central

    Rea, Thomas J.; Brown, Christine M.; Sing, Charles F.

    2006-01-01

    Despite remarkable advances in diagnosis and therapy, ischemic heart disease (IHD) remains a leading cause of morbidity and mortality in industrialized countries. Recent efforts to estimate the influence of genetic variation on IHD risk have focused on predicting individual plasma high-density lipoprotein cholesterol (HDL-C) concentration. Plasma HDL-C concentration (mg/dl), a quantitative risk factor for IHD, has a complex multifactorial etiology that involves the actions of many genes. Single gene variations may be necessary but are not individually sufficient to predict a statistically significant increase in risk of disease. The complexity of phenotype-genotype-environment relationships involved in determining plasma HDL-C concentration has challenged commonly held assumptions about genetic causation and has led to the question of which combination of variations, in which subset of genes, in which environmental strata of a particular population significantly improves our ability to predict high or low risk phenotypes. We document the limitations of inferences from genetic research based on commonly accepted biological models, consider how evidence for real-world dynamical interactions between HDL-C determinants challenges the simplifying assumptions implicit in traditional linear statistical genetic models, and conclude by considering research options for evaluating the utility of genetic information in predicting traits with complex etiologies. PMID:17146134

  2. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing around a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Precession and circularization of elliptical space-tether motion

    NASA Technical Reports Server (NTRS)

    Chapel, Jim D.; Grosserode, Patrick

    1993-01-01

    In this paper, we present a simplified analytic model for predicting motion of long space tethers. The perturbation model developed here addresses skip rope motion, where each end of the tether is held in place and the middle of the tether swings with a motion similar to that of a child's skip rope. If the motion of the tether midpoint is elliptical rather than circular, precession of the ellipse complicates the procedures required to damp this motion. The simplified analytic model developed in this paper parametrically predicts the precession of elliptical skip rope motion. Furthermore, the model shows that elliptic skip rope motion will circularize when damping is present in the longitudinal direction. Compared with high-fidelity simulation results, this simplified model provides excellent predictions of these phenomena.

  4. Semianalytical model predicting transfer of volatile pollutants from groundwater to the soil surface.

    PubMed

    Atteia, Olivier; Höhener, Patrick

    2010-08-15

    Volatilization of toxic organic contaminants from groundwater to the soil surface is often considered an important pathway in risk analysis. Most of the risk models use simplified linear solutions that may overpredict the volatile flux. Although complex numerical models have been developed, their use is restricted to experienced users and for sites where field data are known in great detail. We present here a novel semianalytical model running on a spreadsheet that simulates the volatilization flux and vertical concentration profile in a soil based on the Van Genuchten functions. These widely used functions describe precisely the gas and water saturations and movement in the capillary fringe. The analytical model shows a good accuracy over several orders of magnitude when compared to a numerical model and laboratory data. The effect of barometric pumping is also included in the semianalytical formulation, although the model predicts that barometric pumping is often negligible. A sensitivity study predicts significant fluxes in sandy vadose zones and much smaller fluxes in other soils. Fluxes are linked to the dimensionless Henry's law constant H for H < 0.2 and increase by approximately 20% when temperature increases from 5 to 25 degrees C.

  5. Improved heat transfer modeling of the eye for electromagnetic wave exposures.

    PubMed

    Hirata, Akimasa

    2007-05-01

    This study proposed an improved heat transfer model of the eye for exposure to electromagnetic (EM) waves. Particular attention was paid to the difference from the simplified heat transfer model commonly used in this field. From our computational results, the temperature elevation in the eye calculated with the simplified heat transfer model was largely influenced by the EM absorption outside the eyeball, but not when we used our improved model.

  6. A transfer function type of simplified electrochemical model with modified boundary conditions and Padé approximation for Li-ion battery: Part 2. Modeling and parameter estimation

    NASA Astrophysics Data System (ADS)

    Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi

    2017-06-01

    The electrochemistry-based battery model can provide physics-meaningful knowledge about the lithium-ion battery system, albeit with an extensive computational burden. To motivate the development of reduced-order battery models, three major contributions have been made throughout this paper: (1) the transfer function type of simplified electrochemical model is proposed to address the current-voltage relationship with the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) the parametric relationship between the equivalent circuit model and the simplified electrochemical model has been established, which deepens the comprehension of both models with more in-depth physical significance and provides new methods for electrochemical model parameter estimation. (3) four simplified electrochemical model parameters: equivalent resistance Req, effective diffusion coefficient in electrolyte phase Deeff, electrolyte phase volume fraction ε and open circuit voltage (OCV), have been identified by the recursive least square (RLS) algorithm with the modified DST profiles under 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm can achieve high accuracy for electrochemical parameter identification in dynamic scenarios.
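As a sketch of the estimation machinery named in point (3), the following is a plain recursive least squares (RLS) update with a forgetting factor, the kind of online update used to track slowly varying parameters. The two-parameter linear model y = a·x + b is an illustrative stand-in, not the paper's electrochemical regressors (Req, Deeff, ε, OCV).

```python
# Minimal RLS with forgetting factor lam, for a 2-parameter model.
def rls_step(theta, P, x, y, lam=0.99):
    """One RLS update. theta: parameters [a, b], P: 2x2 covariance,
    x: regressor [x_k, 1], y: measurement, lam: forgetting factor."""
    # Gain K = P x / (lam + x^T P x); P is kept symmetric.
    Px = [P[0][0]*x[0] + P[0][1]*x[1], P[1][0]*x[0] + P[1][1]*x[1]]
    denom = lam + x[0]*Px[0] + x[1]*Px[1]
    K = [Px[0]/denom, Px[1]/denom]
    err = y - (theta[0]*x[0] + theta[1]*x[1])          # prediction error
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]  # parameter update
    # Covariance update P = (P - K x^T P) / lam
    P = [[(P[0][0]-K[0]*Px[0])/lam, (P[0][1]-K[0]*Px[1])/lam],
         [(P[1][0]-K[1]*Px[0])/lam, (P[1][1]-K[1]*Px[1])/lam]]
    return theta, P

# Recover a=2.0, b=0.5 from noiseless data y = 2x + 0.5.
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for i in range(1, 50):
    xv = 0.1 * i
    theta, P = rls_step(theta, P, [xv, 1.0], 2.0 * xv + 0.5)
print(theta)  # approaches [2.0, 0.5]
```

The forgetting factor below 1 is what lets the estimator follow parameters that drift with temperature or state of charge, at the cost of higher variance.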

  7. Simplified models for Higgs physics: singlet scalar and vector-like quark phenomenology

    DOE PAGES

    Dolan, Matthew J.; Hewett, J. L.; Krämer, M.; ...

    2016-07-08

    Simplified models provide a useful tool to conduct the search and exploration of physics beyond the Standard Model in a model-independent fashion. In this study, we consider the complementarity of indirect searches for new physics in Higgs couplings and distributions with direct searches for new particles, using a simplified model which includes a new singlet scalar resonance and vector-like fermions that can mix with the SM top-quark. We fit this model to the combined ATLAS and CMS 125 GeV Higgs production and coupling measurements and other precision electroweak constraints, and explore in detail the effects of the new matter content upon Higgs production and kinematics. Finally, we highlight some novel features and decay modes of the top partner phenomenology, and discuss prospects for Run II.

  8. Interpretation of searches for supersymmetry with simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.

  9. The risk of collapse in abandoned mine sites: the issue of data uncertainty

    NASA Astrophysics Data System (ADS)

    Longoni, Laura; Papini, Monica; Brambilla, Davide; Arosio, Diego; Zanzi, Luigi

    2016-04-01

    Ground collapses over abandoned underground mines constitute an emerging environmental risk worldwide. The high risk associated with subsurface voids, together with lack of knowledge of the geometric and geomechanical features of mining areas, makes abandoned underground mines one of the current challenges for countries with a long mining history. In this study, a stability analysis of the Montevecchia marl mine is performed in order to validate a general approach that takes into account the poor local information and the variability of the input data. The collapse risk was evaluated through a numerical approach that, starting with some simplifying assumptions, is able to provide an overview of the collapse probability. The final result is an easily accessible, transparent summary graph of the collapse probability. This approach may be useful for public administrators called upon to manage this environmental risk. The approach tries to simplify this complex problem in order to achieve a rough risk assessment, but, since it relies on just a small amount of information, any final user should be aware that a comprehensive and detailed risk scenario can be generated only through more exhaustive investigations.
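Propagating input-data uncertainty into a collapse probability, as the abstract describes, is commonly done by Monte Carlo sampling. The sketch below illustrates the idea only: the factor-of-safety expression and the parameter distributions are hypothetical stand-ins, not the Montevecchia case-study values or the paper's actual numerical model.

```python
# Illustrative Monte Carlo propagation of uncertain inputs to a
# collapse probability (fraction of samples with factor of safety < 1).
import random

def collapse_probability(n=50_000, seed=1):
    random.seed(seed)
    unstable = 0
    for _ in range(n):
        strength = random.uniform(2.0, 6.0)  # pillar strength, MPa (assumed range)
        stress = random.gauss(3.0, 0.8)      # pillar stress, MPa (assumed distribution)
        if stress <= 0:
            continue  # discard non-physical samples
        if strength / stress < 1.0:          # factor of safety below 1
            unstable += 1
    return unstable / n

print(collapse_probability())
```

Wider input distributions (i.e. poorer site knowledge) directly widen the spread of outcomes, which is what makes the resulting probability an honest summary of both hazard and ignorance.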

  10. Putting proteins back into water

    NASA Astrophysics Data System (ADS)

    de Los Rios, Paolo; Caldarelli, Guido

    2000-12-01

    We introduce a simplified protein model where the solvent (water) degrees of freedom appear explicitly (although in an extremely simplified fashion). Using this model we are able to recover the thermodynamic phenomenology of proteins over a wide range of temperatures. In particular we describe both the warm and the cold protein denaturation within a single framework, while addressing important issues about the structure of model proteins.

  11. 12 CFR Appendix E to Part 225 - Capital Adequacy Guidelines for Bank Holding Companies: Market Risk

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CONTROL (REGULATION Y) Pt. 225, App. E Appendix E to Part 225—Capital Adequacy Guidelines for Bank Holding... Method for Specific Risk Section 11 Simplified Supervisory Formula Approach Section 12 Market Risk... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is...

  12. 12 CFR Appendix E to Part 225 - Capital Adequacy Guidelines for Bank Holding Companies: Market Risk

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CONTROL (REGULATION Y) Pt. 225, App. E Appendix E to Part 225—Capital Adequacy Guidelines for Bank Holding... Measurement Method for Specific Risk Section 11 Simplified Supervisory Formula Approach Section 12 Market Risk... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is...

  13. 12 CFR Appendix B to Part 3 - Risk-Based Capital Guidelines; Market Risk

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 11 Simplified Supervisory Formula Approach Section 12 Market Risk Disclosures Section 1. Purpose, Applicability... respect to a company means any company that controls, is controlled by, or is under common control with... organization. Control A person or company controls a company if it: (1) Owns, controls, or holds with power to...

  14. Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2008-01-01

    This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.

  15. Causation in risk assessment and management: models, inference, biases, and a microbial risk-benefit case study.

    PubMed

    Cox, L A; Ricci, P F

    2005-04-01

    Causal inference of exposure-response relations from data is a challenging aspect of risk assessment with important implications for public and private risk management. Such inference, which is fundamentally empirical and based on exposure (or dose)-response models, seldom arises from a single set of data; rather, it requires integrating heterogeneous information from diverse sources and disciplines including epidemiology, toxicology, and cell and molecular biology. Our discussion of causation focuses on three aspects: drawing sound inferences about causal relations from one or more observational studies; addressing and resolving biases that can affect a single multivariate empirical exposure-response study; and applying the results from these considerations to the microbiological risk management of human health risks and benefits of a ban on antibiotic use in animals, in the context of banning enrofloxacin or macrolides, antibiotics used against bacterial illnesses in poultry, and the effects of such bans on changing the risk of human food-borne campylobacteriosis infections. The purposes of this paper are to describe novel causal methods for assessing empirical causation and inference; exemplify how to deal with biases that routinely arise in multivariate exposure- or dose-response modeling; and provide a simplified discussion of a case study of causal inference using microbial risk analysis as an example. The case study supports the conclusion that the human health benefits from a ban are unlikely to be greater than the excess human health risks that it could create, even when accounting for uncertainty. We conclude that quantitative causal analysis of risks is preferable to qualitative assessment because it does not involve unjustified loss of information and is sound under the inferential use of risk results by management.

  16. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel, simplified model enabling calculation of the voltage-dependent retardance provided by parallel-aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box where we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations for the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in the PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe into internal characteristics of the PA-LCoS device.

  17. Defence mechanisms: the role of physiology in current and future environmental protection paradigms

    PubMed Central

    Glover, Chris N

    2018-01-01

    Abstract Ecological risk assessments principally rely on simplified metrics of organismal sensitivity that do not consider mechanism or biological traits. As such, they are unable to adequately extrapolate from standard laboratory tests to real-world settings, and largely fail to account for the diversity of organisms and environmental variables that occur in natural environments. However, an understanding of how stressors influence organism health can compensate for these limitations. Mechanistic knowledge can be used to account for species differences in basal biological function and variability in environmental factors, including spatial and temporal changes in the chemical, physical and biological milieu. Consequently, physiological understanding of biological function, and how this is altered by stressor exposure, can facilitate proactive, predictive risk assessment. In this perspective article, existing frameworks that utilize physiological knowledge (e.g. biotic ligand models, adverse outcomes pathways and mechanistic effect models), are outlined, and specific examples of how mechanistic understanding has been used to predict risk are highlighted. Future research approaches and data needs for extending the incorporation of physiological information into ecological risk assessments are discussed. Although the review focuses on chemical toxicants in aquatic systems, physical and biological stressors and terrestrial environments are also briefly considered. PMID:29564135

  18. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives a more accurate estimate. Another set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
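The three simplified formulations can be written down directly. With serial fraction f and p processors: Amdahl's law fixes the problem size; Gustafson's scaled speedup fixes the execution time; memory-bounded speedup scales the parallel workload by a factor g(p) determined by available memory, and recovers Amdahl's law for g(p) = 1 and Gustafson's law for g(p) = p, as the abstract states.

```python
# The three simplified speedup models, with serial fraction f and
# p processors. g is the memory-bounded workload scaling function.
def amdahl(f, p):
    return 1.0 / (f + (1.0 - f) / p)

def gustafson(f, p):
    return f + (1.0 - f) * p

def memory_bounded(f, p, g):
    scaled = (1.0 - f) * g(p)
    return (f + scaled) / (f + scaled / p)

f, p = 0.05, 64
print(amdahl(f, p))                          # fixed-size speedup
print(gustafson(f, p))                       # fixed-time (scaled) speedup
print(memory_bounded(f, p, g=lambda q: 1))   # reduces to Amdahl's law
print(memory_bounded(f, p, g=lambda q: q))   # reduces to Gustafson's law
```

With a 5% serial fraction, fixed-size speedup saturates near 1/f = 20 regardless of p, while the scaled formulations keep growing with p, which is the core point of the scalability argument.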

  19. An integrated and pragmatic approach: Global plant safety management

    NASA Astrophysics Data System (ADS)

    McNutt, Jack; Gross, Andrew

    1989-05-01

    The Bhopal disaster in India in 1984 has compelled manufacturing companies to review their operations in order to minimize their risk exposure. Much study has been done on the subject of risk assessment and in refining safety reviews of plant operations. However, little work has been done to address the broader needs of decision makers in the multinational environment. The corporate headquarters of multinational organizations are concerned with identifying vulnerable areas to assure that appropriate risk-minimization measures are in force or will be taken. But the task of screening global business units for safety prowess is complicated and time consuming. This article takes a step towards simplifying this process by presenting the decisional model developed by the authors. Beginning with an overview of key issues affecting global safety management, the focus shifts to the multinational vulnerability model developed by the authors, which reflects an integration of approaches. The article concludes with a discussion of areas for further research. While the global chemical industry and major incidents therein are used for illustration, the procedures and solutions suggested here are applicable to all manufacturing operations.

  20. Simplified models of dark matter with a long-lived co-annihilation partner

    NASA Astrophysics Data System (ADS)

    Khoze, Valentin V.; Plascencia, Alexis D.; Sakurai, Kazuki

    2017-06-01

    We introduce a new set of simplified models to address the effects of 3-point interactions between the dark matter particle, its dark co-annihilation partner, and the Standard Model degree of freedom, which we take to be the tau lepton. The contributions from dark matter co-annihilation channels are highly relevant for a determination of the correct relic abundance. We investigate these effects as well as the discovery potential for dark matter co-annihilation partners at the LHC. A small mass splitting between the dark matter and its partner is preferred by the co-annihilation mechanism and suggests that the co-annihilation partners may be long-lived (stable or meta-stable) at collider scales. It is argued that such long-lived electrically charged particles can be looked for at the LHC in searches for anomalous charged tracks. This approach and the underlying models provide an alternative and complement to the mono-jet and multi-jet based dark matter searches widely used in the context of simplified models with s-channel mediators. We consider four types of simplified models with different particle spins and coupling structures. Some of these models are manifestly gauge invariant and renormalizable; others would ultimately require a UV completion. These can be realised in terms of supersymmetric models in the neutralino-stau co-annihilation regime, as well as models with extra dimensions or composite models.

  1. Risk assessment of TBT in the Japanese short-neck clam ( Ruditapes philippinarum) of Tokyo Bay using a chemical fate model

    NASA Astrophysics Data System (ADS)

    Horiguchi, Fumio; Nakata, Kisaburo; Ito, Naganori; Okawa, Ken

    2006-12-01

    A risk assessment of tributyltin (TBT) in Tokyo Bay was conducted using the Margin of Exposure (MOE) method at the species level for the Japanese short-neck clam, Ruditapes philippinarum. The assessment endpoint was defined to protect R. philippinarum in Tokyo Bay from TBT (growth effects). A No Observed Effect Concentration (NOEC) for this species with respect to growth reduction induced by TBT was estimated from experimental results published in the scientific literature. Sources of TBT in this study were assumed to be commercial vessels in harbors and navigation routes. Concentrations of TBT in Tokyo Bay were estimated using a three-dimensional hydrodynamic model, an ecosystem model and a chemical fate model. MOEs for this species were estimated for the years 1990, 2000, and 2007. Estimated MOEs for R. philippinarum for 1990, 2000, and 2007 were approximately 1-3, 10, and 100, respectively, indicating a declining temporal trend in the probability of adverse growth effects. A simplified software package called RAMTB was developed by incorporating the chemical fate model and the databases of seasonal flow fields and distributions of organic substances (phytoplankton and detritus) in Tokyo Bay, simulated by the hydrodynamic and ecological models, respectively.
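The MOE arithmetic itself is a single ratio: the NOEC divided by the estimated environmental concentration, so a larger MOE means a wider safety margin. The NOEC and concentrations below are placeholder values for illustration, not the study's data; only the declining-concentration trend mirrors the abstract.

```python
# Margin of exposure: MOE = NOEC / estimated exposure concentration.
def margin_of_exposure(noec, exposure):
    """noec and exposure in the same units (e.g. ng TBT / L)."""
    return noec / exposure

noec = 100.0  # hypothetical growth-effect NOEC, ng TBT / L
for year, conc in [(1990, 50.0), (2000, 10.0), (2007, 1.0)]:
    print(year, margin_of_exposure(noec, conc))
```

Falling exposure at a fixed NOEC yields the rising MOE sequence (here 2, 10, 100), matching the qualitative pattern reported for the three assessment years.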

  2. A Practical Probabilistic Graphical Modeling Tool for Weighing ...

    EPA Pesticide Factsheets

    Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for ecological risk determinations. Probabilistic approaches can provide both a quantitative weighing of lines of evidence and methods for evaluating risk and uncertainty. The current modeling structure was developed for propagating uncertainties in measured endpoints and their influence on the plausibility of adverse effects. To illustrate the approach, we apply the model framework to the sediment quality triad using example lines of evidence for sediment chemistry measurements, bioassay results, and in situ infauna diversity of benthic communities in a simplified hypothetical case study. We then combine the three lines of evidence, evaluate sensitivity to the input parameters, and show how uncertainties are propagated and how additional information can be incorporated to rapidly update the probability of impacts. The developed network model can be expanded to accommodate additional lines of evidence, variables and states of importance, and different types of uncertainties in the lines of evidence, including spatial and temporal variability as well as measurement errors. We provide a flexible Bayesian network structure for weighing and integrating lines of evidence for ecological risk determinations.
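The simplest probabilistic weighing of lines of evidence is a Bayes update of the probability of adverse impact, multiplying prior odds by one likelihood ratio per line. The likelihood ratios below for the three sediment-quality-triad lines are hypothetical, and a full Bayesian network (as in the abstract) would additionally model dependence between lines rather than treating them as independent.

```python
# Naive-Bayes sketch: update the probability of adverse impact with
# one likelihood ratio per line of evidence (assumed independent).
def update_impact_probability(prior, likelihood_ratios):
    """prior: P(impact) before evidence; likelihood_ratios: LR > 1
    supports impact, LR < 1 weighs against it."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# chemistry exceedance (supports impact), bioassay toxicity (supports),
# normal infauna diversity (weighs against) -- hypothetical values.
lines = [4.0, 3.0, 0.5]
print(update_impact_probability(0.2, lines))
```

Because the update is multiplicative in odds, a new line of evidence can be folded in at any time without recomputing the earlier ones, which is the "rapidly update" property the abstract highlights.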

  3. Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model

    PubMed Central

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors relies mainly on dual hardware redundancy, which cannot always isolate a fault because a disagreement between two channels does not indicate which one has failed; adding further hardware redundancy would increase structural complexity and weight. The simplified on-board model instead provides an analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model comprises a gas generator model and a power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failure, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, the method is tested on a turbo-shaft engine, and the two fault types are presented under different channel combinations. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
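    The triplex comparison logic can be sketched as a simple voting scheme: two hardware channels plus the analytical value from the on-board model, with the outlier declared faulty when the spread exceeds a tolerance. The function, tolerance, and readings below are illustrative assumptions, not the paper's FDD implementation.

```python
# Triplex-channel sensor fault isolation (sketch). With two hardware channels
# and one analytical (model-based) channel, a disagreement can be isolated to
# the reading farthest from the median of the three.

def diagnose(channel_a: float, channel_b: float, model_value: float,
             tolerance: float) -> str:
    """Return 'ok' or the name of the channel judged faulty."""
    readings = {"A": channel_a, "B": channel_b, "model": model_value}
    if max(readings.values()) - min(readings.values()) <= tolerance:
        return "ok"
    # Fault isolation: the reading farthest from the median of the three.
    median = sorted(readings.values())[1]
    return max(readings, key=lambda k: abs(readings[k] - median)) + " faulty"

print(diagnose(100.0, 100.2, 99.9, 1.0))   # agreement within tolerance
print(diagnose(100.0, 105.0, 100.1, 1.0))  # one hardware channel has drifted
```

With only two channels, the second case would be detectable but not isolable; the model-based third channel is what makes the vote decisive.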

  4. Computational split-field finite-difference time-domain evaluation of simplified tilt-angle models for parallel-aligned liquid-crystal devices

    NASA Astrophysics Data System (ADS)

    Márquez, Andrés; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Álvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2018-03-01

    Simplified analytical models with predictive capability enable simpler and faster optimization of the performance in applications of complex photonic devices. We recently demonstrated the most simplified analytical model still showing predictive capability for parallel-aligned liquid crystal on silicon (PA-LCoS) devices, which provides the voltage-dependent retardance for a very wide range of incidence angles and any wavelength in the visible. We further show that the proposed model is not only phenomenological but also physically meaningful, since two of its parameters provide the correct values for important internal properties of these devices related to the birefringence, cell gap, and director profile. Therefore, the proposed model can be used as a means to inspect internal physical properties of the cell. As an innovation, we also show the applicability of the split-field finite-difference time-domain (SF-FDTD) technique for phase-shift and retardance evaluation of PA-LCoS devices under oblique incidence. As a simplified model for PA-LCoS devices, we also consider the exact description of homogeneous birefringent slabs. However, we show that, despite its higher degree of simplification, the proposed model is more robust, providing unambiguous and physically meaningful solutions when fitting its parameters.

  5. Fault diagnostics for turbo-shaft engine sensors based on a simplified on-board model.

    PubMed

    Lu, Feng; Huang, Jinquan; Xing, Yaodong

    2012-01-01

    Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a double redundancies technique, and this can't be satisfied in some occasions as lack of judgment. The simplified on-board model provides the analytical third channel against which the dual channel measurements are compared, while the hardware redundancy will increase the structure complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, this is built up via dynamic parameters method. Sensor fault detection, diagnosis (FDD) logic is designed, and two types of sensor failures, such as the step faults and the drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient.

  6. Multi-Fidelity Framework for Modeling Combustion Instability

    DTIC Science & Technology

    2016-07-27

    …generated from the reduced-domain dataset. Evaluations of the framework are performed based on simplified test problems for a model rocket combustor, showing…

  7. Simplified hydraulic model of French vertical-flow constructed wetlands.

    PubMed

    Arias, Luis; Bertrand-Krajewski, Jean-Luc; Molle, Pascal

    2014-01-01

    Designing vertical-flow constructed wetlands (VFCWs) to treat both rain events and dry weather flow is a complex task due to the stochastic nature of rain events. Dynamic models can help to improve design, but they usually prove difficult for designers to handle. This study focuses on the development of a simplified hydraulic model of French VFCWs using an empirical infiltration coefficient, the infiltration capacity parameter (ICP). The model was fitted using 60-second-step data collected on two experimental French VFCW systems and compared with the Hydrus 1D software. The model revealed a season-by-season evolution of the ICP that could be explained by the mechanical role of reeds. This simplified model makes it possible to define time-course shifts in ponding time and outlet flows. As ponding time hinders oxygen renewal, impacting nitrification and organic matter degradation, ponding time limits can be used to fix a reliable design for treating both dry weather flow and rain events.

  8. Risk Importance Measures in the Design and Operation of Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vrbanic, I.; Samanta, P.; Basic, I.

    This monograph presents and discusses risk importance measures as quantified by the probabilistic risk assessment (PRA) models of nuclear power plants (NPPs) developed according to current standards and practices. Usually, PRA tools calculate risk importance measures related to a single "basic event" representing a particular failure mode, and this is reflected in many current PRA applications. The monograph focuses on the concept of "component-level" importance measures that take into account different failure modes of the component, including common-cause failures (CCFs). The opening sections introduce the role of risk assessment in the safety analysis of an NPP and discuss "traditional", mainly deterministic, design principles that have been established to assign a level of importance to a particular system, structure, or component. This is followed by an overview of the main risk importance measures for risk increase and risk decrease from current PRAs. Basic relations that exist among the measures are shown, and some current practical applications of risk importance measures in NPP design, operation, and regulation are discussed. The core of the monograph discusses the theoretical background and practical aspects of the main risk importance measures at the level of a "component" as modeled in a PRA, starting from the simplest case, a single basic event, and moving toward more complex cases with multiple basic events and involvement in CCF groups. The intent is to express the component-level importance measures via the importance measures and probabilities of the underlying single basic events, which are inputs readily available from a PRA model and its results. Formulas are derived and discussed for some typical cases, and their results are demonstrated through practical examples based on a simplified PRA model developed in and run by the RiskSpectrum tool, presented in the appendices. The monograph concludes with a discussion of the limitations of risk importance measures and a summary of the component-level importance cases evaluated.
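    Two of the standard importance measures discussed above have simple definitions in terms of the top-event risk R as a function of a basic-event probability p: Risk Achievement Worth, RAW = R(p=1)/R(p), and Fussell-Vesely, FV = (R(p) - R(p=0))/R(p). The sketch below applies them to a toy two-train-plus-independent-contributor risk model; the model and probabilities are illustrative stand-ins for a full PRA, not the monograph's examples.

```python
# Basic-event importance measures on a toy top-event risk model (sketch).
# Top event occurs if both redundant trains A and B fail, or an independent
# contributor occurs. All probabilities are hypothetical.

def risk(p_a: float, p_b: float, p_other: float) -> float:
    """P(top event) for two redundant trains plus an independent contributor."""
    p_trains = p_a * p_b
    return p_trains + p_other - p_trains * p_other  # union of independent events

def raw_and_fv(p_a: float, p_b: float, p_other: float) -> tuple[float, float]:
    """Risk Achievement Worth and Fussell-Vesely for basic event A."""
    r = risk(p_a, p_b, p_other)
    raw = risk(1.0, p_b, p_other) / r          # A assumed failed
    fv = (r - risk(0.0, p_b, p_other)) / r     # A assumed perfectly reliable
    return raw, fv

raw, fv = raw_and_fv(0.01, 0.01, 1.0e-4)  # train failures vs. background risk
```

RAW above 1 flags how much the risk would grow if the component were failed; FV between 0 and 1 gives the fraction of current risk involving it.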

  9. Anthropogenic impact on flood-risk: a large-scale assessment for planning controlled inundation strategies along the River Po

    NASA Astrophysics Data System (ADS)

    Domeneghetti, Alessio; Castellarin, Attilio; Brath, Armando

    2013-04-01

    The European Flood Directive (2007/60/EC) has fostered the development of innovative and sustainable approaches and methodologies for flood-risk mitigation and management. Concerning flood-risk mitigation, growing awareness of how anthropogenic pressures (e.g., demographic and land-use dynamics, uncontrolled urban and industrial expansion on flood-prone areas) can strongly increase potential flood damages and losses has triggered a paradigm shift from "defending the territory against flooding" (e.g., by strengthening and heightening levee systems) to "living with floods" (e.g., promoting compatible land uses or adopting controlled flooding of areas located outside the main embankments). Assessing how socio-economic dynamics may influence flood risk is fundamental for planning sustainable industrial and urban development of flood-prone areas, reducing their vulnerability and therefore minimizing socio-economic and ecological losses due to large flood events. These aspects, which are of fundamental importance for institutions and public bodies in charge of Flood Directive requirements, need to be considered through a holistic approach at the river basin scale. This study focuses on the evaluation of large-scale flood-risk mitigation strategies for the middle-lower reach of the River Po (~350 km), the longest Italian river and the largest in terms of streamflow. Given the social and economic importance of the Po River floodplain (almost 40% of the total national gross product comes from this area), our study investigates the potential of combining simplified vulnerability indices with a quasi-2D model for the definition of sustainable and robust flood-risk mitigation strategies. Referring to past (1954) and recent (2006) land-use datasets (e.g., CORINE), we propose simplified vulnerability indices for assessing the potential flood risk of industrial and urbanized flood-prone areas, taking into account altimetry and population density, and we analyze the modification of flood risk that occurred during recent decades due to the demographic dynamics of the River Po floodplains. The flood hazard associated with a high-magnitude event (i.e., a return period of about 500 years) was estimated by means of a quasi-2D hydraulic model set up for the middle-lower portion of the Po River and its major tributaries. The results highlight how coupling a large-scale numerical model with the proposed flood-vulnerability indices can be a useful tool for decision-makers called to define sustainable spatial development plans for the study area, or to identify priorities in the organization of civil protection actions during a major flood event, which could include controlled flooding of flood-prone areas located outside the main embankment system.

  10. Anatomical and spiral wave reentry in a simplified model for atrial electrophysiology.

    PubMed

    Richter, Yvonne; Lind, Pedro G; Seemann, Gunnar; Maass, Philipp

    2017-04-21

    For modeling the propagation of action potentials in the human atria, various models have been developed in the past, which take into account in detail the influence of the numerous ionic currents flowing through the cell membrane. Aiming at a simplified description, the Bueno-Orovio-Cherry-Fenton (BOCF) model for electric wave propagation in the ventricle has been adapted recently to atrial physiology. Here, we study this adapted BOCF (aBOCF) model with respect to its capability to accurately generate spatio-temporal excitation patterns found in anatomical and spiral wave reentry. To this end, we compare results of the aBOCF model with the more detailed one proposed by Courtemanche, Ramirez and Nattel (CRN model). We find that characteristic features of the reentrant excitation patterns seen in the CRN model are well captured by the aBOCF model. This opens the possibility to study origins of atrial fibrillation based on a simplified but still reliable description. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Mono-X versus direct searches: simplified models for dark matter at the LHC

    DOE PAGES

    Liew, Seng Pei; Papucci, Michele; Vichi, Alessandro; ...

    2017-06-15

    We consider simplified models for dark matter (DM) at the LHC, focused on a mono-Higgs, -Z, or -b produced in the final state. Our primary purpose is to study the LHC reach of a relatively complete set of simplified models for these final states, while comparing the reach of the mono-X DM search against direct searches for the mediating particle. We find that direct searches for the mediating particle, whether in di-jets, jets+E_T, multi-b+E_T, or di-boson+E_T, are usually stronger. We draw attention to the cases where the mono-X search is strongest, which include regions of parameter space in inelastic DM, two-Higgs-doublet, and squark-mediated production models with a compressed spectrum.

  12. Mono-X versus direct searches: simplified models for dark matter at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liew, Seng Pei; Papucci, Michele; Vichi, Alessandro

    We consider simplified models for dark matter (DM) at the LHC, focused on a mono-Higgs, -Z, or -b produced in the final state. Our primary purpose is to study the LHC reach of a relatively complete set of simplified models for these final states, while comparing the reach of the mono-X DM search against direct searches for the mediating particle. We find that direct searches for the mediating particle, whether in di-jets, jets+E_T, multi-b+E_T, or di-boson+E_T, are usually stronger. We draw attention to the cases where the mono-X search is strongest, which include regions of parameter space in inelastic DM, two-Higgs-doublet, and squark-mediated production models with a compressed spectrum.

  13. Simplifying informed consent for biorepositories: stakeholder perspectives.

    PubMed

    Beskow, Laura M; Friedman, Joëlle Y; Hardy, N Chantelle; Lin, Li; Weinfurt, Kevin P

    2010-09-01

    Complex and sometimes controversial information must be conveyed during the consent process for participation in biorepositories, and studies suggest that consent documents in general are growing in length and complexity. As a first step toward creating a simplified biorepository consent form, we gathered data from multiple stakeholders about what information was most important for prospective participants to know when deciding whether to take part in a biorepository. We recruited 52 research participants, 12 researchers, and 20 institutional review board representatives from Durham and Kannapolis, NC. These subjects were asked to read a model biorepository consent form and highlight the sentences they deemed most important. On average, institutional review board representatives identified 72.3% of the sentences as important; researchers selected 53.0%, and participants 40.4% (P = 0.0004). Participants most often selected sentences about the kinds of individual research results that might be offered, privacy risks, and large-scale data sharing. Researchers highlighted sentences about the biorepository's purpose, privacy protections, costs, and participant access to individual results. Institutional review board representatives highlighted sentences about the collection of basic personal information, medical record access, and duration of storage. The differing mandates of these three groups can translate into widely divergent opinions about what information is important and appropriate to include in a consent form. These differences could frustrate efforts to move simplified forms, for biobanking as well as for other kinds of research, into actual use, despite continued calls for such forms.

  14. Simplified ISCCP cloud regimes for evaluating cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    We take advantage of ISCCP simulator data available for many models that participated in CMIP5, in order to introduce a framework for comparing model cloud output with corresponding ISCCP observations based on the cloud regime (CR) concept. Simplified global CRs are employed, derived from the co-variations of three variables, namely cloud optical thickness, cloud top pressure, and cloud fraction (τ, p_c, CF). Following evaluation criteria established in a companion paper of ours (Jin et al. 2016), we assess model cloud simulation performance based on how well the simplified CRs are simulated in terms of similarity of centroids, global values and map correlations of relative frequency of occurrence, and long-term total cloud amounts. Mirroring prior results, modeled clouds tend to be too optically thick and not as extensive as in observations. CRs with high-altitude clouds from storm activity are not as well simulated here compared to the previous study, but other regimes containing near-overcast low clouds show improvement. Models that performed well in the companion paper against CRs defined by joint τ-p_c histograms distinguish themselves again here, but improvements for previously underperforming models are also seen. Averaging across models does not yield a drastically better picture, except for cloud geographical locations. Cloud evaluation with simplified regimes thus seems more forgiving than that using histogram-based CRs, while still strict enough to reveal model weaknesses.

  15. Model Checking a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2007-01-01

    This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.

  16. Combined Hydrologic (AGWA-KINEROS2) and Hydraulic (HEC2) Modeling for Post-Fire Runoff and Inundation Risk Assessment through a Set of Python Tools

    NASA Astrophysics Data System (ADS)

    Barlow, J. E.; Goodrich, D. C.; Guertin, D. P.; Burns, I. S.

    2016-12-01

    Wildfires in the Western United States can alter landscapes by removing vegetation and changing soil properties. These altered landscapes produce more runoff than pre-fire landscapes, which can lead to post-fire flooding that damages infrastructure and impairs natural resources. Resources, structures, historical artifacts, and other assets that could be impacted by increased runoff are considered values at risk. The Automated Geospatial Watershed Assessment tool (AGWA) allows users to quickly set up and execute the Kinematic Runoff and Erosion model (KINEROS2, or K2) in the ESRI ArcMap environment. The AGWA-K2 workflow leverages the visualization capabilities of GIS to facilitate rapid watershed assessments for post-fire planning efforts. A high relative change in peak discharge, as simulated by K2, provides a visual and numeric indicator of the channels in the watershed that should be evaluated in more detail, especially if values at risk are within or near a channel. Modeling inundation extent along a channel would provide more specific guidance about risk along that channel. HEC-2 and HEC-RAS can be used for hydraulic modeling at the reach and river-system scale. These models have been used to delineate flood boundaries and, accordingly, flood risk. However, data collection and organization for hydraulic models can be time consuming, and therefore a combined hydrologic-hydraulic modeling approach is not often employed for rapid assessments. A simplified approach could streamline this process and provide managers with a simple workflow and tool to perform a quick risk assessment for a single reach. By focusing on a single reach highlighted by a large relative change in peak discharge, data collection efforts can be minimized and the hydraulic computations can be performed to supplement risk analysis.
The incorporation of hydraulic analysis through a suite of Python tools (as outlined by HEC-2) with AGWA-K2 will allow more rapid applications of combined hydrologic-hydraulic modeling. This combined modeling approach is built in the ESRI ArcGIS application to enable rapid model preparation, execution and result visualization for risk assessment in post-fire environments.

  17. Bioaccessibility and human health risk assessment of lead in soil from Daye City

    NASA Astrophysics Data System (ADS)

    Li, Q.; Li, F.; Xiao, M. S.; Cai, Y.; Xiong, L.; Huang, J. B.; Fu, J. T.

    2018-01-01

    Lead (Pb) in soil from four sampling sites in Daye City was studied. The bioaccessibility of Pb in soil was determined by the simplified bioaccessible extraction test (SBET). Since traditional health risk assessment is based on total metal content, it may overestimate risk; a modified human health risk assessment model that accounts for bioaccessibility was therefore built in this study. The health risks to adults and children exposed to Pb were evaluated from both total and bioaccessible contents. The results showed that the bioaccessible Pb content in soil was much lower than the total content, with an average bioaccessible factor (BF) of only 25.37%. The hazard indexes (HIs) for adults and children calculated by both methods were all lower than 1, indicating no non-carcinogenic risk from Pb for the population of Daye. The bioaccessibility-based HIs for adults and children were lower than those based on total content, owing to the lower hazard quotients (HQs). The proportions of non-carcinogenic risk from different exposure pathways also changed; in particular, the main exposure pathway for adults shifted from oral ingestion to inhalation.
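    The HQ/HI arithmetic behind this kind of assessment is compact: HQ = average daily dose / reference dose, HI is the sum of HQs over exposure pathways, and the bioaccessibility adjustment scales the ingested dose by the bioaccessible fraction. The doses and reference dose below are illustrative placeholders, not the Daye City measurements; only the 25.37% BF is taken from the abstract.

```python
# Non-carcinogenic risk sketch with a bioaccessibility adjustment.
# HQ = ADD / RfD; HI = sum of HQs over pathways; HI < 1 means no expected
# non-carcinogenic risk. Doses (mg/kg-day) and RfD are hypothetical.

def hazard_quotient(average_daily_dose: float, reference_dose: float) -> float:
    return average_daily_dose / reference_dose

def hazard_index(doses_and_rfds: list[tuple[float, float]]) -> float:
    return sum(hazard_quotient(d, rfd) for d, rfd in doses_and_rfds)

bf = 0.2537  # average bioaccessible fraction reported above

# Two pathways: oral ingestion (scaled by BF in the modified model) and inhalation.
total_hi = hazard_index([(1.0e-4, 3.5e-3), (2.0e-6, 3.5e-3)])
bioaccessible_hi = hazard_index([(1.0e-4 * bf, 3.5e-3), (2.0e-6, 3.5e-3)])
```

Scaling only the oral dose by BF lowers the HI and shrinks ingestion's share of the total, which is how the dominant pathway can shift toward inhalation.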

  18. A Machine Learning Framework for Plan Payment Risk Adjustment.

    PubMed

    Rose, Sherri

    2016-12-01

    To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
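    Cross-validated R², the comparison metric named above, can be computed by holding out each fold, fitting on the remainder, and pooling held-out squared errors. The sketch below uses a univariate least-squares fit on synthetic data as a stand-in for the regression and ensemble methods in the abstract; it is standard-library only and purely illustrative.

```python
# Cross-validated R^2 sketch (stdlib only). A univariate OLS fit stands in
# for a risk-adjustment formula; data are synthetic.

def fit_ols(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def cross_validated_r2(xs: list[float], ys: list[float], k: int = 5) -> float:
    """Pool held-out squared errors across k folds into one R^2."""
    sse = sst = 0.0
    mean_y = sum(ys) / len(ys)
    for fold in range(k):
        train = [i for i in range(len(xs)) if i % k != fold]
        test = [i for i in range(len(xs)) if i % k == fold]
        slope, intercept = fit_ols([xs[i] for i in train], [ys[i] for i in train])
        for i in test:
            sse += (ys[i] - (slope * xs[i] + intercept)) ** 2
            sst += (ys[i] - mean_y) ** 2
    return 1.0 - sse / sst

xs = [float(x) for x in range(40)]
ys = [2.0 * x + 1.0 + (0.5 if x % 2 else -0.5) for x in range(40)]  # near-linear
```

Because every prediction is made on data the fit never saw, the pooled R² penalizes overfit formulas, which is what lets a simplified formula be compared fairly against a larger one.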

  19. Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura

    2018-05-01

    Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they allow one to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides a sensible gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first-quantized description of a Dirac fermion. As an application, we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.

  20. Information security governance: a risk assessment approach to health information systems protection.

    PubMed

    Williams, Patricia A H

    2013-01-01

    It is no small task to manage the protection of healthcare data and healthcare information systems. In an environment that demands adaptation to change for all information collection, storage and retrieval systems, including those for e-health, it is imperative that good information security governance is in place. This includes understanding and meeting legislative and regulatory requirements. This chapter provides three models to educate and guide organisations in this complex area, to simplify the process of information security governance, and to ensure that appropriate and effective measures are put in place. The approach is risk based, adapted and contextualized for healthcare. In addition, the specific impacts of cloud services, secondary use of data, big data and mobile health are discussed.

  1. A systematic risk management approach employed on the CloudSat project

    NASA Technical Reports Server (NTRS)

    Basilio, R. R.; Plourde, K. S.; Lam, T.

    2000-01-01

    The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria.

  2. The New York risk score for in-hospital and 30-day mortality for coronary artery bypass graft surgery.

    PubMed

    Hannan, Edward L; Farrell, Louise Szypulski; Wechsler, Andrew; Jordan, Desmond; Lahey, Stephen J; Culliford, Alfred T; Gold, Jeffrey P; Higgins, Robert S D; Smith, Craig R

    2013-01-01

    Simplified risk scores for coronary artery bypass graft surgery are frequently used in lieu of more complicated statistical models and are valuable for informed consent and choice of intervention. Previous risk scores have been based on in-hospital mortality, but a substantial number of patients die within 30 days of the procedure. These deaths should also be accounted for, so we have developed a risk score based on in-hospital and 30-day mortality. New York's Cardiac Surgery Reporting System was used to develop an in-hospital and 30-day logistic regression model for patients undergoing coronary artery bypass graft surgery in 2009, and this model was converted into a simple linear risk score that provides estimated in-hospital and 30-day mortality rates for different values of the score. The accuracy of the risk score in predicting mortality was tested. This score was also validated by applying it to 2008 New York coronary artery bypass graft data. Subsequent analyses evaluated the ability of the risk score to predict complications and length of stay. The overall in-hospital and 30-day mortality rate for the 10,148 patients in the study was 1.79%. There are seven risk factors comprising the score, with risk factor scores ranging from 1 to 5, and the highest possible total score is 23. The score accurately predicted mortality in 2009 as well as in 2008, and was strongly correlated with complications and length of stay. The risk score is a simple way of estimating short-term mortality that accurately predicts mortality in the year the model was developed as well as in the previous year. Perioperative complications and length of stay are also well predicted by the risk score. Copyright © 2013 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
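    The mechanics of this kind of additive score are simple: integer points per risk factor are summed, and the total maps to an estimated mortality rate. The factor names, point values, and rate table below are hypothetical placeholders for illustration only; they are not the published New York score.

```python
# Additive risk score sketch. Points per risk factor are summed and the
# total is mapped to an estimated in-hospital/30-day mortality rate via a
# banded lookup table. All factors, points, and rates are hypothetical.

POINTS = {
    "age_75_plus": 3,
    "low_ejection_fraction": 2,
    "emergency_surgery": 5,
    "renal_failure": 4,
    "prior_cabg": 3,
}

# Lower bound of each score band -> estimated mortality rate.
RATE_BY_SCORE_BAND = {0: 0.005, 5: 0.02, 10: 0.06, 15: 0.15}

def total_score(factors: set[str]) -> int:
    return sum(POINTS[f] for f in factors)

def estimated_mortality(score: int) -> float:
    band = max(b for b in RATE_BY_SCORE_BAND if b <= score)
    return RATE_BY_SCORE_BAND[band]

score = total_score({"age_75_plus", "emergency_surgery"})
rate = estimated_mortality(score)
```

The banded lookup is what makes such scores usable at the bedside: no logistic regression arithmetic is needed once the table is printed.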

  3. 68Ga-PSMA-617 PET/CT: a promising new technique for predicting risk stratification and metastatic risk of prostate cancer patients.

    PubMed

    Liu, Chen; Liu, Teli; Zhang, Ning; Liu, Yiqiang; Li, Nan; Du, Peng; Yang, Yong; Liu, Ming; Gong, Kan; Yang, Xing; Zhu, Hua; Yan, Kun; Yang, Zhi

    2018-05-02

    The purpose of this study was to investigate the performance of 68Ga-PSMA-617 PET/CT in predicting risk stratification and metastatic risk of prostate cancer. Fifty newly diagnosed patients with biopsy-confirmed prostate cancer were consecutively included, 40 in a training set and ten in a test set. 68Ga-PSMA-617 PET/CT and clinical data of all patients were retrospectively analyzed. Semi-quantitative analysis of the PET images provided the maximum standardized uptake value (SUVmax) of the primary prostate cancer and volumetric parameters, including intraprostatic PSMA-derived tumor volume (iPSMA-TV) and intraprostatic total lesion PSMA (iTL-PSMA). According to the prostate cancer risk stratification criteria of the NCCN Guidelines, all patients were grouped into a low-intermediate-risk group or a high-risk group. The semi-quantitative parameters of 68Ga-PSMA-617 PET/CT were used to establish univariate logistic regression models for high-risk prostate cancer and its metastatic risk, and the diagnostic efficacy of each predictive model was evaluated. In the training set, 30/40 (75%) patients had high-risk prostate cancer and 10/40 (25%) had low-intermediate-risk prostate cancer; in the test set, 8/10 (80%) had high-risk and 2/10 (20%) had low-intermediate-risk prostate cancer. The univariate logistic regression models established with SUVmax, iPSMA-TV and iTL-PSMA could all effectively predict high-risk prostate cancer; the areas under the ROC curve (AUC) were 0.843, 0.802 and 0.900, respectively. On the test set, the sensitivity and specificity of each model were 87.5% and 50% for SUVmax, 62.5% and 100% for iPSMA-TV, and 87.5% and 100% for iTL-PSMA. The iPSMA-TV- and iTL-PSMA-based models could also predict the metastatic risk of prostate cancer (AUC 0.863 and 0.848, respectively), whereas the SUVmax-based model could not.
Semi-quantitative analysis indices of 68Ga-PSMA-617 PET/CT imaging can be used as "imaging biomarkers" to predict risk stratification and metastatic risk of prostate cancer.
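
    The ROC analysis behind these AUC figures can be illustrated with a short sketch: the AUC of a single-predictor model such as SUVmax reduces to the Mann-Whitney rank statistic. The patient labels and uptake values below are hypothetical, not the study's data:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) patient pairs that the score ranks correctly,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = high-risk, 0 = low-intermediate risk;
# scores are illustrative SUVmax values, not measured data.
labels = [0, 0, 1, 1]
suvmax = [4.2, 7.9, 6.5, 18.3]
```

    An AUC of 1.0 would mean every high-risk patient shows higher uptake than every low-intermediate-risk patient; 0.5 is chance level.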

  4. Volcanic hazards at distant critical infrastructure: A method for bespoke, multi-disciplinary assessment

    NASA Astrophysics Data System (ADS)

    Odbert, H. M.; Aspinall, W.; Phillips, J.; Jenkins, S.; Wilson, T. M.; Scourse, E.; Sheldrake, T.; Tucker, P.; Nakeshree, K.; Bernardara, P.; Fish, K.

    2015-12-01

    Societies rely on critical services such as power, water, transport networks and manufacturing. Infrastructure may be sited to minimise exposure to natural hazards but not all can be avoided. The probability of long-range transport of a volcanic plume to a site is comparable to other external hazards that must be considered to satisfy safety assessments. Recent advances in numerical models of plume dispersion and stochastic modelling provide a formalized and transparent approach to probabilistic assessment of hazard distribution. To understand the risks to critical infrastructure far from volcanic sources, it is necessary to quantify their vulnerability to different hazard stressors. However, infrastructure assets (e.g. power plants and operational facilities) are typically complex systems in themselves, with interdependent components that may differ in susceptibility to hazard impact. Usually, such complexity means that risk either cannot be estimated formally or that unsatisfactory simplifying assumptions are prerequisite to building a tractable risk model. We present a new approach to quantifying risk by bridging the expertise of physical hazard modellers and infrastructure engineers. We use a joint expert judgment approach to determine hazard model inputs and constrain associated uncertainties. Model outputs are chosen on the basis of engineering or operational concerns. The procedure facilitates an interface between physical scientists, with expertise in volcanic hazards, and infrastructure engineers, with insight into vulnerability to hazards. The result is a joined-up approach to estimating risk from low-probability hazards to critical infrastructure. We describe our methodology and show preliminary results for vulnerability to volcanic hazards at a typical UK industrial facility. We discuss our findings in the context of developing bespoke assessment of hazards from distant sources in collaboration with key infrastructure stakeholders.

  5. A simplified solar cell array modelling program

    NASA Technical Reports Server (NTRS)

    Hughes, R. D.

    1982-01-01

    As part of the energy conversion/self sufficiency efforts of DSN engineering, it was necessary to have a simplified computer model of a solar photovoltaic (PV) system. This article describes the analysis and simplifications employed in the development of a PV cell array computer model. The analysis of the incident solar radiation, steady state cell temperature and the current-voltage characteristics of a cell array are discussed. A sample cell array was modelled and the results are presented.

  6. A Manpower Model for U.S. Navy Operational Contracting

    DTIC Science & Technology

    2012-06-01

    Accomplishment Time RFP Request For Proposal SAF/FM Air Force Financial Management SAP Simplified Acquisition Procedures SAT Simplified...conformance and seller’s release of claim (Garrett, 2007). 2. Contract Size and its Effect on Workload Simplified acquisition procedures (SAP) were...the SAP dollar threshold. 14 The drastic reduction in KO workload through the use of SAP is unmatched by any federal authorization that came

  7. Development and Validation of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury

    DTIC Science & Technology

    2018-03-01

    of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury. PRINCIPAL...and methods, results - include tables/figures, and conclusions/applications.) Objectives/Background: Acute kidney injury (AKI) is a serious

  8. A SIMPLIFIED MODELING OF FLUSHING AND RESIDENCE TIME IN 42 EMBAYMENTS IN NEW ENGLAND, USA, WITH SPECIAL ATTENTION TO GREENWICH BAY, RHODE ISLAND

    EPA Science Inventory

    A simplified protocol has been developed to meet the need for modeling hydrodynamics and transport in large numbers of embayments quickly and reliably. The procedure is illustrated with 42 embayments in southern New England, USA, giving special attention to Greenwich Bay, RI. The...

  9. A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Cheng; Ding, Dazhi, E-mail: dzding@njust.edu.cn; Fan, Zhenhong

    2015-03-15

    A simplified plasma limiter prototype is proposed and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, showing good agreement for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.

  10. The Application of a Massively Parallel Computer to the Simulation of Electrical Wave Propagation Phenomena in the Heart Muscle Using Simplified Models

    NASA Technical Reports Server (NTRS)

    Karpoukhin, Mikhii G.; Kogan, Boris Y.; Karplus, Walter J.

    1995-01-01

    The simulation of heart arrhythmia and fibrillation are very important and challenging tasks. The solution of these problems using sophisticated mathematical models is beyond the capabilities of modern supercomputers. To overcome these difficulties it is proposed to break the whole simulation problem into two tightly coupled stages: generation of the action potential using sophisticated models, and propagation of the action potential using simplified models. The well-known simplified models are compared and modified to bring the rate of depolarization and the action potential duration restitution closer to reality. The modified method of lines is used to parallelize the computational process. The conditions for the appearance of 2D spiral waves after the application of a premature beat, and the subsequent traveling of the spiral wave inside the simulated tissue, are studied.

  11. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  12. The effects of atmospheric chemistry on radiation budget in the Community Earth Systems Model

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Czader, B.; Diao, L.; Rodriguez, J.; Jeong, G.

    2013-12-01

    The Community Earth Systems Model (CESM)-Whole Atmosphere Community Climate Model (WACCM) simulations were performed to study the impact of atmospheric chemistry on the surface radiation budget within a weather prediction time scale. The secondary goal is to obtain a simplified and optimized chemistry module for the short time period. Three different chemistry modules were utilized to represent tropospheric and stratospheric chemistry, which differ in how their reactions and species are represented: (1) simplified tropospheric and stratospheric chemistry (approximately 30 species), (2) simplified tropospheric chemistry and comprehensive stratospheric chemistry from the Model of Ozone and Related Chemical Tracers, version 3 (MOZART-3, approximately 60 species), and (3) comprehensive tropospheric and stratospheric chemistry (MOZART-4, approximately 120 species). Our results indicate that the different levels of detail in the chemistry treatment of these model components affect the surface temperature and the radiation budget.

  13. Modeling of Nitrogen Oxides Emissions from CFB Combustion

    NASA Astrophysics Data System (ADS)

    Kallio, S.; Keinonen, M.

    In this work, a simplified description of combustion and nitrogen oxides chemistry was implemented in a 1.5D model framework with the aim of comparing the results with those obtained earlier with a detailed reaction scheme. The simplified chemistry was written using 12 chemical components. Heterogeneous chemistry is given by the same models as in the earlier work, but the homogeneous and catalytic reactions have been altered. The models have been taken from the literature. The paper describes the numerical model with emphasis on the chemistry submodels. A simulation of combustion of bituminous coal in the Chalmers 12 MW boiler is conducted and the results are compared with those obtained earlier with the detailed chemistry description. The results are also compared with measured O2, CO, NO and N2O profiles. The simplified reaction scheme produces results as good as those obtained earlier with the more elaborate chemistry description.

  14. Simplified Model and Response Analysis for Crankshaft of Air Compressor

    NASA Astrophysics Data System (ADS)

    Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang

    2017-11-01

    The original crankshaft model is simplified to an appropriate degree to balance calculation precision against calculation speed, and the finite element method is then used to analyse the vibration response of the structure. In order to study the simplification and stress concentration for the crankshaft of an air compressor, this paper compares the calculated and experimental mode frequencies of the crankshaft before and after simplification; the vibration response at a reference point under the constraint conditions is calculated using the simplified model, and the stress distribution of the original model is calculated. The results show that the error between calculated and experimental mode frequencies is kept below 7%, that the constraint changes the modal density of the system, and that stress concentration appears at the transition between the crank arm and the shaft, so this part of the crankshaft should be treated carefully during manufacture.

  15. Synergy and other ineffective mixture risk definitions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzberg, R.; MacDonell, M.; Environmental Assessment

    2002-04-08

    A substantial effort has been spent over the past few decades to label toxicologic interaction outcomes as synergistic, antagonistic, or additive. Although useful in influencing the emotions of the public and the press, these labels have contributed fairly little to our understanding of joint toxic action. Part of the difficulty is that their underlying toxicological concepts are only defined for two-chemical mixtures, while most environmental and occupational exposures are to mixtures of many more chemicals. Furthermore, the mathematical characterizations of synergism and antagonism are inextricably linked to the prevailing definition of 'no interaction,' instead of some intrinsic toxicological property. For example, the US EPA has selected dose addition as the no-interaction definition for mixture risk assessment, so that synergism would represent toxic effects that exceed those predicted from dose addition. For now, labels such as synergism are useful to regulatory agencies, both for qualitative indications of public health risk as well as numerical decision tools for mixture risk characterization. Efforts to quantify interaction designations for use in risk assessment formulas, however, are highly simplified and carry large uncertainties. Several research directions, such as pharmacokinetic measurements and models, and toxicogenomics, should promote significant improvements by providing multi-component data that will allow biologically based mathematical models of joint toxicity to replace these pairwise interaction labels in mixture risk assessment procedures.

  16. The cost of simplifying air travel when modeling disease spread.

    PubMed

    Lessler, Justin; Kaufman, James H; Ford, Daniel A; Douglas, Judith V

    2009-01-01

    Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in the specific effects of interventions on particular air routes, or in the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
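
    The pipe abstraction can be sketched in a few lines. In this minimal version (a sketch under our own simplifying assumptions, not the authors' implementation), travelers leaving each airport are spread over destinations in proportion to each destination's share of total arrivals, discarding all route-level structure:

```python
def pipe_flows(departures, arrivals):
    """Minimal 'pipe' model: flow i -> j is departures[i] times
    airport j's share of all arrivals; route-level ticket data
    is ignored entirely."""
    total = sum(arrivals.values())
    return {(i, j): d * arrivals[j] / total
            for i, d in departures.items()
            for j in arrivals}

# Toy three-airport system (illustrative counts, not 2007 ticket data).
dep = {"BWI": 300, "LAX": 500, "XNA": 20}
arr = {"BWI": 280, "LAX": 520, "XNA": 20}
flows = pipe_flows(dep, arr)
```

    A point-to-point model would instead read flow(i, j) directly from route data; the paper's finding is that the two disagree mainly for small airports and coast-to-coast routes.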

  17. Assessment of Geometry and In-Flow Effects on Contra-Rotating Open Rotor Broadband Noise Predictions

    NASA Technical Reports Server (NTRS)

    Zawodny, Nikolas S.; Nark, Douglas M.; Boyd, D. Douglas, Jr.

    2015-01-01

    Application of previously formulated semi-analytical models for the prediction of broadband noise due to turbulent rotor wake interactions and rotor blade trailing edges is performed on the historical baseline F31/A31 contra-rotating open rotor configuration. Simplified two-dimensional blade element analysis is performed on cambered NACA 4-digit airfoil profiles, which are meant to serve as substitutes for the actual rotor blade sectional geometries. Rotor in-flow effects such as induced axial and tangential velocities are incorporated into the noise prediction models based on supporting computational fluid dynamics (CFD) results and simplified in-flow velocity models. Emphasis is placed on the development of simplified rotor in-flow models for the purpose of performing accurate noise predictions independent of CFD information. The broadband predictions are found to compare favorably with experimental acoustic results.

  18. Receiving water quality assessment: comparison between simplified and detailed integrated urban modelling approaches.

    PubMed

    Mannina, Giorgio; Viviani, Gaspare

    2010-01-01

    Urban water quality management often requires use of numerical models allowing the evaluation of the cause-effect relationship between the input(s) (i.e. rainfall, pollutant concentrations on the catchment surface and in the sewer system) and the resulting water quality response. The conventional approach to the system (i.e. sewer system, wastewater treatment plant and receiving water body), considering each component separately, does not enable optimisation of the whole system. However, recent gains in understanding and modelling make it possible to represent the system as a whole and optimise its overall performance. Indeed, integrated urban drainage modelling is of growing interest for tools to cope with Water Framework Directive requirements. Two different approaches can be employed for modelling the whole urban drainage system: detailed and simplified. Each has its advantages and disadvantages. Specifically, detailed approaches can offer a higher level of reliability in the model results, but can be very time-consuming from the computational point of view. Simplified approaches are faster but may lead to greater model uncertainty due to over-simplification. To gain insight into this problem, the two modelling approaches have been compared with respect to their uncertainty. The first, detailed, integrated urban drainage model uses the Saint-Venant equations and the 1D advection-dispersion equations for the quantity and quality aspects, respectively. The second, simplified, approach consists of a reservoir model. The analysis used a parsimonious bespoke model developed in previous studies. For the uncertainty analysis, the Generalised Likelihood Uncertainty Estimation (GLUE) procedure was used. Model reliability was evaluated on the basis of its capacity to globally limit the uncertainty. Both models fit the experimental data well, suggesting that the approaches are equivalent for both quantity and quality.
The detailed model approach is more robust and shows narrower uncertainty bands; the simplified river water quality approach shows higher uncertainty and may be unsuitable for receiving water body quality assessment.
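
    The GLUE step can be sketched in a few lines. This is a minimal illustration using Nash-Sutcliffe efficiency as the likelihood measure and an assumed behavioural threshold; the paper's actual likelihood function and threshold may differ:

```python
def glue_weights(sims, obs, threshold=0.5):
    """GLUE sketch: score each candidate parameter set by Nash-Sutcliffe
    efficiency (NSE), keep only the 'behavioural' sets above the
    threshold, and normalise their scores into likelihood weights."""
    mean_obs = sum(obs) / len(obs)
    denom = sum((o - mean_obs) ** 2 for o in obs)
    def nse(sim):
        return 1.0 - sum((s - o) ** 2 for s, o in zip(sim, obs)) / denom
    scores = {k: nse(sim) for k, sim in sims.items()}
    behavioural = {k: v for k, v in scores.items() if v >= threshold}
    total = sum(behavioural.values())
    return {k: v / total for k, v in behavioural.items()}

# Toy example: observed series and three candidate simulations.
obs = [1.0, 2.0, 3.0]
sims = {"good": [1.0, 2.0, 3.0],
        "fair": [1.0, 2.0, 4.0],
        "poor": [3.0, 3.0, 3.0]}
weights = glue_weights(sims, obs)
```

    The uncertainty bands compared in the study are then percentiles of the model outputs weighted by these likelihoods.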

  19. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper outlines some new tools for assessing ground hazard risk in useful ways. This study also makes use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models assume. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave essentially like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints.
The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
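
    Under the simple Kepler assumption, the predicted latitude distribution has a closed form that is easy to evaluate. The sketch below uses the standard dwell-time result for a randomized circular orbit of inclination i (a textbook baseline for such comparisons, not NASA's production hazard code):

```python
import math

def latitude_density(lat_deg, inc_deg):
    """Unnormalised dwell-time density of sub-satellite latitude phi for
    a randomized circular orbit of inclination i: proportional to
    cos(phi) / sqrt(sin^2(i) - sin^2(phi)) for |phi| < i, else zero.
    The density peaks sharply near the inclination latitude."""
    phi, inc = math.radians(lat_deg), math.radians(inc_deg)
    gap = math.sin(inc) ** 2 - math.sin(phi) ** 2
    if gap <= 0.0:
        return 0.0  # no dwell time poleward of the inclination
    return math.cos(phi) / math.sqrt(gap)
```

    Comparing this baseline prediction with the DoD reentry database is what tests whether perturbations push real footprints away from simple Kepler behavior.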

  20. Tapping mode imaging with an interfacial force microscope

    NASA Astrophysics Data System (ADS)

    Warren, O. L.; Graham, J. F.; Norton, P. R.

    1997-11-01

    In their present embodiment, sensors used in interfacial force microscopy do not have the necessary mechanical bandwidth to be employed as free-running tapping mode devices. We describe an extremely stable method of obtaining tapping mode images using feedback on the sensor. Our method is immune to small dc drifts in the force signal, and the prospect of diminishing the risk of damaging fragile samples is realized. The feasibility of the technique is demonstrated by our imaging work on a Kevlar fiber-epoxy composite. We also present a model which accounts for the frequency dependence of the sensor in air when operating under closed loop control. A simplified force modulation model is investigated to explore the effect of contact on the closed loop response of the sensor.

  1. Thermal/structural design verification strategies for large space structures

    NASA Technical Reports Server (NTRS)

    Benton, David

    1988-01-01

    Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, built on two approaches. The first is to limit thermal testing to sub-elements of the total system, in a compact (i.e., not fully deployed) configuration. The second is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.

  2. Simplified Analytical Model of a Six-Degree-of-Freedom Large-Gap Magnetic Suspension System

    NASA Technical Reports Server (NTRS)

    Groom, Nelson J.

    1997-01-01

    A simplified analytical model of a six-degree-of-freedom large-gap magnetic suspension system is presented. The suspended element is a cylindrical permanent magnet that is magnetized in a direction which is perpendicular to its axis of symmetry. The actuators are air core electromagnets mounted in a planar array. The analytical model consists of an open-loop representation of the magnetic suspension system with electromagnet currents as inputs.

  3. Simplified aerosol representations in global modeling

    NASA Astrophysics Data System (ADS)

    Kinne, Stefan; Peters, Karsten; Stevens, Bjorn; Rast, Sebastian; Schutgens, Nick; Stier, Philip

    2015-04-01

    The detailed treatment of aerosol in global modeling is complex and time-consuming. Thus simplified approaches are investigated, which prescribe 4D (space and time) distributions of aerosol optical properties and of aerosol microphysical properties. Aerosol optical properties are required to assess aerosol direct radiative effects and aerosol microphysical properties (in terms of their ability as aerosol nuclei to modify cloud droplet concentrations) are needed to address the indirect aerosol impact on cloud properties. Following the simplifying concept of the monthly gridded (1x1 lat/lon) aerosol climatology (MAC), new approaches are presented and evaluated against more detailed methods, including comparisons to detailed simulations with complex aerosol component modules.

  4. A simplified method for determining reactive rate parameters for reaction ignition and growth in explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, P.J.

    1996-07-01

    A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.

  5. Debates—Perspectives on socio-hydrology: Modeling flood risk as a public policy problem

    NASA Astrophysics Data System (ADS)

    Gober, Patricia; Wheater, Howard S.

    2015-06-01

    Socio-hydrology views human activities as endogenous to water system dynamics; it is the interaction between human and biophysical processes that threatens the viability of current water systems through positive feedbacks and unintended consequences. Di Baldassarre et al. implement socio-hydrology as a flood risk problem using the concept of social memory as a vehicle to link human perceptions to flood damage. Their mathematical model has heuristic value in comparing potential flood damages in green versus technological societies. It can also support communities in exploring the potential consequences of policy decisions and evaluating critical policy tradeoffs, for example, between flood protection and economic development. The concept of social memory does not, however, adequately capture the social processes whereby public perceptions are translated into policy action, including the pivotal role played by the media in intensifying or attenuating perceived flood risk, the success of policy entrepreneurs in keeping flood hazard on the public agenda during short windows of opportunity for policy action, and different societal approaches to managing flood risk that derive from cultural values and economic interests. We endorse the value of seeking to capture these dynamics in a simplified conceptual framework, but favor a broader conceptualization of socio-hydrology that includes a knowledge exchange component, including the way modeling insights and scientific results are communicated to floodplain managers. The social processes used to disseminate the products of socio-hydrological research are as important as the research results themselves in determining whether modeling is used for real-world decision making.

  6. Risk factors of non-specific spinal pain in childhood.

    PubMed

    Szita, Julia; Boja, Sara; Szilagyi, Agnes; Somhegyi, Annamaria; Varga, Peter Pal; Lazary, Aron

    2018-05-01

    Non-specific spinal pain can occur at all ages and current evidence suggests that pediatric non-specific spinal pain is predictive for adult spinal conditions. A 5-year, prospective cohort study was conducted to identify the lifestyle and environmental factors leading to non-specific spinal pain in childhood. Data were collected from school children aged 7-16 years, who were randomly selected from three different geographic regions in Hungary. The risk factors were measured with a newly developed patient-reported questionnaire (PRQ). The quality of the instrument was assessed through its reliability with the test-retest method. Test (N = 952) and validity (N = 897) datasets were randomly formed. Risk factors were identified with uni- and multivariate logistic regression models and the predictive performance of the final model was evaluated using the receiver operating characteristic (ROC) method. The final model was built from seven risk factors for spinal pain lasting for days: age > 12 years, learning or watching TV for more than 2 h/day, an uncomfortable school desk, sleeping problems, general discomfort and a positive family medical history (χ2 = 101.07; df = 8; p < 0.001). The predictive performance was confirmed with ROC analysis on the test and validation cohorts (AUC = 0.76; 0.71). A simplified risk scoring system showed increasing probability of non-specific spinal pain depending on the number of identified risk factors (χ2 = 65.0; df = 4; p < 0.001). Seven significant risk factors of non-specific spinal pain in childhood were identified using the new, easy-to-use and reliable PRQ, which makes it possible to stratify children according to their individual risk. These slides can be retrieved under Electronic Supplementary Material.
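
    A count-based score of this kind can be sketched directly. The factor names below paraphrase the abstract and the yes/no coding is illustrative; the published PRQ items and scoring rules are in the paper:

```python
# Illustrative risk factors paraphrased from the abstract; the actual
# questionnaire items and their exact cut-offs are in the paper.
RISK_FACTORS = [
    "age_over_12",
    "screen_time_over_2h",
    "uncomfortable_school_desk",
    "sleeping_problems",
    "general_discomfort",
    "positive_family_history",
]

def risk_score(answers):
    """Simplified scoring: one point per risk factor present; a higher
    count implies a higher probability of non-specific spinal pain."""
    return sum(1 for f in RISK_FACTORS if answers.get(f, False))

# A hypothetical child with two factors present scores 2.
child = {"age_over_12": True, "sleeping_problems": True}
```

    Such a score stratifies children into risk groups without requiring the full logistic regression model at the point of care.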

  7. Evaluating performances of simplified physically based landslide susceptibility models.

    NASA Astrophysics Data System (ADS)

    Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale

    2015-04-01

    Rainfall-induced shallow landslides cause significant damage, involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated in the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system makes it possible to investigate and fairly compare the quality and robustness of models and model parameters, according to a procedure that includes: i) model parameter estimation by optimizing each GOF index separately, ii) model evaluation in the ROC plane using each optimal parameter set, and iii) GOF robustness evaluation by assessing sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No.
01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.

  8. Integrated Research on the Development of Global Climate Risk Management Strategies - Framework and Initial Results of the Research Project ICA-RUS

    NASA Astrophysics Data System (ADS)

    Emori, Seita; Takahashi, Kiyoshi; Yamagata, Yoshiki; Oki, Taikan; Mori, Shunsuke; Fujigaki, Yuko

    2013-04-01

    With the aim of proposing strategies for global climate risk management, we have launched a five-year research project called ICA-RUS (Integrated Climate Assessment - Risks, Uncertainties and Society). In this project, with the phrase "risk management" in its title, we aspire to a comprehensive assessment of climate change risks, explicit consideration of uncertainties, utilization of the best available information, and consideration of all possible conditions and options. We also regard the problem as one of decision-making at the human level, which involves social value judgments and adapts to future changes in circumstances. The ICA-RUS project consists of the following five themes: 1) Synthesis of global climate risk management strategies, 2) Optimization of land, water and ecosystem uses for climate risk management, 3) Identification and analysis of critical climate risks, 4) Evaluation of climate risk management options under technological, social and economic uncertainties and 5) Interactions between scientific and social rationalities in climate risk management (see also: http://www.nies.go.jp/ica-rus/en/). For the integration of quantitative knowledge of climate change risks and responses, we apply a tool named AIM/Impact [Policy], which consists of an energy-economic model, a simplified climate model and impact projection modules. At the same time, in order to make use of qualitative knowledge as well, we hold monthly project meetings to discuss risk management strategies and publish annual reports based on the quantitative and qualitative information. To enhance the comprehensiveness of the analyses, we maintain an inventory of risks and risk management options. The inventory is revised iteratively through interactive meetings with stakeholders such as policymakers, government officials and industrial representatives.
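    As an illustration of what a "simplified climate model" coupled to an energy-economic model can look like, here is a generic one-box energy-balance sketch. It is not the AIM/Impact [Policy] code, and the parameter values are invented:

```python
def temperature_response(forcings, lam=1.2, heat_cap=8.0, dt=1.0):
    """Explicit-Euler integration of C*dT/dt = F - lam*T.

    forcings: radiative forcing (W/m^2) for each year.
    Returns the warming (K) after each year. lam is the climate feedback
    parameter and heat_cap an effective ocean heat capacity; both values
    here are illustrative only."""
    temps, temp = [], 0.0
    for forcing in forcings:
        temp += dt * (forcing - lam * temp) / heat_cap
        temps.append(temp)
    return temps

# A sustained 3.7 W/m^2 forcing (roughly a CO2 doubling) relaxes toward
# the equilibrium warming F/lam.
warming = temperature_response([3.7] * 200)
```

    Impact modules in an integrated assessment chain would then map the warming trajectory onto sectoral risks.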

  9. Comment on 'Parametrization of Stillinger-Weber potential based on a valence force field model: application to single-layer MoS2 and black phosphorus'.

    PubMed

    Midtvedt, Daniel; Croy, Alexander

    2016-06-10

    We compare the simplified valence-force model for single-layer black phosphorus with the original model and recent ab initio results. Using an analytic approach and numerical calculations, we find that the simplified model yields Young's moduli that are smaller than those of the original model and almost a factor of two smaller than the ab initio results. Moreover, the Poisson ratios are an order of magnitude smaller than values found in the literature.

  10. Oak Ridge Spallation Neutron Source (ORSNS) target station design integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamy, T.; Booth, R.; Cleaves, J.

    1996-06-01

    The conceptual design for a 1- to 3-MW short pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance will be presented with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections by a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.

  11. Tire-rim interface pressure of a commercial vehicle wheel under radial loads: theory and experiment

    NASA Astrophysics Data System (ADS)

    Wan, Xiaofei; Shan, Yingchun; Liu, Xiandong; He, Tian; Wang, Jiegong

    2017-11-01

    The simulation of the radial fatigue test of a wheel has been a necessary tool to improve the design of the wheel and calculate its fatigue life. The simulation model, including the strong nonlinearity of the tire structure and material, may produce accurate results, but often leads to a divergence in calculation. Thus, a simplified simulation model in which the complicated tire model is replaced with a tire-wheel contact pressure model is used extensively in the industry. In this paper, a simplified tire-rim interface pressure model of a wheel under a radial load is established, and the pressure of the wheel under different radial loads is tested. The tire-rim contact behavior affected by the radial load is studied and analyzed according to the test result, and the tire-rim interface pressure extracted from the test result is used to evaluate the simplified pressure model and the traditional cosine function model. The results show that the proposed model may provide a more accurate prediction of the wheel radial fatigue life than the traditional cosine function model.
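    The traditional cosine-function pressure model referred to above can be sketched as follows. The half-angle, bead-seat width, and radius are invented values, and the normalization simply scales the peak pressure so the vertical resultant balances the applied radial load:

```python
import math

def cosine_pressure(load, theta0, width, radius, n=2001):
    """Tire-rim interface pressure under a radial load, assumed to follow
    p(theta) = p0 * cos(pi*theta/(2*theta0)) over the contact half-angle
    theta0 (rad); p0 is scaled so the vertical resultant over the bead
    seat (width, radius, both in m) balances the applied load (N)."""
    dtheta = 2 * theta0 / (n - 1)
    thetas = [-theta0 + k * dtheta for k in range(n)]
    # Vertical resultant produced by a unit peak pressure.
    unit_resultant = sum(math.cos(math.pi * t / (2 * theta0)) * math.cos(t)
                         * width * radius * dtheta for t in thetas)
    p0 = load / unit_resultant
    return p0, [(t, p0 * math.cos(math.pi * t / (2 * theta0))) for t in thetas]

p0, profile = cosine_pressure(load=5000.0, theta0=math.radians(40),
                              width=0.03, radius=0.19)
```

    The pressure vanishes at the contact edges and peaks at the bottom of the wheel, which is the shape the proposed simplified model is evaluated against.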

  12. Risk prediction score for death of traumatised and injured children

    PubMed Central

    2014-01-01

    Background Injury prediction scores facilitate the development of clinical management protocols to decrease mortality. However, most previously developed scores are limited in scope and are non-specific for use in children. We aimed to develop and validate a risk prediction model of death for injured and traumatised Thai children. Methods Our cross-sectional study included 43,516 injured children from 34 emergency services. A risk prediction model was derived using a logistic regression analysis that included 15 predictors. Model performance was assessed using the concordance statistic (C-statistic) and the observed-to-expected (O/E) ratio. Internal validation of the model was performed using a 200-repetition bootstrap analysis. Results Death occurred in 1.7% of the injured children (95% confidence interval [95% CI]: 1.57–1.82). Ten predictors (i.e., age, airway intervention, physical injury mechanism, three injured body regions, the Glasgow Coma Scale, and three vital signs) were significantly associated with death. The C-statistic and the O/E ratio were 0.938 (95% CI: 0.929–0.947) and 0.86 (95% CI: 0.70–1.02), respectively. The scoring scheme classified patients into three risk strata, with respective likelihood ratios of 1.26 (95% CI: 1.25–1.27), 2.45 (95% CI: 2.42–2.52), and 4.72 (95% CI: 4.57–4.88) for low, intermediate, and high risk of death. Internal validation showed good model performance (C-statistic = 0.938, 95% CI: 0.926–0.952) and a small calibration bias of 0.002 (95% CI: 0.0005–0.003). Conclusions We developed a simplified Thai pediatric injury death prediction score with satisfactory calibration and discrimination in emergency room settings. PMID:24575982
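    As a hedged sketch (not the authors' code), the two headline performance measures can be computed from predicted risks and outcomes like this:

```python
def c_statistic(risk, died):
    """Fraction of (death, survivor) pairs in which the death received
    the higher predicted risk; ties count one half."""
    pairs = concordant = 0.0
    for r_d, d_d in zip(risk, died):
        if d_d != 1:
            continue
        for r_s, d_s in zip(risk, died):
            if d_s != 0:
                continue
            pairs += 1
            if r_d > r_s:
                concordant += 1.0
            elif r_d == r_s:
                concordant += 0.5
    return concordant / pairs

def oe_ratio(risk, died):
    """Observed deaths over the sum of model-predicted death probabilities."""
    return sum(died) / sum(risk)

# Four toy patients with predicted risks and outcomes (1 = died).
risk = [0.9, 0.2, 0.6, 0.1]
died = [1, 0, 1, 0]
cs, oe = c_statistic(risk, died), oe_ratio(risk, died)
```

    An O/E ratio below 1, as reported here (0.86), indicates fewer observed deaths than the model expects.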

  13. Simplified subsurface modelling: data assimilation and violated model assumptions

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Lange, Natascha; Neuweiler, Insa

    2017-04-01

    Integrated models are gaining more and more attention in hydrological modelling, as they can better represent the interaction between different compartments. Naturally, these models come with a larger number of unknowns and greater computational requirements than stand-alone models. If large model domains are to be represented, e.g. on the catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches reduce the ability to reproduce the processes that are present. This loss of model accuracy may be compensated by using data assimilation methods, in which observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models, or whether they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. a shallow groundwater table) and the simplification assumptions are clearly violated. 
Such a case may be a steep hillslope or pumping wells creating lateral fluxes in the unsaturated zone, or strongly heterogeneous structures creating unaccounted flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey-box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown, and the resulting benefits and drawbacks will be discussed.
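    The Ensemble Kalman filter update at the heart of such a filter-driven approach can be sketched as follows; the state layout, observation operator, and error levels here are purely illustrative, not those of the study:

```python
import numpy as np

def enkf_update(ens, obs, H, obs_var, rng):
    """Stochastic EnKF analysis step.

    ens: (n_state, n_ens) forecast ensemble; obs: observation vector;
    H: (n_obs, n_state) linear observation operator; obs_var: observation
    error variance. Returns the updated ensemble."""
    n_obs, n_ens = H.shape[0], ens.shape[1]
    anom = ens - ens.mean(axis=1, keepdims=True)      # ensemble anomalies
    h_anom = H @ anom
    innov_cov = h_anom @ h_anom.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    cross_cov = anom @ h_anom.T / (n_ens - 1)
    gain = cross_cov @ np.linalg.inv(innov_cov)       # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent.
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return ens + gain @ (obs_pert - H @ ens)

rng = np.random.default_rng(0)
ens = rng.normal(2.0, 1.0, size=(3, 50))   # e.g. three groundwater heads
H = np.array([[1.0, 0.0, 0.0]])            # a well observing the first state
updated = enkf_update(ens, np.array([3.0]), H, obs_var=0.01, rng=rng)
```

    Unobserved states are corrected through their sampled covariance with the observed one, which is what lets a sparse observation network steer a simplified model.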

  14. Influence of vectors' risk-spreading strategies and environmental stochasticity on the epidemiology and evolution of vector-borne diseases: the example of Chagas' disease.

    PubMed

    Pelosse, Perrine; Kribs-Zaleta, Christopher M; Ginoux, Marine; Rabinovich, Jorge E; Gourbière, Sébastien; Menu, Frédéric

    2013-01-01

    Insects are known to display strategies that spread the risk of encountering unfavorable conditions, thereby decreasing the extinction probability of genetic lineages in unpredictable environments. To what extent these strategies influence the epidemiology and evolution of vector-borne diseases in stochastic environments is largely unknown. In triatomines, the vectors of the parasite Trypanosoma cruzi, the etiological agent of Chagas' disease, juvenile development time varies between individuals and such variation most likely decreases the extinction risk of vector populations in stochastic environments. We developed a simplified multi-stage vector-borne SI epidemiological model to investigate how vector risk-spreading strategies and environmental stochasticity influence the prevalence and evolution of a parasite. This model is based on available knowledge on triatomine biodemography, but its conceptual outcomes apply, to a certain extent, to other vector-borne diseases. Model comparisons between deterministic and stochastic settings led to the conclusion that environmental stochasticity, vector risk-spreading strategies (in particular an increase in the length and variability of development time) and their interaction have drastic consequences on vector population dynamics, disease prevalence, and the relative short-term evolution of parasite virulence. Our work shows that stochastic environments and associated risk-spreading strategies can increase the prevalence of vector-borne diseases and favor the invasion of more virulent parasite strains on relatively short evolutionary timescales. This study raises new questions and challenges in a context of increasingly unpredictable environmental variations as a result of global climate change and human interventions such as habitat destruction or vector control.

  15. Influence of Vectors’ Risk-Spreading Strategies and Environmental Stochasticity on the Epidemiology and Evolution of Vector-Borne Diseases: The Example of Chagas’ Disease

    PubMed Central

    Pelosse, Perrine; Kribs-Zaleta, Christopher M.; Ginoux, Marine; Rabinovich, Jorge E.; Gourbière, Sébastien; Menu, Frédéric

    2013-01-01

    Insects are known to display strategies that spread the risk of encountering unfavorable conditions, thereby decreasing the extinction probability of genetic lineages in unpredictable environments. To what extent these strategies influence the epidemiology and evolution of vector-borne diseases in stochastic environments is largely unknown. In triatomines, the vectors of the parasite Trypanosoma cruzi, the etiological agent of Chagas’ disease, juvenile development time varies between individuals and such variation most likely decreases the extinction risk of vector populations in stochastic environments. We developed a simplified multi-stage vector-borne SI epidemiological model to investigate how vector risk-spreading strategies and environmental stochasticity influence the prevalence and evolution of a parasite. This model is based on available knowledge on triatomine biodemography, but its conceptual outcomes apply, to a certain extent, to other vector-borne diseases. Model comparisons between deterministic and stochastic settings led to the conclusion that environmental stochasticity, vector risk-spreading strategies (in particular an increase in the length and variability of development time) and their interaction have drastic consequences on vector population dynamics, disease prevalence, and the relative short-term evolution of parasite virulence. Our work shows that stochastic environments and associated risk-spreading strategies can increase the prevalence of vector-borne diseases and favor the invasion of more virulent parasite strains on relatively short evolutionary timescales. This study raises new questions and challenges in a context of increasingly unpredictable environmental variations as a result of global climate change and human interventions such as habitat destruction or vector control. PMID:23951018
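    A drastically reduced, discrete-time caricature of such a stochastic vector SI model (all rates invented, for illustration only) shows how environmental stochasticity can be layered onto vector demography:

```python
import random

def simulate_prevalence(years=200, beta=0.4, mu=0.2, seed=1):
    """Toy vector SI dynamics in which each year is randomly favorable or
    unfavorable for recruitment. Returns the final infection prevalence
    in the vector population."""
    rng = random.Random(seed)
    sus, inf = 90.0, 10.0
    for _ in range(years):
        good_year = rng.random() < 0.5           # environmental stochasticity
        births = 30.0 if good_year else 5.0      # susceptible recruitment
        total = sus + inf
        new_inf = beta * sus * inf / total       # frequency-dependent transmission
        sus += births - new_inf - mu * sus
        inf += new_inf - mu * inf
    return inf / (sus + inf)

prev = simulate_prevalence()
```

    Risk-spreading strategies such as variable development time would enter through the recruitment term; comparing runs with and without the random environment is the kind of deterministic-versus-stochastic contrast the study performs at much greater detail.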

  16. BBN-Based Portfolio Risk Assessment for NASA Technology R&D Outcome

    NASA Technical Reports Server (NTRS)

    Geuther, Steven C.; Shih, Ann T.

    2016-01-01

    The NASA Aeronautics Research Mission Directorate (ARMD) vision falls into six strategic thrusts that are aimed to support the challenges of the Next Generation Air Transportation System (NextGen). In order to achieve the goals of the ARMD vision, the Airspace Operations and Safety Program (AOSP) is committed to developing and delivering new technologies. To meet the dual challenges of constrained resources and timely technology delivery, program portfolio risk assessment is critical for communication and decision-making. This paper describes how Bayesian Belief Network (BBN) is applied to assess the probability of a technology meeting the expected outcome. The network takes into account the different risk factors of technology development and implementation phases. The use of BBNs allows for all technologies of projects in a program portfolio to be separately examined and compared. In addition, the technology interaction effects are modeled through the application of object-oriented BBNs. The paper discusses the development of simplified project risk BBNs and presents various risk results. The results presented include the probability of project risks not meeting success criteria, the risk drivers under uncertainty via sensitivity analysis, and what-if analysis. Finally, the paper shows how program portfolio risk can be assessed using risk results from BBNs of projects in the portfolio.
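    The mechanics of such a BBN assessment can be illustrated on a toy two-factor network. The probabilities below are invented and real project networks are far larger; the point is only that the probability of a technology meeting its expected outcome is obtained by marginalizing over the states of its risk factors:

```python
from itertools import product

# Prior probabilities of the two risk factors resolving favorably.
p_dev_ok = 0.8    # development-phase risk retired
p_impl_ok = 0.7   # implementation-phase risk retired

# Conditional probability table: P(outcome met | dev state, impl state).
p_met = {(True, True): 0.95, (True, False): 0.40,
         (False, True): 0.30, (False, False): 0.05}

# Sum over all parent states, as a BBN engine would.
p_outcome = sum((p_dev_ok if dev else 1 - p_dev_ok)
                * (p_impl_ok if impl else 1 - p_impl_ok)
                * p_met[(dev, impl)]
                for dev, impl in product([True, False], repeat=2))
```

    Sensitivity and what-if analyses then amount to re-evaluating this sum with one factor's probability perturbed or clamped.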

  17. Simplified model of mean double step (MDS) in human body movement

    NASA Astrophysics Data System (ADS)

    Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando

    In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators to transform the gait cycle into a periodical movement process. Moreover, the method of simplifying the MDS model and compressing it is demonstrated. The simplification is achieved by reducing the number of bars of the spectrum and/or the number of samples describing the MDS, reducing both the computational burden and the data-storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
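    The compression-by-truncation idea can be sketched with a synthetic periodic signal standing in for an MDS waveform (the harmonic content here is invented):

```python
import numpy as np

# Synthetic periodic "gait" signal over one full cycle (256 samples).
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = (np.sin(2 * np.pi * t)
          + 0.4 * np.sin(4 * np.pi * t + 0.5)
          + 0.1 * np.sin(12 * np.pi * t))

# Compress by keeping only the first few bars of the spectrum.
spectrum = np.fft.rfft(signal)
truncated = spectrum.copy()
truncated[4:] = 0.0                     # discard harmonics above the third
reconstruction = np.fft.irfft(truncated, n=signal.size)

# The RMS error quantifies what the discarded spectral bars carried.
rms_error = np.sqrt(np.mean((signal - reconstruction) ** 2))
```

    Because a periodic signal concentrates its energy in a few harmonics, storing the retained complex coefficients instead of all samples gives the storage reduction described above.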

  18. Cavitation and Wake Structure of Unsteady Tip Vortex Flows

    DTIC Science & Technology

    1992-12-10

    ...wake structure generated by three-dimensional lifting surfaces. No longer can the wake be modeled as a simple horseshoe vortex structure with the tip...first initiates. [Figure captions recovered from the extract: "Far-Field Horseshoe Model of a Finite Wing", showing a finite wing; "Simplified Illustration of Wake Structure Behind an Oscillating Wing", a schematic of a simplified model of the trailing vortex.]

  19. Simplified models for displaced dark matter signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchmueller, Oliver; De Roeck, Albert; Hahn, Kristian

    We propose a systematic programme to search for long-lived neutral particle signatures through a minimal set of displaced missing-transverse-energy searches (dMETs). Here, our approach is to extend the well-established dark matter simplified models to include displaced vertices. The dark matter simplified models are used to describe the primary production vertex. A displaced secondary vertex, characterised by the mass of the long-lived particle and its lifetime, is added for the displaced signature. We show how these models can be motivated by, and mapped onto, complete models such as gauge-mediated SUSY breaking and models of neutral naturalness. We also outline how this approach may be used to extend other simplified models to incorporate displaced signatures and to characterise searches for long-lived charged particles. Displaced vertices are a striking signature which is often virtually background free, and thus provide an excellent target for the high-luminosity run of the Large Hadron Collider. The proposed models and searches provide a first step towards a systematic broadening of the displaced dark matter search programme.

  20. Molecular dynamics of conformational substates for a simplified protein model

    NASA Astrophysics Data System (ADS)

    Grubmüller, Helmut; Tavan, Paul

    1994-09-01

    Extended molecular dynamics simulations covering a total of 0.232 μs have been carried out on a simplified protein model. Despite its simplified structure, that model exhibits properties similar to those of more realistic protein models. In particular, the model was found to undergo transitions between conformational substates at a time scale of several hundred picoseconds. The computed trajectories turned out to be sufficiently long as to permit a statistical analysis of that conformational dynamics. To check whether effective descriptions neglecting memory effects can reproduce the observed conformational dynamics, two stochastic models were studied. A one-dimensional Langevin effective potential model derived by elimination of subpicosecond dynamical processes could not describe the observed conformational transition rates. In contrast, a simple Markov model describing the transitions between but neglecting dynamical processes within conformational substates reproduced the observed distribution of first passage times. These findings suggest that protein dynamics generally does not exhibit memory effects at time scales above a few hundred picoseconds, but they confirm the existence of memory effects at a picosecond time scale.
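    The memoryless (Markov) description found adequate above implies geometrically distributed dwell times between substate hops; a minimal simulation (with an invented escape probability) illustrates this:

```python
import random

def mean_dwell_time(p_leave, n_transitions=20000, seed=2):
    """Average number of steps spent in a substate before hopping, when
    the escape probability per step is p_leave. For a memoryless process
    the dwell times are geometric with mean 1/p_leave."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_transitions):
        steps = 1
        while rng.random() >= p_leave:    # stay with probability 1 - p_leave
            steps += 1
        total += steps
    return total / n_transitions

avg = mean_dwell_time(p_leave=0.02)       # expected mean dwell: about 50 steps
```

    A first-passage-time distribution that deviates from this geometric (exponential, in continuous time) form is the signature of memory effects that the study tests for.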

  1. Simplified models for displaced dark matter signatures

    DOE PAGES

    Buchmueller, Oliver; De Roeck, Albert; Hahn, Kristian; ...

    2017-09-18

    We propose a systematic programme to search for long-lived neutral particle signatures through a minimal set of displaced missing-transverse-energy searches (dMETs). Here, our approach is to extend the well-established dark matter simplified models to include displaced vertices. The dark matter simplified models are used to describe the primary production vertex. A displaced secondary vertex, characterised by the mass of the long-lived particle and its lifetime, is added for the displaced signature. We show how these models can be motivated by, and mapped onto, complete models such as gauge-mediated SUSY breaking and models of neutral naturalness. We also outline how this approach may be used to extend other simplified models to incorporate displaced signatures and to characterise searches for long-lived charged particles. Displaced vertices are a striking signature which is often virtually background free, and thus provide an excellent target for the high-luminosity run of the Large Hadron Collider. The proposed models and searches provide a first step towards a systematic broadening of the displaced dark matter search programme.

  2. Quantifying risk and benchmarking performance in the adult intensive care unit.

    PubMed

    Higgins, Thomas L

    2007-01-01

    Morbidity, mortality, and length-of-stay outcomes in patients receiving critical care are difficult to interpret unless they are risk-stratified for diagnosis, presenting severity of illness, and other patient characteristics. Acuity adjustment systems for adults include the Acute Physiology And Chronic Health Evaluation (APACHE), the Mortality Probability Model (MPM), and the Simplified Acute Physiology Score (SAPS). All have recently been updated and recalibrated to reflect contemporary results. Specialized scores are also available for patient subpopulations where general acuity scores have drawbacks. Demand for outcomes data is likely to grow with pay-for-performance initiatives as well as for routine clinical, prognostic, administrative, and research applications. It is important for clinicians to understand how these scores are derived and how they are properly applied to quantify patient severity of illness and benchmark intensive care unit performance.

  3. Guidelines and Metrics for Assessing Space System Cost Estimates

    DTIC Science & Technology

    2008-01-01

    ...analysis time, reuse tooling, models, mechanical ground-support equipment [MGSE]) High mass margin (simplifying assumptions used to bound solution...engineering environment changes High reuse of architecture, design, tools, code, test scripts, and commercial real-time operating systems Simplified life...

  4. A new method, with application, for analysis of the impacts on flood risk of widely distributed enhanced hillslope storage

    NASA Astrophysics Data System (ADS)

    Metcalfe, Peter; Beven, Keith; Hankin, Barry; Lamb, Rob

    2018-04-01

    Enhanced hillslope storage is utilised in natural flood management in order to retain overland storm run-off and to reduce connectivity between fast surface flow pathways and the channel. Examples include excavated ponds, deepened or bunded accumulation areas, and gullies and ephemeral channels blocked with wooden barriers or debris dams. The performance of large, distributed networks of such measures is poorly understood. Extensive schemes can potentially retain large quantities of run-off, but there are indications that much of their effectiveness can be attributed to desynchronisation of sub-catchment flood waves. Inappropriately sited measures may therefore increase, rather than mitigate, flood risk. Fully distributed hydrodynamic models have been applied in limited studies but introduce significant computational complexity. The longer run times of such models also restrict their use for uncertainty estimation or evaluation of the many potential configurations and storm sequences that may influence the timings and magnitudes of flood waves. Here a simplified overland flow-routing module and semi-distributed representation of enhanced hillslope storage is developed. It is applied to the headwaters of a large rural catchment in Cumbria, UK, where the use of an extensive network of storage features is proposed as a flood mitigation strategy. The models were run within a Monte Carlo framework against data for a 2-month period of extreme flood events that caused significant damage in areas downstream. Acceptable realisations and likelihood weightings were identified using the GLUE uncertainty estimation framework. Behavioural realisations were rerun against the catchment model modified with the addition of the hillslope storage. Three different drainage rate parameters were applied across the network of hillslope storage. 
The study demonstrates that schemes comprising widely distributed hillslope storage can be modelled effectively within such a reduced-complexity framework. It shows the importance of drainage rates from storage features operating through a sequence of events. We discuss limitations in the simplified representation of overland flow routing and of storage, and how these could be improved using experimental evidence. We suggest ways in which features could be grouped more strategically to improve the performance of such schemes.
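    The GLUE step described above can be sketched as follows. This is a generic implementation using the Nash-Sutcliffe efficiency as the informal likelihood and an invented behavioural threshold; the study's actual measures and data differ:

```python
import numpy as np

def glue_weights(simulated, observed, threshold=0.5):
    """simulated: (n_runs, n_times) Monte Carlo model outputs.
    Runs with Nash-Sutcliffe efficiency below the behavioural threshold
    are rejected; survivors' efficiencies become likelihood weights."""
    obs_var = np.var(observed)
    nse = 1.0 - np.mean((simulated - observed) ** 2, axis=1) / obs_var
    behavioural = nse > threshold
    w = np.where(behavioural, nse, 0.0)
    return behavioural, w / w.sum()

rng = np.random.default_rng(3)
observed = np.sin(np.linspace(0, 6, 50)) + 2.0          # synthetic flow record
simulated = observed + rng.normal(0, 0.4, size=(100, 50))  # 100 candidate runs
mask, weights = glue_weights(simulated, observed)
prediction = weights @ simulated     # likelihood-weighted ensemble prediction
```

    The behavioural realisations, reweighted this way, are then rerun on the modified catchment model, as in the study's storage scenarios.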

  5. Predicting Time to Hospital Discharge for Extremely Preterm Infants

    PubMed Central

    Hintz, Susan R.; Bann, Carla M.; Ambalavanan, Namasivayam; Cotten, C. Michael; Das, Abhik; Higgins, Rosemary D.

    2010-01-01

    As extremely preterm infant mortality rates have decreased, concerns regarding resource utilization have intensified. Accurate models to predict time to hospital discharge could aid in resource planning, family counseling, and perhaps stimulate quality improvement initiatives. Objectives For infants <27 weeks estimated gestational age (EGA), to develop, validate and compare several models to predict time to hospital discharge based on time-dependent covariates, and based on the presence of 5 key risk factors as predictors. Patients and Methods This was a retrospective analysis of infants <27 weeks EGA, born 7/2002-12/2005 and surviving to discharge from a NICHD Neonatal Research Network site. Time to discharge was modeled as a continuous variable (postmenstrual age at discharge, PMAD) and as categorical variables ("Early" and "Late" discharge). Three linear and logistic regression models with time-dependent covariate inclusion were developed (perinatal factors only, perinatal+early neonatal factors, perinatal+early+later factors). Models for Early and Late discharge using the cumulative presence of 5 key risk factors as predictors were also evaluated. Predictive capabilities were compared using the coefficient of determination (R2) for linear models, and the AUC of the ROC curve for logistic models. Results Data from 2254 infants were included. Prediction of PMAD was poor, with only 38% of variation explained by linear models. However, models incorporating later clinical characteristics were more accurate in predicting "Early" or "Late" discharge (full models: AUC 0.76-0.83 vs. perinatal factor models: AUC 0.56-0.69). In simplified key risk factor models, predicted probabilities for Early and Late discharge compared favorably with observed rates. Furthermore, the AUCs (0.75-0.77) were similar to those of models including the full factor set. 
Conclusions Prediction of Early or Late discharge is poor if only perinatal factors are considered, but improves substantially with knowledge of later-occurring morbidities. Prediction using a few key risk factors is comparable to full models, and may offer a clinically applicable strategy. PMID:20008430

  6. A Simplified Finite Element Simulation for Straightening Process of Thin-Walled Tube

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqian; Yang, Huilin

    2017-12-01

    The finite element simulation is an effective way to study the thin-walled tube in the two-cross-roll straightening process. To determine the accurate radius of curvature of the roll profile more efficiently, a simplified finite element model, based on the technical parameters of an actual two-cross-roll straightening machine, was developed to simulate the complex straightening process. A dynamic simulation was then carried out using the ANSYS LS-DYNA program. The results implied that the simplified finite element model is suitable for simulating the two-cross-roll straightening process and can be used to obtain the radius of curvature of the roll profile that achieves a tube straightness of 2 mm/m.

  7. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
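    The "simple analytical method" can be illustrated under a circular-orbit, spherical-Earth assumption (the function name and example inclination are ours): the sub-satellite latitude follows lat(t) = asin(sin i * sin(omega*t)), so the fraction of the period spent with |lat| <= L is (2/pi)*asin(sin L / sin i), and band fractions follow by differencing.

```python
import math

def band_fraction(lat_lo, lat_hi, incl_deg):
    """Fraction of a circular orbit of inclination incl_deg spent with
    |latitude| in [lat_lo, lat_hi] degrees, both hemispheres combined.
    Assumes a spherical Earth; latitudes above the inclination are
    never reached and are clipped."""
    def cdf(lat):
        lat = min(abs(lat), incl_deg)
        return (2.0 / math.pi) * math.asin(
            math.sin(math.radians(lat)) / math.sin(math.radians(incl_deg)))
    return cdf(lat_hi) - cdf(lat_lo)

# Residence fractions in 10-degree bands for a 51.6-degree (ISS-like) orbit;
# dwell time concentrates toward the extreme latitudes reached.
fractions = [band_fraction(lo, lo + 10, 51.6) for lo in range(0, 60, 10)]
```

    Weighting each band's fraction by the population living in it gives the kind of latitude-resolved casualty expectation the refined model computes, before the non-spherical corrections described above.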

  8. Improving Estimation of Ground Casualty Risk from Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, C.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination, and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.

  9. Multiple-Strain Approach and Probabilistic Modeling of Consumer Habits in Quantitative Microbial Risk Assessment: A Quantitative Assessment of Exposure to Staphylococcal Enterotoxin A in Raw Milk.

    PubMed

    Crotta, Matteo; Rizzi, Rita; Varisco, Giorgio; Daminelli, Paolo; Cunico, Elena Cosciani; Luini, Mario; Graber, Hans Ulrich; Paterlini, Franco; Guitian, Javier

    2016-03-01

    Quantitative microbial risk assessment (QMRA) models are extensively applied to inform management of a broad range of food safety risks. Inevitably, QMRA modeling involves an element of simplification of the biological process of interest. Two features that are frequently simplified or disregarded are the pathogenicity of multiple strains of a single pathogen and consumer behavior at the household level. In this study, we developed a QMRA model with a multiple-strain approach and a consumer phase module (CPM) based on uncertainty distributions fitted from field data. We modeled exposure to staphylococcal enterotoxin A in raw milk in Lombardy; a specific enterotoxin production module was thus included. The model is adaptable and could be used to assess the risk related to other pathogens in raw milk as well as other staphylococcal enterotoxins. The multiple-strain approach, implemented as a multinomial process, allowed the inclusion of variability and uncertainty with regard to pathogenicity at the bacterial level. Data from 301 questionnaires submitted to raw milk consumers were used to obtain uncertainty distributions for the CPM. The distributions were modeled to be easily updatable with further data or evidence. The sources of uncertainty due to the multiple-strain approach and the CPM were identified, and their impact on the output was assessed by comparing specific scenarios to the baseline. When the distributions reflecting the uncertainty in consumer behavior were fixed to the 95th percentile, the risk of exposure increased up to 160 times. This reflects the importance of taking into consideration the diversity of consumers' habits at the household level and the impact that the lack of knowledge about variables in the CPM can have on the final QMRA estimates.
The multiple-strain approach lends itself to use in other food matrices besides raw milk and allows the model to better capture the complexity of the real world and to be capable of geographical specificity.
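    The multinomial implementation of the multiple-strain approach can be sketched in a few lines: each bacterial cell in a serving is assigned to a strain type, and only enterotoxin-A-producing strains contribute to exposure. The strain probabilities and producer flags below are hypothetical placeholders, not the paper's fitted distributions:

```python
import random

def sea_capable_count(total_cfu, strain_probs, sea_positive, rng):
    """Multinomial partition of a bacterial load across strain types.

    strain_probs: probability that a cell belongs to each strain type
    (hypothetical values; in a full QMRA these would carry the uncertainty
    in pathogenicity). sea_positive: flags marking enterotoxin-A producers.
    Returns how many of total_cfu cells are SEA-capable.
    """
    counts = [0] * len(strain_probs)
    for _ in range(total_cfu):
        u, acc = rng.random(), 0.0
        for k, p in enumerate(strain_probs):
            acc += p
            if u <= acc:
                counts[k] += 1
                break
    return sum(c for c, pos in zip(counts, sea_positive) if pos)

rng = random.Random(42)
# Three illustrative strain types; only the first two produce enterotoxin A.
n_sea = sea_capable_count(1000, [0.2, 0.1, 0.7], [True, True, False], rng)
```

    In a full QMRA the strain probabilities would themselves be drawn from uncertainty distributions on each Monte Carlo iteration, propagating strain-level uncertainty into the exposure estimate.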

  10. Identification of patients at high risk for Clostridium difficile infection: development and validation of a risk prediction model in hospitalized patients treated with antibiotics.

    PubMed

    van Werkhoven, C H; van der Tempel, J; Jajou, R; Thijsen, S F T; Diepersloot, R J A; Bonten, M J M; Postma, D F; Oosterheert, J J

    2015-08-01

    To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary (derivation) and secondary care hospital (validation). Cases had a positive Clostridium test and were treated with systemic antibiotics before suspicion of CDI. Controls were randomly selected from hospitalized patients treated with systemic antibiotics. Potential predictors were selected from the literature. Logistic regression was used to derive the model. Discrimination and calibration of the model were tested in internal and external validation. A total of 180 cases and 330 controls were included for derivation. Age >65 years, recent hospitalization, CDI history, malignancy, chronic renal failure, use of immunosuppressants, receipt of antibiotics before admission, nonsurgical admission, admission to the intensive care unit, gastric tube feeding, treatment with cephalosporins and presence of an underlying infection were independent predictors of CDI. The area under the receiver operating characteristic curve of the model in the derivation cohort was 0.84 (95% confidence interval 0.80-0.87), and was reduced to 0.81 after internal validation. In external validation, consisting of 97 cases and 417 controls, the model area under the curve was 0.81 (95% confidence interval 0.77-0.85) and model calibration was adequate (Brier score 0.004). A simplified risk score was derived. Using a cutoff of 7 points, the positive predictive value, sensitivity and specificity were 1.0%, 72% and 73%, respectively. In conclusion, a risk prediction model was developed and validated, with good discrimination and calibration, that can be used to target preventive interventions in patients with increased risk of CDI. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
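    A points-based score of this kind reduces to summing weights over the predictors present and comparing the total with the cutoff. The abstract gives the predictor list and the 7-point cutoff but not the individual point weights, so the weights below are hypothetical placeholders:

```python
def cdi_risk_score(patient, weights):
    """Sum of points for the predictors present in a patient record.

    The predictor list follows the abstract; the point weights are
    hypothetical placeholders, not the published ones.
    """
    return sum(weights[p] for p, present in patient.items() if present)

# Hypothetical 1-or-2-point weights, for illustration only.
WEIGHTS = {
    "age_over_65": 1, "recent_hospitalization": 1, "cdi_history": 2,
    "malignancy": 1, "chronic_renal_failure": 1, "immunosuppressants": 1,
    "antibiotics_before_admission": 1, "nonsurgical_admission": 1,
    "icu_admission": 2, "gastric_tube_feeding": 1, "cephalosporins": 1,
    "underlying_infection": 1,
}

patient = {k: False for k in WEIGHTS}
patient.update(age_over_65=True, cdi_history=True, icu_admission=True,
               cephalosporins=True, recent_hospitalization=True)
score = cdi_risk_score(patient, WEIGHTS)
high_risk = score >= 7  # cutoff of 7 points, as in the abstract
```

    At a cutoff chosen for screening, such scores trade a very low positive predictive value (rare outcome) for usable sensitivity and specificity, which matches the 1.0%/72%/73% figures reported above.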

  11. A Mathematical Model for the Bee Hive of Apis Mellifera

    NASA Astrophysics Data System (ADS)

    Antonioni, Alberto; Bellom, Fabio Enrici; Montabone, Andrea; Venturino, Ezio

    2010-09-01

    In this work we introduce and discuss a model for the bee hive in which only adult bees and drones are modeled. The role that the latter play in the system is interesting: their population can recover even if they are totally absent from the bee hive. The feasibility and stability of the equilibria are studied numerically. A simplified version of the model shows the importance of the drones' role, in spite of the fact that it allows only a trivial equilibrium. For this simplified system, no Hopf bifurcations are shown to arise.
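    The recovery of drones from total absence follows whenever drone recruitment is driven by the worker population rather than by the drones themselves. A toy two-compartment system with that structure (illustrative equations and rates, not the authors' model) makes the point:

```python
def simulate_hive(B0, D0, steps=2000, dt=0.01):
    """Euler integration of an illustrative worker/drone system (not the
    paper's equations): workers B grow logistically, and drones D are
    produced at a rate proportional to B, so D recovers even from D = 0.
    """
    r, K, a, m = 0.5, 1000.0, 0.05, 0.1  # hypothetical rates
    B, D = B0, D0
    for _ in range(steps):
        dB = r * B * (1.0 - B / K)      # logistic worker growth
        dD = a * B - m * D              # worker-driven drone recruitment
        B += dt * dB
        D += dt * dD
    return B, D

# Start with no drones at all; they are replenished by the worker population.
B, D = simulate_hive(B0=500.0, D0=0.0)
```

    Because dD/dt > 0 whenever B > 0 and D = 0, the drone-free state is not absorbing, which is the qualitative behavior the abstract highlights.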

  12. Definition of ground test for verification of large space structure control

    NASA Technical Reports Server (NTRS)

    Doane, G. B., III; Glaese, J. R.; Tollison, D. K.; Howsman, T. G.; Curtis, S. (Editor); Banks, B.

    1984-01-01

    Control theory and design, dynamic system modelling, and simulation of test scenarios are the main ideas discussed. The overall effort is the achievement at Marshall Space Flight Center of a successful ground test experiment on a large space structure. A simplified planar model for ground test verification was developed, and the elimination from that model of the uncontrollable rigid body modes was examined. Hardware and software aspects of computation speed were also studied.

  13. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
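    The "black-box segment data" idea can be illustrated with a toy fit: sample a segment's terminal behavior at a few operating points, fit a simple input-output map, and substitute that map for the segment's internal buses. The linear voltage-drop model and least-squares fit below are an assumption for illustration, not the paper's actual procedure:

```python
def fit_segment_blackbox(samples):
    """Least-squares fit of a black-box segment model V_out = V_in - (a*P + b)
    from (V_in, P, V_out) operating-point samples. Only a sketch of the
    segment-substitution idea; the paper's segment model may differ.
    """
    # Normal equations for drop = a*P + b.
    n = len(samples)
    sp = sum(p for _, p, _ in samples)
    spp = sum(p * p for _, p, _ in samples)
    sd = sum(vin - vout for vin, _, vout in samples)
    spd = sum(p * (vin - vout) for vin, p, vout in samples)
    a = (n * spd - sp * sd) / (n * spp - sp * sp)
    b = (sd - a * sp) / n
    return lambda vin, p: vin - (a * p + b)

# Synthetic samples from a "true" segment with drop = 0.02*P + 0.001 (pu).
data = [(1.0, p, 1.0 - (0.02 * p + 0.001)) for p in (0.1, 0.5, 1.0, 1.5)]
seg = fit_segment_blackbox(data)
v = seg(1.0, 0.8)  # predicted receiving-end voltage at P = 0.8 pu
```

    In a QSTS loop, evaluating such a fitted map once per time step replaces repeated load-flow solutions over the segment's internal buses, which is where the reported speedup comes from.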

  14. Calculation of Thermally-Induced Displacements in Spherically Domed Ion Engine Grids

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2006-01-01

    An analytical method for predicting the thermally-induced normal and tangential displacements of spherically domed ion optics grids under an axisymmetric thermal loading is presented. A fixed edge support that could be thermally expanded is used for this analysis. Equations for the displacements both normal and tangential to the surface of the spherical shell are derived. A simplified equation for the displacement at the center of the spherical dome is also derived. The effects of plate perforation on displacements and stresses are determined by modeling the perforated plate as an equivalent solid plate with modified, or effective, material properties. Analytical model results are compared to the results from a finite element model. For the solid shell, comparisons showed that the analytical model produces results that closely match the finite element model results. The simplified equation for the normal displacement of the spherical dome center is also found to accurately predict this displacement. For the perforated shells, the analytical solution and simplified equation produce accurate results for materials with low thermal expansion coefficients.

  15. Boosting invisible searches via ZH: From the Higgs boson to dark matter simplified models

    NASA Astrophysics Data System (ADS)

    Gonçalves, Dorival; Krauss, Frank; Kuttimalai, Silvan; Maierhöfer, Philipp

    2016-09-01

    Higgs boson production in association with a Z boson at the LHC is analyzed, both in the Standard Model and in simplified model extensions for dark matter. We focus on H → invisibles searches and show that loop-induced components for both the signal and background present phenomenologically relevant contributions to the BR(H → inv) limits. We also show how multijet merging improves the description of key distributions to this analysis. In addition, the constraining power of this channel to simplified models for dark matter with scalar and pseudoscalar mediators ϕ and A is discussed and compared with noncollider constraints. We find that with 100 fb⁻¹ of LHC data, this channel provides competitive constraints to the noncollider bounds, for most of the parameter space we consider, bounding the universal Standard Model fermion-mediator strength at g_v < 1 for moderate masses in the range of 100 GeV

  16. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  17. Consequences of Landscape Fragmentation on Lyme Disease Risk: A Cellular Automata Approach

    PubMed Central

    Li, Sen; Hartemink, Nienke; Speybroeck, Niko; Vanwambeke, Sophie O.

    2012-01-01

    The abundance of infected Ixodid ticks is an important component of human risk of Lyme disease, and various empirical studies have shown that this is associated, at least in part, to landscape fragmentation. In this study, we aimed at exploring how varying woodland fragmentation patterns affect the risk of Lyme disease, through infected tick abundance. A cellular automata model was developed, incorporating a heterogeneous landscape with three interactive components: an age-structured tick population, a classical disease transmission function, and hosts. A set of simplifying assumptions were adopted with respect to the study objective and field data limitations. In the model, the landscape influences both tick survival and host movement. The validation of the model was performed with an empirical study. Scenarios of various landscape configurations (focusing on woodland fragmentation) were simulated and compared. Lyme disease risk indices (density and infection prevalence of nymphs) differed considerably between scenarios: (i) the risk could be higher in highly fragmented woodlands, which is supported by a number of recently published empirical studies, and (ii) grassland could reduce the risk in adjacent woodland, which suggests landscape fragmentation studies of zoonotic diseases should not focus on the patch-level woodland patterns only, but also on landscape-level adjacent land cover patterns. Further analysis of the simulation results indicated strong correlations between Lyme disease risk indices and the density, shape and aggregation level of woodland patches. These findings highlight the strong effect of the spatial patterns of local host population and movement on the spatial dynamics of Lyme disease risks, which can be shaped by woodland fragmentation. 
In conclusion, using a cellular automata approach is beneficial for modelling complex zoonotic transmission systems as it can be combined with either real world landscapes for exploring direct spatial effects or artificial representations for outlining possible empirical investigations. PMID:22761842
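    The core of such a cellular automaton is a local update rule: tick survival depends on the cell's land cover, and host movement disperses part of each cell's load to its neighbours. The grid, survival factors, and dispersal fraction below are hypothetical illustrations, not the paper's calibrated model:

```python
def step(grid, ticks):
    """One update of an illustrative cellular automaton (not the paper's
    calibrated model): ticks survive better in woodland ('W') than in
    grassland ('G'), and part of each cell's load disperses to the four
    neighbours via host movement (toroidal boundary for simplicity).
    """
    n = len(grid)
    survival = {"W": 0.9, "G": 0.4}  # hypothetical survival factors
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            stay = ticks[i][j] * survival[grid[i][j]]
            new[i][j] += 0.8 * stay  # 80% remain in place
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                new[(i + di) % n][(j + dj) % n] += 0.05 * stay
    return new

landscape = [["W", "W", "G"],
             ["W", "G", "G"],
             ["G", "G", "W"]]
ticks = [[10.0] * 3 for _ in range(3)]
for _ in range(20):
    ticks = step(landscape, ticks)
woodland = sum(ticks[i][j] for i in range(3) for j in range(3)
               if landscape[i][j] == "W")
grassland = sum(ticks[i][j] for i in range(3) for j in range(3)
                if landscape[i][j] == "G")
```

    Even this toy rule reproduces the qualitative pattern above: adjacent grassland acts as a sink that drains ticks from woodland edges, so landscape configuration, not just woodland area, shapes the risk surface.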

  18. Multi-Drug Resistance Transporters and a Mechanism-Based Strategy for Assessing Risks of Pesticide Combinations to Honey Bees

    PubMed Central

    Guseman, Alex J.; Miller, Kaliah; Kunkle, Grace; Dively, Galen P.; Pettis, Jeffrey S.; Evans, Jay D.; vanEngelsdorp, Dennis; Hawthorne, David J.

    2016-01-01

    Annual losses of honey bee colonies remain high and pesticide exposure is one possible cause. Dangerous combinations of pesticides, plant-produced compounds and antibiotics added to hives may cause or contribute to losses, but it is very difficult to test the many combinations of those compounds that bees encounter. We propose a mechanism-based strategy for simplifying the assessment of combinations of compounds, focusing here on compounds that interact with xenobiotic handling ABC transporters. We evaluate the use of ivermectin as a model substrate for these transporters. Compounds that increase sensitivity of bees to ivermectin may be inhibiting key transporters. We show that several compounds commonly encountered by honey bees (fumagillin, Pristine, quercetin) significantly increased honey bee mortality due to ivermectin and significantly reduced the LC50 of ivermectin, suggesting that they may interfere with transporter function. These inhibitors also significantly increased honey bees' sensitivity to the neonicotinoid insecticide acetamiprid. This mechanism-based strategy may dramatically reduce the number of tests needed to assess the possibility of adverse combinations among pesticides. We also demonstrate an in vivo transporter assay that provides physical evidence of transporter inhibition by tracking the dynamics of a fluorescent substrate of these transporters (Rhodamine B) in bee tissues. Significantly more Rhodamine B remains in the head and hemolymph of bees pretreated with higher concentrations of the transporter inhibitor verapamil. Mechanism-based strategies for simplifying the assessment of adverse chemical interactions such as described here could improve our ability to identify those combinations that pose significantly greater risk to bees and perhaps improve the risk assessment protocols for honey bees and similar sensitive species. PMID:26840460

  19. Multi-Drug Resistance Transporters and a Mechanism-Based Strategy for Assessing Risks of Pesticide Combinations to Honey Bees.

    PubMed

    Guseman, Alex J; Miller, Kaliah; Kunkle, Grace; Dively, Galen P; Pettis, Jeffrey S; Evans, Jay D; vanEngelsdorp, Dennis; Hawthorne, David J

    2016-01-01

    Annual losses of honey bee colonies remain high and pesticide exposure is one possible cause. Dangerous combinations of pesticides, plant-produced compounds and antibiotics added to hives may cause or contribute to losses, but it is very difficult to test the many combinations of those compounds that bees encounter. We propose a mechanism-based strategy for simplifying the assessment of combinations of compounds, focusing here on compounds that interact with xenobiotic handling ABC transporters. We evaluate the use of ivermectin as a model substrate for these transporters. Compounds that increase sensitivity of bees to ivermectin may be inhibiting key transporters. We show that several compounds commonly encountered by honey bees (fumagillin, Pristine, quercetin) significantly increased honey bee mortality due to ivermectin and significantly reduced the LC50 of ivermectin, suggesting that they may interfere with transporter function. These inhibitors also significantly increased honey bees' sensitivity to the neonicotinoid insecticide acetamiprid. This mechanism-based strategy may dramatically reduce the number of tests needed to assess the possibility of adverse combinations among pesticides. We also demonstrate an in vivo transporter assay that provides physical evidence of transporter inhibition by tracking the dynamics of a fluorescent substrate of these transporters (Rhodamine B) in bee tissues. Significantly more Rhodamine B remains in the head and hemolymph of bees pretreated with higher concentrations of the transporter inhibitor verapamil. Mechanism-based strategies for simplifying the assessment of adverse chemical interactions such as described here could improve our ability to identify those combinations that pose significantly greater risk to bees and perhaps improve the risk assessment protocols for honey bees and similar sensitive species.

  20. Reliability and availability modeling of coupled communication networks - A simplified modeling approach

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.; Cortes, Eladio R.

    1991-01-01

    The network-complexity of LANs and of LANs that are interconnected by bridges and routers poses a challenging reliability-modeling problem. The present effort attempts to simplify such problems by reducing the number of states through truncation and state merging, as suggested by Shooman and Laemmel (1990). Through the use of state merging, it becomes possible to reduce the Bateman-Cortes 161-state model to a two-state model with a closed-form solution. In the case of coupled networks, a technique which allows for problem decomposition must be used.
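    State merging can be illustrated on a toy Markov availability model: lump working states into one "up" super-state and failed states into one "down" super-state, then apply the closed-form two-state availability A = μ/(λ+μ). The equal-occupancy aggregation and the rates below are illustrative assumptions, not the Bateman-Cortes model:

```python
def merged_two_state_availability(up_states, down_states, q):
    """Steady-state availability after merging a Markov chain's states into
    'up' and 'down' super-states. q maps (i, j) -> transition rate. The
    aggregated failure/repair rates use the crude assumption of equal
    occupancy within each super-state, purely for illustration.
    """
    def agg(src, dst):
        return sum(q.get((i, j), 0.0) for i in src for j in dst) / len(src)

    lam = agg(up_states, down_states)   # effective failure rate
    mu = agg(down_states, up_states)    # effective repair rate
    return mu / (lam + mu)              # two-state closed form

# Toy 3-state model: states 0 and 1 working, state 2 failed
# (hypothetical per-hour rates).
q = {(0, 2): 0.001, (1, 2): 0.003, (2, 0): 0.1}
A = merged_two_state_availability([0, 1], [2], q)
```

    Truncation would additionally drop low-probability states before merging; both steps trade exactness for a model small enough to solve in closed form.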

  1. Four-Wave-Mixing Oscillations in a simplified Boltzmannian semiconductor model with LO-phonons

    NASA Astrophysics Data System (ADS)

    Tamborenea, P. I.; Bányai, L.; Haug, H.

    1996-03-01

    The recently discovered [L. Bányai, D. B. Tran Thoai, E. Reitsamer, H. Haug, D. Steinbach, M. U. Wehner, M. Wegener, T. Marschner, and W. Stolz, Phys. Rev. Lett. 75, 2188 (1995)] oscillations of the integrated four-wave-mixing signal in semiconductors due to electron-LO-phonon scattering are studied within a simplified Boltzmann-type model. Although several aspects of the experimental results require a description within the framework of non-Markovian quantum-kinetic theory, our simplified Boltzmannian model is well suited to analyze the origin of the observed novel oscillations of frequency (1 + m_e/m_h)ħω_LO. To this end, we developed a third-order, analytic solution of the semiconductor Bloch equations (SBE) with Boltzmann-type LO-phonon collision terms. Results of this theory along with numerical solutions of the SBE will be presented.
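    The quoted oscillation frequency translates directly into a beat period T = 2πħ / [(1 + m_e/m_h)ħω_LO]. A quick numeric check with GaAs-like textbook values (ħω_LO ≈ 36.8 meV, m_e/m_h ≈ 0.15), which are assumptions for illustration rather than parameters taken from the paper:

```python
import math

HBAR_EVS = 6.582119569e-16  # reduced Planck constant, eV*s

def oscillation_period_fs(hbar_w_lo_ev, me_over_mh):
    """Period of an oscillation at energy (1 + m_e/m_h) * hbar*w_LO, in fs."""
    energy_ev = (1.0 + me_over_mh) * hbar_w_lo_ev
    return 2.0 * math.pi * HBAR_EVS / energy_ev * 1e15

# GaAs-like values, used here purely as an illustrative assumption.
period = oscillation_period_fs(0.0368, 0.15)
```

    With these inputs the period comes out on the order of a hundred femtoseconds, i.e. within the time window accessible to the four-wave-mixing experiments discussed.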

  2. River predisposition to ice jams: a simplified geospatial model

    NASA Astrophysics Data System (ADS)

    De Munck, Stéphane; Gauthier, Yves; Bernier, Monique; Chokmani, Karem; Légaré, Serge

    2017-07-01

    Floods resulting from river ice jams pose a great risk to many riverside municipalities in Canada. The location of an ice jam is mainly influenced by channel morphology. The goal of this work was therefore to develop a simplified geospatial model to estimate the predisposition of a river channel to ice jams. Rather than predicting the timing of river ice breakup, the main question here was to predict where the broken ice is susceptible to jam based on the river's geomorphological characteristics. Thus, six parameters identified in the literature as potential causes of ice jams were initially selected: presence of an island, narrowing of the channel, high sinuosity, presence of a bridge, confluence of rivers, and slope break. A GIS-based tool was used to generate the aforementioned factors over regular-spaced segments along the entire channel using available geospatial data. An ice jam predisposition index (IJPI) was calculated by combining the weighted optimal factors. Three Canadian rivers (province of Québec) were chosen as test sites. The resulting maps were assessed from historical observations and local knowledge. Results show that 77 % of the observed ice jam sites on record occurred in river sections that the model considered as having high or medium predisposition. This leaves 23 % of false-negative errors (missed occurrences). Between 7 and 11 % of the highly predisposed river sections did not have an ice jam on record (false-positive cases). Results, limitations, and potential improvements are discussed.
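    Combining weighted factors into a predisposition index and then classifying segments can be sketched directly. The factor names follow the abstract; the weights and class thresholds below are hypothetical placeholders, not the calibrated values from the study:

```python
def ice_jam_predisposition(factors, weights):
    """Weighted combination of morphological factors into a normalised
    predisposition index, then a three-class rating. Weights and the
    high/medium thresholds are hypothetical placeholders.
    """
    index = sum(weights[f] * float(v) for f, v in factors.items())
    score = index / sum(weights.values())  # normalise to [0, 1]
    if score >= 0.6:
        return score, "high"
    if score >= 0.3:
        return score, "medium"
    return score, "low"

WEIGHTS = {"island": 2, "narrowing": 3, "sinuosity": 2,
           "bridge": 1, "confluence": 2, "slope_break": 3}

# One illustrative channel segment with four of the six factors present.
segment = {"island": True, "narrowing": True, "sinuosity": False,
           "bridge": False, "confluence": True, "slope_break": True}
score, rating = ice_jam_predisposition(segment, WEIGHTS)
```

    Evaluating this over every regular-spaced segment of a channel yields a map of ratings that can then be checked against historical jam sites, as done in the validation step above.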

  3. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.

  4. Modeling of the metallic port in breast tissue expanders for photon radiotherapy.

    PubMed

    Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui

    2018-03-30

    The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to be 7.5 g/cm^3, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found the TPS-calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated based on the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while the port introduced a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after the treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  5. Dog leukocyte antigen class II-associated genetic risk testing for immune disorders of dogs: simplified approaches using Pug dog necrotizing meningoencephalitis as a model.

    PubMed

    Pedersen, Niels; Liu, Hongwei; Millon, Lee; Greer, Kimberly

    2011-01-01

    A significantly increased risk for a number of autoimmune and infectious diseases in purebred and mixed-breed dogs has been associated with certain alleles or allele combinations of the dog leukocyte antigen (DLA) class II complex containing the DRB1, DQA1, and DQB1 genes. The exact level of risk depends on the specific disease, the alleles in question, and whether alleles exist in a homozygous or heterozygous state. The gold standard for identifying high-risk alleles and their zygosity has involved direct sequencing of the exon 2 regions of each of the 3 genes. However, sequencing and identification of specific alleles at each of the 3 loci are relatively expensive and sequencing techniques are not ideal for additional parentage or identity determination. However, it is often possible to get the same information from sequencing only 1 gene given the small number of possible alleles at each locus in purebred dogs, extensive homozygosity, and tendency for disease-causing alleles at each of the 3 loci to be strongly linked to each other into haplotypes. Therefore, genetic testing in purebred dogs with immune diseases can often be simplified by sequencing alleles at 1 rather than 3 loci. Further simplification of genetic tests for canine immune diseases can be achieved by the use of alternative genetic markers in the DLA class II region that are also strongly linked with the disease genotype. These markers consist of either simple tandem repeats or single nucleotide polymorphisms that are also in strong linkage with specific DLA class II genotypes and/or haplotypes. The current study uses necrotizing meningoencephalitis of Pug dogs as a paradigm to assess simple alternative genetic tests for disease risk.
It was possible to attain identical necrotizing meningoencephalitis risk assessments to 3-locus DLA class II sequencing by sequencing only the DQB1 gene, using 3 DLA class II-linked simple tandem repeat markers, or with a small single nucleotide polymorphism array designed to identify breed-specific DQB1 alleles.

  6. [Prediction of postoperative nausea and vomiting using an artificial neural network].

    PubMed

    Traeger, M; Eberhart, A; Geldner, G; Morin, A M; Putzke, C; Wulf, H; Eberhart, L H J

    2003-12-01

    Postoperative nausea and vomiting (PONV) are still frequent side-effects after general anaesthesia. These unpleasant symptoms for the patients can be sufficiently reduced using a multimodal antiemetic approach. However, these efforts should be restricted to patients at risk for PONV. Thus, predictive models are required to identify these patients before surgery. So far all risk scores to predict PONV are based on results of logistic regression analysis. Artificial neural networks (ANN) can also be used for prediction since they can take into account complex and non-linear relationships between predictive variables and the dependent item. This study presents the development of an ANN to predict PONV and compares its performance with two established simplified risk scores (Apfel's and Koivuranta's scores). The development of the ANN was based on data from 1,764 patients undergoing elective surgical procedures under balanced anaesthesia. The ANN was trained with 1,364 datasets and a further 400 were used for supervising the learning process. Of the 49 ANNs trained, the one showing the best predictive performance was compared with the established risk scores with respect to practicability, discrimination (by means of the area under a receiver operating characteristics curve) and calibration properties (by means of a weighted linear regression between the predicted and the actual incidences of PONV). The ANN tested showed a statistically significant (p<0.0001) and clinically relevant higher discriminating power (0.74; 95% confidence interval: 0.70-0.78) than the Apfel score (0.66; 95% CI: 0.61-0.71) or Koivuranta's score (0.69; 95% CI: 0.65-0.74). Furthermore, the agreement between the actual incidences of PONV and those predicted by the ANN was also better and near to an ideal fit, represented by the equation y=1.0x+0. The equations for the calibration curves were: ANN y=1.11x+0, Apfel y=0.71x+1, Koivuranta y=0.86x-5.
    The improved predictive accuracy achieved by the ANN is clinically relevant. However, the main drawback of this approach is that a computer is required for risk calculation. Thus, we still recommend the use of one of the simplified risk scores for clinical practice.
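    For comparison, the Apfel simplified score referenced above counts four risk factors (female sex, nonsmoking status, history of PONV or motion sickness, and expected postoperative opioids) and maps the count to an approximate published incidence. A sketch using the commonly cited incidence table:

```python
# Approximate published PONV incidences for 0-4 Apfel risk factors.
APFEL_RISK = {0: 0.10, 1: 0.21, 2: 0.39, 3: 0.61, 4: 0.79}

def apfel_ponv_risk(female, nonsmoker, history_ponv_or_motion_sickness,
                    postoperative_opioids):
    """Apfel's simplified PONV score: count the four risk factors present
    and look up the approximate incidence for that count.
    """
    count = sum([female, nonsmoker, history_ponv_or_motion_sickness,
                 postoperative_opioids])
    return count, APFEL_RISK[count]

# Example: female nonsmoker, no PONV/motion-sickness history, opioids planned.
count, risk = apfel_ponv_risk(True, True, False, True)
```

    The simplicity of this lookup, computable at the bedside without any device, is exactly the practicability advantage that leads the authors to still recommend the simplified scores despite the ANN's better discrimination.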

  7. A simplified regimen of targeted antifungal prophylaxis in liver transplant recipients: A single-center experience.

    PubMed

    Lavezzo, B; Patrono, D; Tandoi, F; Martini, S; Fop, F; Ballerini, V; Stratta, C; Skurzak, S; Lupo, F; Strignano, P; Donadio, P P; Salizzoni, M; Romagnoli, R; De Rosa, F G

    2018-04-01

    Invasive fungal infection (IFI) is a severe complication of liver transplantation burdened by high mortality. Guidelines recommend targeted rather than universal antifungal prophylaxis based on tiers of risk. We aimed to evaluate IFI incidence, risk factors, and outcome after implementation of a simplified two-tiered targeted prophylaxis regimen based on a single broad-spectrum antifungal drug (amphotericin B). Patients presenting 1 or more risk factors according to the literature were administered prophylaxis. Prospectively collected data on all adult patients transplanted in Turin from January 2011 to December 2015 were reviewed. Patients re-transplanted before postoperative day 7 were considered once, yielding a study cohort of 581 cases. Prophylaxis was administered to 299 (51.4%) patients; adherence to protocol was 94.1%. Sixteen patients developed 18 IFIs for an overall rate of 2.8%. All IFI cases were in the targeted prophylaxis group; none of the non-prophylaxis group developed IFI. Most cases (81.3%) presented within 30 days after transplantation during prophylaxis; predominant pathogens were molds (94.4%). Only 1 case of candidemia was observed. One-year mortality in IFI patients was 33.3% vs 6.4% in patients without IFI (P = .001); IFI-attributable mortality was 6.3%. At multivariate analysis, significant risk factors for IFI were renal replacement therapy (OR = 8.1) and re-operation (OR = 5.2). The implementation of a simplified targeted prophylaxis regimen appeared to be safe and applicable and was associated with low IFI incidence and mortality. Association of IFI with re-operation and renal replacement therapy calls for further studies to identify optimal prophylaxis in this subset of patients. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Prediction of NOx emissions from a simplified biodiesel surrogate by applying stochastic simulation algorithms (SSA)

    NASA Astrophysics Data System (ADS)

    Omidvarborna, Hamid; Kumar, Ashok; Kim, Dong-Shik

    2017-03-01

    A stochastic simulation algorithm (SSA) approach is implemented with the components of a simplified biodiesel surrogate to predict NOx (NO and NO2) emission concentrations from the combustion of biodiesel. The main reaction pathways were obtained by simplifying the previously derived skeletal mechanisms, including saturated methyl decanoate (MD), unsaturated methyl 5-decenoate (MD5D), and n-decane (ND). ND was added to match the energy content and the C/H/O ratio of actual biodiesel fuel. The MD/MD5D/ND surrogate model was also equipped with H2/CO/C1 formation mechanisms and a simplified NOx formation mechanism. The predicted model results are in good agreement with a limited number of experimental data at low-temperature combustion (LTC) conditions for three different biodiesel fuels consisting of various ratios of unsaturated and saturated methyl esters. The root mean square errors (RMSEs) of the predicted values are 0.0020, 0.0018, and 0.0025 for soybean methyl ester (SME), waste cooking oil (WCO), and tallow oil (TO), respectively. The SSA model showed the potential to predict NOx emission concentrations when the peak combustion temperature increased through the addition of ultra-low sulphur diesel (ULSD) to biodiesel. The SSA method used in this study demonstrates the possibility of reducing the computational complexity in biodiesel emissions modelling.
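    The SSA named in the abstract is Gillespie's stochastic simulation algorithm; the actual MD/MD5D/ND mechanism is far too large to reproduce from the abstract, so the sketch below runs the algorithm on a hypothetical single reaction A → B (the species, rate constant, and counts are illustrative assumptions, not the paper's mechanism):

```python
import random

def gillespie(x0, k, t_end, seed=1):
    """Minimal Gillespie SSA for the single reaction A -> B.

    x0    : initial molecule count of species A
    k     : stochastic rate constant (1/s)
    t_end : simulation end time (s)
    Returns the final (time, count of A, count of B).
    """
    rng = random.Random(seed)
    t, n_a, n_b = 0.0, x0, 0
    while n_a > 0:
        propensity = k * n_a               # total event rate
        tau = rng.expovariate(propensity)  # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        n_a -= 1                           # one A converts to B
        n_b += 1
    return t, n_a, n_b
```

A real mechanism would carry one propensity per reaction channel and pick the firing channel proportionally to its propensity; the waiting-time logic is identical.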

  9. Active and Reactive Power Optimal Dispatch Associated with Load and DG Uncertainties in Active Distribution Network

    NASA Astrophysics Data System (ADS)

    Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.

    2017-05-01

    In order to reduce the adverse effects of uncertainty on optimal dispatch in active distribution networks, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.
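    The PSO half of the solver can be sketched in isolation; the network model, chance constraints, and PEM sampling are omitted, and the objective, bounds, and swarm settings below are illustrative assumptions rather than the paper's configuration:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=3):
    """Minimal particle swarm optimiser for a 1-D objective on [lo, hi]."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration weights
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                        # personal best positions
    gbest = min(xs, key=f)               # global best position
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clip to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest
```

In the paper's setting each particle would encode a full dispatch vector and the fitness would be evaluated through the probabilistic (PEM-sampled) network model.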

  10. Effects of human running cadence and experimental validation of the bouncing ball model

    NASA Astrophysics Data System (ADS)

    Bencsik, László; Zelei, Ambrus

    2017-05-01

    The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, like step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
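    The abstract does not spell out the model's equations, but the cadence-impact link it describes can be illustrated with a purely ballistic flight phase (an assumption of this sketch, not necessarily the paper's formulation): for a symmetric flight parabola the body lands with vertical speed g·t_flight/2, so a higher cadence shortens each flight and softens the landing.

```python
G = 9.81  # gravitational acceleration, m/s^2

def landing_speed(cadence_hz, airborne_fraction=0.3):
    """Vertical landing speed of a ballistic ('bouncing ball') flight phase.

    cadence_hz        : steps per second
    airborne_fraction : share of each step period spent airborne (assumed)
    """
    t_flight = airborne_fraction / cadence_hz  # flight time per step
    return G * t_flight / 2.0                  # v_z at touchdown
```

At a fixed airborne fraction, raising cadence from 2.5 to 3.0 steps/s lowers the touchdown speed proportionally, which is the qualitative trend the abstract reports.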

  11. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
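    The identification loop can be sketched with a real-coded genetic algorithm minimising the least-squares voltage error; for brevity the fractional-order element is replaced here by an integer-order 1-RC stand-in, and the parameter bounds, GA settings, and load current are all assumptions of this sketch:

```python
import math, random

def rc_voltage(params, t_grid, i_load, ocv=3.7):
    """Terminal voltage of a 1-RC equivalent circuit under constant discharge.
    params = (R0, R1, C1); i_load > 0 is the discharge current."""
    R0, R1, C1 = params
    return [ocv - i_load * R0 - i_load * R1 * (1 - math.exp(-t / (R1 * C1)))
            for t in t_grid]

def fit_ga(t_grid, v_meas, i_load, pop=40, gens=60, seed=0):
    """Real-coded GA minimising the sum of squared voltage errors."""
    rng = random.Random(seed)
    lo = (1e-3, 1e-3, 100.0)   # lower bounds on (R0, R1, C1)
    hi = (0.1, 0.1, 5000.0)    # upper bounds
    def rand_ind():
        return tuple(rng.uniform(l, h) for l, h in zip(lo, hi))
    def sse(p):
        return sum((v - vm) ** 2
                   for v, vm in zip(rc_voltage(p, t_grid, i_load), v_meas))
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=sse)
        elite = population[: pop // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            # blend crossover with +/-10% multiplicative mutation, clipped
            child = tuple(max(l, min(h, 0.5 * (x + y) * rng.uniform(0.9, 1.1)))
                          for x, y, l, h in zip(a, b, lo, hi))
            children.append(child)
        population = elite + children
    return min(population, key=sse)
```

Fitting the model's own noiseless response recovers a low-error parameter set, mirroring the equivalent-tracking-system idea in the abstract.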

  12. Laboratory-based versus non-laboratory-based method for assessment of cardiovascular disease risk: the NHANES I Follow-up Study cohort

    PubMed Central

    Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael

    2008-01-01

    Background: Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed whether a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods: The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged 25–74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings: In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0·829. The c statistic of the non-laboratory-based model was 0·831. In men, the results were similar (0·784 for the laboratory-based model and 0·783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only.
    Interpretation: A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
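    The c statistic used to compare the two models is the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case (ties counting one half); a minimal sketch of that computation, on illustrative scores rather than the NHANES data:

```python
def c_statistic(scores, outcomes):
    """Concordance (c) statistic for a risk score against binary outcomes.

    Equals the probability that a randomly chosen case (outcome 1) scores
    higher than a randomly chosen non-case (outcome 0); ties count 1/2.
    """
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum((c > n) + 0.5 * (c == n) for c in cases for n in controls)
    return concordant / (len(cases) * len(controls))
```

Scoring the same cohort with a laboratory-based and a non-laboratory-based model and comparing the two c statistics is exactly the comparison the study reports (0·829 vs. 0·831 in women).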

  13. Predicting failure of hematopoietic stem cell mobilization before it starts: the predicted poor mobilizer (pPM) score.

    PubMed

    Olivieri, Jacopo; Attolico, Immacolata; Nuccorini, Roberta; Pascale, Sara Pasquina; Chiarucci, Martina; Poiani, Monica; Corradini, Paolo; Farina, Lucia; Gaidano, Gianluca; Nassi, Luca; Sica, Simona; Piccirillo, Nicola; Pioltelli, Pietro Enrico; Martino, Massimo; Moscato, Tiziana; Pini, Massimo; Zallio, Francesco; Ciceri, Fabio; Marktel, Sarah; Mengarelli, Andrea; Musto, Pellegrino; Capria, Saveria; Merli, Francesco; Codeluppi, Katia; Mele, Giuseppe; Lanza, Francesco; Specchia, Giorgina; Pastore, Domenico; Milone, Giuseppe; Saraceni, Francesco; Di Nardo, Elvira; Perseghin, Paolo; Olivieri, Attilio

    2018-04-01

    Predicting mobilization failure before it starts may enable patient-tailored strategies. Although consensus criteria for the predicted poor mobilizer (pPM) are available, their predictive performance has never been measured on real data. We retrospectively collected and analyzed 1318 mobilization procedures performed for MM and lymphoma patients in the plerixafor era. In our sample, 180/1318 (13.7%) were poor mobilizers (PM). The score resulting from published pPM criteria had sufficient performance for predicting PM, as measured by AUC (0.67, 95%CI: 0.63-0.72). We developed a new prediction model from multivariate analysis whose score (pPM-score) resulted in a better AUC (0.80, 95%CI: 0.76-0.84, p < 0.0001). pPM-score included as risk factors: increasing age, diagnosis of NHL, positive bone marrow biopsy or cytopenias before mobilization, previous mobilization failure, and priming strategy with G-CSF alone or without upfront plerixafor. A simplified version of pPM-score was categorized using a cut-off to maximize the positive likelihood ratio (15.7, 95%CI: 9.9-24.8); specificity was 98% (95%CI: 97-98.7%), sensitivity 31.7% (95%CI: 24.9-39%); the positive predictive value in our sample was 71.3% (95%CI: 60-80.8%). The simplified pPM-score can "rule in" patients at very high risk for PM before starting mobilization, allowing changes in clinical management, such as choice of alternative priming strategies, to avoid highly likely mobilization failure.
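    The "rule-in" behaviour of the cut-off follows directly from the 2x2 table behind these figures; a sketch of the metric computation, run on counts reconstructed to be approximately consistent with the reported rates (the counts themselves are illustrative, not taken from the paper):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, positive likelihood ratio, and PPV
    from the 2x2 confusion table of a binary rule-in score."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # LR+ = sensitivity / (1 - specificity)
    ppv = tp / (tp + fp)
    return sens, spec, lr_pos, ppv
```

With a high-specificity cut-off, even modest sensitivity yields a large LR+, which is what makes a positive score strong rule-in evidence.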

  14. New paradigms for Salmonella source attribution based on microbial subtyping.

    PubMed

    Mughini-Gras, Lapo; Franz, Eelco; van Pelt, Wilfrid

    2018-05-01

    Microbial subtyping is the most common approach for Salmonella source attribution. Typically, attributions are computed using frequency-matching models like the Dutch and Danish models based on phenotyping data (serotyping, phage-typing, and antimicrobial resistance profiling). Herewith, we critically review three major paradigms facing Salmonella source attribution today: (i) the use of genotyping data, particularly Multi-Locus Variable Number of Tandem Repeats Analysis (MLVA), which is replacing traditional Salmonella phenotyping beyond serotyping; (ii) the integration of case-control data into source attribution to improve risk factor identification/characterization; (iii) the investigation of non-food sources, as attributions tend to focus on foods of animal origin only. Population genetics models or simplified MLVA schemes may provide feasible options for source attribution, although there is a strong need to explore novel modelling options as we move towards whole-genome sequencing as the standard. Classical case-control studies are enhanced by incorporating source attribution results, as individuals acquiring salmonellosis from different sources have different associated risk factors. Thus, the more such analyses are performed the better Salmonella epidemiology will be understood. Reparametrizing current models allows for inclusion of sources like reptiles, the study of which improves our understanding of Salmonella epidemiology beyond food to tackle the pathogen in a more holistic way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Comparison of two instruments for assessing risk of postoperative nausea and vomiting.

    PubMed

    Kapoor, Rachna; Hola, Eric T; Adamson, Robert T; Mathis, A Scott

    2008-03-01

    Two instruments for assessing patients' risk of postoperative nausea and vomiting (PONV) were compared. The existing protocol (protocol 1) assessed PONV risk using 16 weighted risk factors and was used for both adults and pediatric patients. The new protocol (protocol 2) included a form for adults and a pediatric-specific form. The form for adults utilized the simplified risk score, calculated using a validated, non-weighted, 4-point scale, and categorized patients' risk of PONV as low, moderate, or high. The form for pediatric patients used a 7-point, non-weighted scale and categorized patients' risk of PONV as moderate or high. A list was generated of all patients who had surgery during August 2005, for whom protocol 1 was used, and during April 2006, for whom protocol 2 was used. Fifty patients from each time period were randomly selected for data analysis. Data collected included the percentage of the form completed, the development of PONV, the number of PONV risk factors, patient demographics, and the appropriateness of prophylaxis. The mean +/- S.D. number of PONV risk factors was significantly lower in the group treated according to protocol 2 (p = 0.001), but fewer patients in this group were categorized as low or moderate risk and more patients were identified as high risk (p < 0.001). More patients assessed by protocol 2 received fewer interventions than recommended (p < 0.001); however, the frequency of PONV did not significantly differ between groups. Implementation of a validated and simplified PONV risk-assessment tool appeared to improve form completion rates and appropriate risk assessment; however, the rates of PONV remained similar and fewer patients received appropriate prophylaxis compared with patients assessed by the existing risk-assessment tool.
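    The validated non-weighted 4-point adult scale described here is consistent with the widely used Apfel simplified score (one point each for female sex, non-smoking status, history of PONV or motion sickness, and planned postoperative opioids); the sketch below assumes that score, and the low/moderate/high band cutoffs are this sketch's assumption rather than the protocol's published thresholds:

```python
def simplified_ponv_score(female, nonsmoker, history_ponv_motion, postop_opioids):
    """Non-weighted 4-point PONV risk score (one point per present factor).

    Returns (score, band); band cutoffs (0-1 low, 2 moderate, 3-4 high)
    are illustrative assumptions.
    """
    score = sum(map(bool, (female, nonsmoker, history_ponv_motion, postop_opioids)))
    band = "low" if score <= 1 else "moderate" if score == 2 else "high"
    return score, band
```

The appeal of such a score over a 16-factor weighted form is exactly what the study tests: completion rates improve because the bedside arithmetic is trivial.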

  16. Dissecting jets and missing energy searches using $n$-body extended simplified models

    DOE PAGES

    Cohen, Timothy; Dolan, Matthew J.; El Hedri, Sonia; ...

    2016-08-04

    Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the $n$-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. In conclusion, this work additionally serves to establish the utility of $n$-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy.

  17. Caries risk assessment in schoolchildren - a form based on Cariogram® software

    PubMed Central

    CABRAL, Renata Nunes; HILGERT, Leandro Augusto; FABER, Jorge; LEAL, Soraya Coelho

    2014-01-01

    Identifying caries risk factors is an important measure which contributes to a better understanding of the cariogenic profile of the patient. The Cariogram® software provides this analysis, and protocols simplifying the method have been suggested. Objectives: The aim of this study was to determine whether a newly developed Caries Risk Assessment (CRA) form based on the Cariogram® software could classify schoolchildren according to their caries risk and to evaluate relationships between caries risk and the variables in the form. Material and Methods: 150 schoolchildren aged 5 to 7 years old were included in this survey. Caries prevalence was obtained according to the International Caries Detection and Assessment System (ICDAS) II. Information for filling in the form based on the Cariogram® was collected clinically and from questionnaires sent to parents. Linear regression and a forward stepwise multiple regression model were applied to correlate the variables included in the form with caries risk. Results: Caries prevalence in primary dentition, including enamel and dentine carious lesions, was 98.6%, and 77.3% when only dentine lesions were considered. Eighty-six percent of the children were classified as at moderate caries risk. The forward stepwise multiple regression model result was significant (R2=0.904; p<0.00001), showing that the most significant factors influencing caries risk were caries experience, oral hygiene, frequency of food consumption, sugar consumption and fluoride sources. Conclusion: The use of the form based on the Cariogram® software enabled classification of the schoolchildren at low, moderate and high caries risk. Caries experience, oral hygiene, frequency of food consumption, sugar consumption and fluoride sources are the variables that were shown to be highly correlated with caries risk. PMID:25466473

  18. A Risk Prediction Index for Advanced Colorectal Neoplasia at Screening Colonoscopy.

    PubMed

    Schroy, Paul C; Wong, John B; O'Brien, Michael J; Chen, Clara A; Griffith, John L

    2015-07-01

    Eliciting patient preferences within the context of shared decision making has been advocated for colorectal cancer screening. Risk stratification for advanced colorectal neoplasia (ACN) might facilitate more effective shared decision making when selecting an appropriate screening option. Our objective was to develop and validate a clinical index for estimating the probability of ACN at screening colonoscopy. We conducted a cross-sectional analysis of 3,543 asymptomatic, mostly average-risk patients 50-79 years of age undergoing screening colonoscopy at two urban safety net hospitals. Predictors of ACN were identified using multiple logistic regression. Model performance was internally validated using bootstrapping methods. The final index consisted of five independent predictors of risk (age, smoking, alcohol intake, height, and a combined sex/race/ethnicity variable). Smoking was the strongest predictor (net reclassification improvement (NRI), 8.4%) and height the weakest (NRI, 1.5%). Using a simplified weighted scoring system based on 0.5 increments of the adjusted odds ratio, the risk of ACN ranged from 3.2% (95% confidence interval (CI), 2.6-3.9) for the low-risk group (score ≤2) to 8.6% (95% CI, 7.4-9.7) for the intermediate/high-risk group (score 3-11). The model had moderate to good overall discrimination (C-statistic, 0.69; 95% CI, 0.66-0.72) and good calibration (P=0.73-0.93). A simple 5-item risk index based on readily available clinical data accurately stratifies average-risk patients into low- and intermediate/high-risk categories for ACN at screening colonoscopy. Uptake into clinical practice could facilitate more effective shared decision-making for CRC screening, particularly in situations where patient and provider test preferences differ.
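    The abstract does not publish the exact point table, so the sketch below shows one plausible reading of the "0.5-increment" weighting (each 0.5 of adjusted odds ratio above 1.0 contributing one point) together with the stated risk strata; the mapping is hypothetical, while the <=2 / 3-11 cutoffs come from the abstract:

```python
def risk_points(adjusted_or):
    """Hypothetical 0.5-increment mapping from an adjusted odds ratio
    to integer score points (each 0.5 of OR above 1.0 = one point)."""
    return max(0, round((adjusted_or - 1.0) / 0.5))

def stratify(total_score):
    """Risk strata reported in the abstract: score <= 2 is low risk,
    3-11 is intermediate/high risk."""
    return "low" if total_score <= 2 else "intermediate/high"
```

Under this mapping a predictor with an adjusted OR of 2.0 contributes two points, and summing the five predictors' points places a patient in one of the two strata.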

  19. [Simplification of crop shortage water index and its application in drought remote sensing monitoring].

    PubMed

    Liu, Anlin; Li, Xingmin; He, Yanbo; Deng, Fengdong

    2004-02-01

    Based on the principle of energy balance, the method for calculating latent evaporation was simplified, and hence the construction of the drought remote sensing monitoring model based on the crop water shortage index was also simplified. Since the modified model involved fewer parameters and reduced computing time, it was more suitable for operational use in routine services. After collecting the relevant meteorological elements and the NOAA/AVHRR image data, the new model was applied to monitor the spring drought in Guanzhong, Shaanxi Province. The results showed that the monitoring results from the new model, which also gave more consideration to the effects of ground coverage conditions and meteorological elements such as wind speed and water vapour pressure, were much better than the results from the vegetation water supply index model. In view of computing time, service effects, and monitoring results, the simplified crop water shortage index model was more suitable for practical use. In addition, the reasons for the abnormal results of CWSI > 1 in some regions in the case studies were also discussed in this paper.
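    Assuming the standard definition of the crop water shortage index, CWSI = 1 − ET/ETp (actual over potential evapotranspiration), the CWSI > 1 anomaly discussed above appears whenever the retrieved actual ET goes negative; a minimal sketch under that assumption:

```python
def cwsi(et_actual, et_potential):
    """Crop water shortage index, CWSI = 1 - ET/ETp.

    0 means water supply fully meets demand; 1 means no evapotranspiration;
    values above 1 can only arise when the retrieved ET is negative, the
    anomaly the paper discusses.
    """
    if et_potential <= 0:
        raise ValueError("potential evapotranspiration must be positive")
    return 1.0 - et_actual / et_potential
```

Pixels where the energy-balance retrieval produces a small negative ET therefore need to be flagged or clipped before mapping drought severity.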

  20. A Micro-Grid Simulator Tool (SGridSim) using Effective Node-to-Node Complex Impedance (EN2NCI) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udhay Ravishankar; Milos Manic

    2013-08-01

    This paper presents a micro-grid simulator tool useful for implementing and testing multi-agent controllers (SGridSim). As a common engineering practice it is important to have a tool that simplifies the modeling of the salient features of a desired system. In electric micro-grids, these salient features are the voltage and power distributions within the micro-grid. Current simplified electric power grid simulator tools such as PowerWorld, PowerSim, Gridlab, etc., model only the power distribution features of a desired micro-grid. Other power grid simulators, such as Simulink, Modelica, etc., use detailed modeling to accommodate the voltage distribution features. This paper presents a SGridSim micro-grid simulator tool that simplifies the modeling of both the voltage and power distribution features in a desired micro-grid. The SGridSim tool accomplishes this simplified modeling by using Effective Node-to-Node Complex Impedance (EN2NCI) models of components that typically make up a micro-grid. The term EN2NCI models means that the impedance-based components of a micro-grid are modeled as single impedances tied between their respective voltage nodes on the micro-grid. Hence the benefits of the presented SGridSim tool are: 1) simulation of a micro-grid is performed strictly in the complex domain; and 2) faster simulation of a micro-grid by avoiding the simulation of detailed transients. An example micro-grid model was built using the SGridSim tool and tested to simulate both the voltage and power distribution features with a total absolute relative error of less than 6%.
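    The complex-domain computation underlying a node-to-node impedance model reduces, per branch, to phasor arithmetic; a minimal sketch (node voltages and impedance values are illustrative, and the function name is this sketch's, not SGridSim's API):

```python
def branch_flow(v_from, v_to, z):
    """Steady-state flow across a single effective node-to-node impedance.

    v_from, v_to : complex node voltage phasors (V)
    z            : complex branch impedance (ohm)
    Returns (branch current, complex power injected at the from-node),
    using I = (V_from - V_to) / Z and S = V * conj(I), so P = Re(S), Q = Im(S).
    """
    i = (v_from - v_to) / z
    s = v_from * i.conjugate()
    return i, s
```

Because everything stays in the phasor (complex) domain, no transient time-stepping is needed, which is the speed advantage the paper claims.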

  1. A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles

    NASA Astrophysics Data System (ADS)

    Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.

    The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. Those simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one dimensional and can be used for optimization processes in order to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.

  2. Application of a simplified theory of ELF propagation to a simplified worldwide model of the ionosphere

    NASA Astrophysics Data System (ADS)

    Behroozi-Toosi, A. B.; Booker, H. G.

    1980-12-01

    The simplified theory of ELF wave propagation in the earth-ionosphere transmission lines developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which involves the comparison of the local vertical refractive index gradient with the local wavelength in order to classify the altitude into regions of low and high gradient, is used for a model of electron and negative ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at a middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect on daytime ELF propagation of a sudden ionospheric disturbance. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method employed gives more physical insights into propagation processes than any other method, while requiring less effort and providing maximal accuracy.

  3. Appropriateness guidelines and predictive rules to select patients for upper endoscopy: a nationwide multicenter study.

    PubMed

    Buri, Luigi; Hassan, Cesare; Bersani, Gianluca; Anti, Marcello; Bianco, Maria Antonietta; Cipolletta, Livio; Di Giulio, Emilio; Di Matteo, Giovanni; Familiari, Luigi; Ficano, Leonardo; Loriga, Pietro; Morini, Sergio; Pietropaolo, Vincenzo; Zambelli, Alessandro; Grossi, Enzo; Intraligi, Marco; Buscema, Massimo

    2010-06-01

    Selecting patients appropriately for upper endoscopy (EGD) is crucial for efficient use of endoscopy. The objective of this study was to compare different clinical strategies and statistical methods to select patients for EGD, namely appropriateness guidelines, age and/or alarm features, and multivariate and artificial neural network (ANN) models. A nationwide, multicenter, prospective study was undertaken in which consecutive patients referred for EGD during a 1-month period were enrolled. Before EGD, the endoscopist assessed referral appropriateness according to the American Society for Gastrointestinal Endoscopy (ASGE) guidelines, also collecting clinical and demographic variables. Outcomes of the study were detection of relevant findings and new diagnosis of malignancy at EGD. The accuracy of the following clinical strategies and predictive rules was compared: (i) ASGE appropriateness guidelines (indicated vs. not indicated), (ii) simplified rule (≥45 years or alarm features vs. <45 years without alarm features), (iii) logistic regression model, and (iv) ANN models. A total of 8,252 patients were enrolled in 57 centers. Overall, 3,803 (46%) relevant findings and 132 (1.6%) new malignancies were detected. Sensitivity, specificity, and area under the receiver-operating characteristic curve (AUC) of the simplified rule were similar to that of the ASGE guidelines for both relevant findings (82%/26%/0.55 vs. 88%/27%/0.52) and cancer (97%/22%/0.58 vs. 98%/20%/0.58). Both logistic regression and ANN models seemed to be substantially more accurate in predicting new cases of malignancy, with an AUC of 0.82 and 0.87, respectively. A simple predictive rule based on age and alarm features is similarly effective to the more complex ASGE guidelines in selecting patients for EGD. Regression and ANN models may be useful in identifying a relatively small subgroup of patients at higher risk of cancer.

  4. Simplifying the complexity of resistance heterogeneity in metastasis

    PubMed Central

    Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.

    2014-01-01

    The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979

  5. Evaluation and simplification of the occupational slip, trip and fall risk-assessment test

    PubMed Central

    NAKAMURA, Takehiro; OYAMA, Ichiro; FUJINO, Yoshihisa; KUBO, Tatsuhiko; KADOWAKI, Koji; KUNIMOTO, Masamizu; ODOI, Haruka; TABATA, Hidetoshi; MATSUDA, Shinya

    2016-01-01

    Objective: The purpose of this investigation is to evaluate the efficacy of the occupational slip, trip and fall (STF) risk assessment test developed by the Japan Industrial Safety and Health Association (JISHA). We further intended to simplify the test to improve efficiency. Methods: A previous cohort study was performed using 540 employees aged ≥50 years who took the JISHA’s STF risk assessment test. We conducted multivariate analysis using these previous results as baseline values and answers to questionnaire items or score on physical fitness tests as variables. The screening efficiency of each model was evaluated based on the obtained receiver operating characteristic (ROC) curve. Results: The area under the ROC obtained in multivariate analysis was 0.79 when using all items. Six of the 25 questionnaire items were selected for stepwise analysis, giving an area under the ROC curve of 0.77. Conclusion: Based on the results of follow-up performed one year after the initial examination, we successfully determined the usefulness of the STF risk assessment test. Administering a questionnaire alone is sufficient for screening subjects at risk of STF during the subsequent one-year period. PMID:27021057

  6. A simplified clinical risk score predicts the need for early endoscopy in non-variceal upper gastrointestinal bleeding.

    PubMed

    Tammaro, Leonardo; Buda, Andrea; Di Paolo, Maria Carla; Zullo, Angelo; Hassan, Cesare; Riccio, Elisabetta; Vassallo, Roberto; Caserta, Luigi; Anderloni, Andrea; Natali, Alessandro

    2014-09-01

    Pre-endoscopic triage of patients who require an early upper endoscopy can improve the management of patients with non-variceal upper gastrointestinal bleeding. To validate a new simplified clinical score (T-score) to assess the need for an early upper endoscopy in non-variceal bleeding patients. Secondary outcomes were the re-bleeding rate and 30-day bleeding-related mortality. In this prospective, multicentre study, patients with bleeding who underwent upper endoscopy were enrolled. The accuracy for high-risk endoscopic stigmata of the T-score was compared with that of the Glasgow Blatchford risk score. Overall, 602 patients underwent early upper endoscopy, and 472 presented with non-variceal bleeding. High-risk endoscopic stigmata were detected in 145 (30.7%) cases. T-score sensitivity and specificity for high-risk endoscopic stigmata and bleeding-related mortality were 96% and 30%, and 80% and 71%, respectively. No statistically significant difference in predicting high-risk endoscopic stigmata between the T-score and the Glasgow Blatchford risk score was observed (ROC curve: 0.72 vs. 0.69, p=0.11). The two scores were also similar in predicting re-bleeding (ROC curve: 0.64 vs. 0.63, p=0.4) and 30-day bleeding-related mortality (ROC curve: 0.78 vs. 0.76, p=0.3). The T-score appeared to predict high-risk endoscopic stigmata, re-bleeding and mortality with accuracy similar to the Glasgow Blatchford risk score. Such a score may be helpful for the prediction of high-risk patients who need very early therapeutic endoscopy. Copyright © 2014 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  7. A Simplified Model of Choice Behavior under Uncertainty

    PubMed Central

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study aims to modify the Ahn et al. (2008) PU model into a simplified model, using the IGT performance of 145 subjects as benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral models may not have been found yet. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
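The role of α and λ can be seen in the standard prospect-utility shape. A sketch assuming the usual power-law PU form (the parameter names follow the abstract, but this is the generic textbook function, not the authors' exact specification):

```python
def prospect_utility(x, alpha, lam):
    """Prospect-type utility: gains valued as x**alpha, losses as
    -lam * |x|**alpha (alpha: outcome sensitivity, lam: loss aversion).
    Illustrative of the PU function discussed in the abstract."""
    if x >= 0:
        return x ** alpha
    return -lam * abs(x) ** alpha

# As alpha -> 0, x**alpha -> 1 for any gain and -lam for any loss: only the
# sign of the outcome survives, which is why lam (and the learning-rate
# parameter A) lose influence and behaviour collapses toward the simple
# gain-stay / loss-shift strategy the study reports.
```
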

  8. Airflow and Particle Transport Through Human Airways: A Systematic Review

    NASA Astrophysics Data System (ADS)

    Kharat, S. B.; Deoghare, A. B.; Pandey, K. M.

    2017-08-01

This paper reviews the relevant literature on two-phase analysis of air and particle flow through human airways. Emphasis is placed on elaborating the two steps involved in such an analysis: geometric modelling methods and mathematical models. The first part describes the various approaches followed for constructing an airway model upon which analyses are conducted. Two broad categories of geometric modelling, viz. simplified modelling and accurate modelling using medical scans, are discussed briefly. The ease and limitations of simplified models are considered, followed by examples of CT-based models. The later part of the review outlines the different mathematical models that researchers have implemented for analysis. Mathematical models for the air and particle phases are elaborated separately.

  9. Spatial exposure-hazard and landscape models for assessing the impact of GM crops on non-target organisms.

    PubMed

    Leclerc, Melen; Walker, Emily; Messéan, Antoine; Soubeyrand, Samuel

    2018-05-15

The cultivation of Genetically Modified (GM) crops may have substantial impacts on populations of non-target organisms (NTOs) in agroecosystems. These impacts should be assessed at larger spatial scales than the cultivated field, and, as landscape-scale experiments are difficult, if not impossible, modelling approaches are needed to address landscape risk management. We present an original stochastic and spatially explicit modelling framework for assessing the risk at the landscape level. We use techniques from spatial statistics for simulating simplified landscapes made up of (aggregated or non-aggregated) GM fields, neutral fields and NTO habitat areas. The dispersal of toxic pollen grains is obtained by convolving the emission of GM plants with validated dispersal kernel functions, while the locations of exposed individuals are drawn from a point process. By taking into account the adherence of ambient pollen on plants, the loss of pollen due to climatic events, and an experimentally-validated mortality-dose function, we predict risk maps and provide a distribution showing how the risk varies across exposed individuals in the landscape. Then, we consider the impact of Bt maize on Inachis io in worst-case scenarios where exposed individuals are located in the vicinity of GM fields and pollen shedding overlaps with larval emergence. We perform a Global Sensitivity Analysis (GSA) to explore numerically how our input parameters influence the risk. Our results confirm the important effects of pollen emission and loss. Most interestingly, they highlight that the optimal spatial distribution of GM fields that mitigates the risk depends on our knowledge of the habitats of NTOs, and, finally, they moderate the influence of the dispersal kernel function. Copyright © 2017 Elsevier B.V. All rights reserved.
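The exposure-hazard chain above (emission convolved with a dispersal kernel, then passed through a mortality-dose function) can be sketched minimally. The exponential kernel, its scale, and the log-logistic dose-response below are placeholders, not the validated functions used in the study:

```python
import math

def exposure_map(sources, grid, scale=50.0):
    """Pollen deposition at each grid point: sum of emissions (sx, sy, q)
    weighted by an exponential dispersal kernel of the distance.
    Kernel form and scale are illustrative assumptions."""
    out = []
    for gx, gy in grid:
        dep = sum(q * math.exp(-math.hypot(gx - sx, gy - sy) / scale)
                  for sx, sy, q in sources)
        out.append(dep)
    return out

def mortality(dose, ld50=10.0, slope=2.0):
    """Hypothetical log-logistic mortality-dose function:
    0 at zero dose, 0.5 at ld50, approaching 1 at high doses."""
    return dose ** slope / (dose ** slope + ld50 ** slope)
```

Composing the two over a simulated landscape yields the risk map; drawing exposed larvae from a point process and repeating gives the within-landscape risk distribution the authors report.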

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

A search for squarks and gluinos in final states containing hadronic jets and missing transverse momentum, but no electrons or muons, is presented. The data were recorded in 2015 by the ATLAS experiment in √s=13 TeV proton–proton collisions at the Large Hadron Collider. No excess above the Standard Model background expectation was observed in 3.2 fb⁻¹ of analyzed data. Results are interpreted within simplified models that assume R-parity is conserved and the neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 1.51 TeV for a simplified model incorporating only a gluino octet and the lightest neutralino, assuming the lightest neutralino is massless. For a simplified model involving the strong production of mass-degenerate first- and second-generation squarks, squark masses below 1.03 TeV are excluded for a massless lightest neutralino. Finally, these limits substantially extend the region of supersymmetric parameter space excluded by previous measurements with the ATLAS detector.

  11. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, remains time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition under which the feature map has an asymptotically equivalent convergence point of estimated parameters; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
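The likelihood computation being simplified here is, for HMMs, the forward algorithm; truncating observation sequences to a fixed length is exactly the kind of feature map the paper analyzes. A minimal sketch with toy parameters (the model values are invented):

```python
def hmm_likelihood(obs, init, trans, emit):
    """Forward-algorithm likelihood of a discrete observation sequence
    under an HMM with initial probabilities `init`, transition matrix
    `trans`, and emission matrix `emit` (rows indexed by state)."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return sum(alpha)

# A "vicarious" feature map in this spirit: keep only the first k symbols.
def truncate(obs, k):
    return obs[:k]
```

The cost of `hmm_likelihood` is linear in sequence length, so a vicarious map that caps the length at the derived necessary value cuts the per-iteration cost of model selection without (asymptotically) moving the parameter estimates.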

  12. Analysis of simplified heat transfer models for thermal property determination of nano-film by TDTR method

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Chen, Zhe; Sun, Fangyuan; Zhang, Hang; Jiang, Yuyan; Tang, Dawei

    2018-03-01

Heat transfer in nanostructures is of critical importance for a wide range of applications such as functional materials and thermal management of electronics. Time-domain thermoreflectance (TDTR) has proved to be a reliable measurement technique for determining the thermal properties of nanoscale structures. However, it is difficult to determine more than three thermal properties at the same time. Simplifying the heat transfer model reduces the number of fitting variables and provides an alternative route to thermal property determination. In this paper, two simplified models are investigated and analyzed by the transfer matrix method and simulations. TDTR measurements are performed on Al-SiO2-Si samples with different SiO2 thicknesses. Both theoretical and experimental results show that the simplified tri-layer model (STM) is reliable and suitable for thin-film samples over a wide range of thicknesses. Furthermore, the STM can also extract the intrinsic thermal conductivity and interfacial thermal resistance from serial samples with different thicknesses.

  13. A Simplified Model for the Effect of Weld-Induced Residual Stresses on the Axial Ultimate Strength of Stiffened Plates

    NASA Astrophysics Data System (ADS)

    Chen, Bai-Qiao; Guedes Soares, C.

    2018-03-01

    The present work investigates the compressive axial ultimate strength of fillet-welded steel-plated ship structures subjected to uniaxial compression, in which the residual stresses in the welded plates are calculated by a thermo-elasto-plastic finite element analysis that is used to fit an idealized model of residual stress distribution. The numerical results of ultimate strength based on the simplified model of residual stress show good agreement with those of various methods including the International Association of Classification Societies (IACS) Common Structural Rules (CSR), leading to the conclusion that the simplified model can be effectively used to represent the distribution of residual stresses in steel-plated structures in a wide range of engineering applications. It is concluded that the widths of the tension zones in the welded plates have a quasi-linear behavior with respect to the plate slenderness. The effect of residual stress on the axial strength of the stiffened plate is analyzed and discussed.

  14. Constraint Logic Programming approach to protein structure prediction.

    PubMed

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

The protein structure prediction problem is one of the most challenging problems in the biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. In a test implementation, their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique such as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in rapid software prototyping, in the easy encoding of heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  15. Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).

    PubMed

    Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C

    2016-08-01

Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy depends on the induced field intensity. In clinical practice, a software package called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when transducer arrays are adapted to its location. Ideally, a personalized head model could be created for each patient to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared between the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of electric field distribution.

  16. Towards a standard design model for quad-rotors: A review of current models, their accuracy and a novel simplified model

    NASA Astrophysics Data System (ADS)

    Amezquita-Brooks, Luis; Liceaga-Castro, Eduardo; Gonzalez-Sanchez, Mario; Garcia-Salazar, Octavio; Martinez-Vazquez, Daniel

    2017-11-01

Applications based on quad-rotor vehicles (QRVs) are becoming increasingly widespread. Many of these applications require accurate mathematical representations for control design, simulation and estimation. However, there is no consensus on a standardized model for these purposes. In this article a review of the most common elements included in QRV models reported in the literature is presented. This survey shows that some elements are recurrent for typical non-aerobatic QRV applications; in particular, for control design and high-performance simulation. By synthesising the common features of the reviewed models, a standard generic model (SGM) is proposed. The SGM is cast as a typical state-space model without memory-less transformations, a structure which is useful for simulation and controller design. The survey also shows that many QRV applications use simplified representations, which may be considered simplifications of the SGM proposed here. In order to assess the effectiveness of the simplified models, a comprehensive comparison based on digital simulations is presented. With this comparison, it is possible to determine the accuracy of each model under particular operating ranges. Such information is useful for the selection of a model according to a particular application. In addition to the models found in the literature, in this article a novel simplified model is derived. The main characteristics of this model are that its inner dynamics are linear, it has low complexity, and it has a high level of accuracy in all the studied operating ranges, a characteristic previously found only in more complex representations. To complement the article, the main elements of the SGM are evaluated with the aid of experimental data and the computational complexity of all surveyed models is briefly analysed. Finally, the article presents a discussion on how the structural characteristics of the models are useful to suggest particular QRV control structures.
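The flavour of such simplified state-space representations can be shown with a heavily reduced planar quad-rotor: a small-angle linearisation where tilt commands horizontal acceleration and net thrust commands vertical acceleration. This is an illustrative toy model under stated assumptions, not the SGM or the novel model from the article:

```python
def qrv_step(state, u, dt=0.01, g=9.81, m=1.0):
    """One Euler step of a simplified planar quad-rotor model.
    state = (x, vx, z, vz, theta); u = total thrust [N].
    Small-angle assumption: horizontal accel = g * theta;
    attitude theta is held constant (inner attitude loop assumed fast).
    Purely illustrative of a linear-inner-dynamics simplified model."""
    x, vx, z, vz, theta = state
    ax = g * theta        # tilt converts weight into horizontal accel
    az = u / m - g        # net vertical accel from thrust minus gravity
    return (x + vx * dt, vx + ax * dt,
            z + vz * dt, vz + az * dt,
            theta)
```

At hover thrust (u = m*g) and zero tilt, the state is a fixed point; a small constant tilt produces the linear horizontal drift such simplified models use for control design.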

  17. Simplification of antiretroviral therapy: a necessary step in the public health response to HIV/AIDS in resource-limited settings.

    PubMed

    Vitoria, Marco; Ford, Nathan; Doherty, Meg; Flexner, Charles

    2014-01-01

    The global scale-up of antiretroviral therapy (ART) over the past decade represents one of the great public health and human rights achievements of recent times. Moving from an individualized treatment approach to a simplified and standardized public health approach has been critical to ART scale-up, simplifying both prescribing practices and supply chain management. In terms of the latter, the risk of stock-outs can be reduced and simplified prescribing practices support task shifting of care to nursing and other non-physician clinicians; this strategy is critical to increase access to ART care in settings where physicians are limited in number. In order to support such simplification, successive World Health Organization guidelines for ART in resource-limited settings have aimed to reduce the number of recommended options for first-line ART in such settings. Future drug and regimen choices for resource-limited settings will likely be guided by the same principles that have led to the recommendation of a single preferred regimen and will favour drugs that have the following characteristics: minimal risk of failure, efficacy and tolerability, robustness and forgiveness, no overlapping resistance in treatment sequencing, convenience, affordability, and compatibility with anti-TB and anti-hepatitis treatments.

A simplified model for tritium permeation transient predictions when trapping is active

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1994-09-01

This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated time-transient codes: implantation, recombination, diffusion, trapping and thermal-gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to estimate intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
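The "steady state plus interpolating function" idea can be illustrated with the classical trap-free slab permeation problem: the steady-state flux is D·c/L, and a series in the dimensionless time D·t/L² interpolates the transient toward it. This is the textbook lag solution used to convey the structure of the approach, not Longhurst's actual interpolating functions (which also handle trapping):

```python
import math

def permeation_flux(t, D, c_s, L):
    """Downstream permeation flux through a slab of thickness L with fixed
    upstream surface concentration c_s and diffusivity D.
    Steady-state flux D*c_s/L scaled by the classical transient series.
    Illustrative only; trapping and recombination are ignored."""
    j_ss = D * c_s / L
    tau = D * t / L ** 2                      # dimensionless time
    factor = 1.0 + 2.0 * sum(
        (-1) ** n * math.exp(-n * n * math.pi ** 2 * tau)
        for n in range(1, 50))                # truncated series
    return j_ss * max(factor, 0.0)            # clamp truncation noise at t=0
```
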

  19. The New York State risk score for predicting in-hospital/30-day mortality following percutaneous coronary intervention.

    PubMed

    Hannan, Edward L; Farrell, Louise Szypulski; Walford, Gary; Jacobs, Alice K; Berger, Peter B; Holmes, David R; Stamato, Nicholas J; Sharma, Samin; King, Spencer B

    2013-06-01

    This study sought to develop a percutaneous coronary intervention (PCI) risk score for in-hospital/30-day mortality. Risk scores are simplified linear scores that provide clinicians with quick estimates of patients' short-term mortality rates for informed consent and to determine the appropriate intervention. Earlier PCI risk scores were based on in-hospital mortality. However, for PCI, a substantial percentage of patients die within 30 days of the procedure after discharge. New York's Percutaneous Coronary Interventions Reporting System was used to develop an in-hospital/30-day logistic regression model for patients undergoing PCI in 2010, and this model was converted into a simple linear risk score that estimates mortality rates. The score was validated by applying it to 2009 New York PCI data. Subsequent analyses evaluated the ability of the score to predict complications and length of stay. A total of 54,223 patients were used to develop the risk score. There are 11 risk factors that make up the score, with risk factor scores ranging from 1 to 9, and the highest total score is 34. The score was validated based on patients undergoing PCI in the previous year, and accurately predicted mortality for all patients as well as patients who recently suffered a myocardial infarction (MI). The PCI risk score developed here enables clinicians to estimate in-hospital/30-day mortality very quickly and quite accurately. It accurately predicts mortality for patients undergoing PCI in the previous year and for MI patients, and is also moderately related to perioperative complications and length of stay. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
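A linear risk score of this kind works by assigning integer points to each risk factor, summing them, and mapping the total to a predicted mortality rate. The factors, point values, and rates below are invented placeholders to show the mechanism, not New York's actual score:

```python
# Hypothetical sketch of an additive clinical risk score.
POINTS = {"age_over_80": 5, "shock": 9, "emergency": 4, "renal_failure": 3}

# Hypothetical score bands mapped to predicted in-hospital/30-day mortality.
RATE_BY_BAND = [(0, 0.002), (5, 0.01), (10, 0.05), (15, 0.20)]

def risk_score(factors):
    """Sum the point values of the patient's risk factors."""
    return sum(POINTS.get(f, 0) for f in factors)

def predicted_mortality(score):
    """Look up the highest band whose floor the score reaches."""
    rate = RATE_BY_BAND[0][1]
    for floor, r in RATE_BY_BAND:
        if score >= floor:
            rate = r
    return rate
```

The appeal for bedside use is exactly this simplicity: the clinician adds a handful of small integers and reads off an estimate, rather than evaluating a logistic regression.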

  20. Simulation of mercury capture by sorbent injection using a simplified model.

    PubMed

    Zhao, Bingtao; Zhang, Zhongxiao; Jin, Jing; Pan, Wei-Ping

    2009-10-30

Mercury pollution from fossil fuel combustion and solid waste incineration is becoming a worldwide environmental concern. As an effective control technology, powdered sorbent injection (PSI) has been successfully used for mercury capture from flue gas, with the advantages of low cost and easy operation. In order to predict the mercury capture efficiency for PSI more conveniently, a simplified model based on the theory of mass transfer, isothermal adsorption and mass balance is developed in this paper. Comparisons between the theoretical results of this model and the experimental results of Meserole et al. [F.B. Meserole, R. Chang, T.R. Carrey, J. Machac, C.F.J. Richardson, Modeling mercury removal by sorbent injection, J. Air Waste Manage. Assoc. 49 (1999) 694-704] demonstrate that the simplified model provides good predictive accuracy. Moreover, the effects of key parameters including the mass transfer coefficient, sorbent concentration, sorbent physical properties and sorbent adsorption capacity on mercury adsorption efficiency are compared and evaluated. Finally, a sensitivity analysis indicates that the injected sorbent concentration plays the most important role in mercury capture efficiency.
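The mass-transfer-limited structure of such a model can be sketched with a first-order film model: removal rate proportional to the gas-side mass transfer coefficient times the sorbent surface area per unit gas volume. This generic form (far from sorbent saturation) is an assumption chosen to illustrate why sorbent concentration dominates; it is not the paper's exact model:

```python
import math

def capture_efficiency(kg, c_sorb, d_p, rho_p, t):
    """Gas-film mass-transfer sketch of in-flight mercury capture.
    kg: gas-side mass transfer coefficient [m/s]
    c_sorb: sorbent concentration in the gas [kg/m^3]
    d_p, rho_p: particle diameter [m] and density [kg/m^3]
    t: residence time [s]. Assumes spherical particles, no saturation."""
    a = 6.0 * c_sorb / (rho_p * d_p)   # sorbent surface area per gas volume
    return 1.0 - math.exp(-kg * a * t)
```

Because the available area `a` scales linearly with `c_sorb`, doubling the injected sorbent concentration directly steepens the exponential, consistent with the sensitivity result quoted above.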

  1. Simplified and advanced modelling of traction control systems of heavy-haul locomotives

    NASA Astrophysics Data System (ADS)

    Spiryagin, Maksym; Wolfs, Peter; Szanto, Frank; Cole, Colin

    2015-05-01

Improving tractive effort is a very complex task in locomotive design. It requires the development of not only mechanical systems but also power systems, traction machines and traction algorithms. At the initial design stage, traction algorithms can be verified by means of a simulation approach. A simple single-wheelset simulation approach is not sufficient because the full locomotive dynamics are not taken into consideration. Given that many traction control strategies exist, the best solution is to use more advanced approaches for such studies. This paper describes the modelling of a locomotive with a bogie traction control strategy based on a co-simulation approach in order to deliver more accurate results. The simplified and advanced modelling approaches of a locomotive electric power system are compared in this paper in order to answer a fundamental question: what level of modelling complexity is necessary for the investigation of the dynamic behaviour of a heavy-haul locomotive running under traction? The simulation results obtained provide some recommendations on simulation processes and the further implementation of advanced and simplified modelling approaches.

  2. Large deflections and vibrations of a tip pulled beam with variable transversal section

    NASA Astrophysics Data System (ADS)

    Kurka, P.; Izuka, J.; Gonzalez, P.; Teixeira, L. H.

    2016-10-01

The use of long flexible probes in outdoor exploration vehicles, as opposed to short and rigid arms, is a convenient way to grant easier access to regions of scientific interest such as terrain slopes and cliff sides. Longer and taller arms can also provide information from a wider exploration horizon. The drawback of employing long and flexible exploration probes is that their vibration is not easily controlled in real-time operation by means of a simple analytic linear dynamic model. The numerical model required to describe the dynamics of a very long and flexible structure is often very large and slow to converge computationally. The present work proposes a simplified numerical model of a long flexible beam with variable cross section, which is statically deflected by a pulling cable. The paper compares the proposed simplified model with experimental data regarding the static and dynamic characteristics of a beam with variable cross section. The simulations show the effectiveness of the simplified dynamic model employed in an active control loop to suppress tip vibrations of the beam.

  3. Indoor Radon Concentration Related to Different Radon Areas and Indoor Radon Prediction

    NASA Astrophysics Data System (ADS)

    Juhásová Šenitková, Ingrid; Šál, Jiří

    2017-12-01

Indoor radon has been observed in buildings located in areas with different radon risk potential. Preventive measures are based on control of the main potential radon sources (soil gas, building material and supplied water) to avoid building new houses above the recommended indoor radon level of 200 Bq/m3. Estimation of the radon risk (index) of the bedrock at an individual building site when siting a new house, and building protection according to the technical building code, are obligatory. Remedial actions in buildings built in high radon risk areas were carried out principally by unforced ventilation and anti-radon insulation. Significant differences were found in the level of radon concentration between rooms where radon reduction techniques were applied and those where they were not. A mathematical model based on radon exhalation from soil has been developed to describe the physical processes determining indoor radon concentration. The model focuses on combined radon diffusion through the slab and advection through gaps from the sub-slab soil. In this model, radon emanating from building materials is considered not to contribute significantly to indoor radon concentration. Dimensional analysis and Gauss-Newton nonlinear least squares parametric regression were used to simplify the problem, identify essential input variables and find parameter values. A verification case study is presented for real buildings with various underground construction types. The paper thus illustrates a possible mathematical approach to indoor radon concentration prediction.

  4. A Comparison of Two Mathematical Modeling Frameworks for Evaluating Sexually Transmitted Infection Epidemiology.

    PubMed

    Johnson, Leigh F; Geffen, Nathan

    2016-03-01

    Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
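The frequency-dependent assumption criticized above is the familiar one from compartmental epidemic models: incidence proportional to current prevalence. A toy SIS sketch of that assumption (generic textbook model, not the authors' South African simulation):

```python
def sis_step(i, beta, gamma, dt=0.01):
    """One Euler step of a frequency-dependent SIS model.
    i: fraction infected; incidence = beta * (1 - i) * i (proportional to
    prevalence, the simplification the paper examines); recovery rate gamma."""
    return i + (beta * (1.0 - i) * i - gamma * i) * dt

def equilibrium(beta, gamma):
    """Endemic prevalence 1 - gamma/beta when beta > gamma, else 0."""
    return max(0.0, 1.0 - gamma / beta)

def run(beta, gamma, i0=0.01, steps=20000):
    i = i0
    for _ in range(steps):
        i = sis_step(i, beta, gamma)
    return i
```

A network model would instead track who is partnered with whom, so an individual's infection hazard depends on their own partners' status rather than on population prevalence; that is why the two frameworks diverge most for short-duration, curable STIs concentrated in high-activity networks.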

  5. Simplify to survive: prescriptive layouts ensure profitable scaling to 32nm and beyond

    NASA Astrophysics Data System (ADS)

    Liebmann, Lars; Pileggi, Larry; Hibbeler, Jason; Rovner, Vyacheslav; Jhaveri, Tejas; Northrop, Greg

    2009-03-01

The time-to-market-driven need to maintain concurrent process-design co-development, even in the face of discontinuous patterning, process, and device innovation, is reiterated. The escalating design-rule complexity resulting from increasing layout sensitivities in physical and electrical yield, and the resulting risk to profitable technology scaling, is reviewed. Shortcomings in traditional Design for Manufacturability (DfM) solutions are identified and contrasted with the highly successful integrated design-technology co-optimization used for SRAM and other memory arrays. The feasibility of extending memory-style design-technology co-optimization, based on a highly simplified layout environment, to logic chips is demonstrated. Layout density benefits, modeled patterning and electrical yield improvements, as well as substantially improved layout simplicity are quantified in a conventional versus template-based design comparison on a 65nm IBM PowerPC 405 microprocessor core. The adaptability of this highly regularized template-based design solution to different yield concerns and design styles is shown in the extension of this work to 32nm with an increased focus on interconnect redundancy. In closing, the work not covered in this paper, focused on the process side of the integrated process-design co-optimization, is introduced.

  6. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOPs) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from near-elastic response to global dynamic instability with reasonable accuracy and little computational effort.

  7. Learning at work: competence development or competence-stress.

    PubMed

    Paulsson, Katarina; Ivergård, Toni; Hunt, Brian

    2005-03-01

    Changes in work and the ways in which it is carried out bring a need to upgrade workplace knowledge, skills and competencies. In today's workplaces, and for a number of reasons, workloads are higher than ever and stress is a growing concern (Health Risk Soc. 2(2) (2000) 173; Educat. Psychol. Meas. 61(5) (2001) 866). The increased demand for learning brings a risk that learning itself becomes an additional stress factor and thus a risk to health. Our research study is based on the demand-control-support model of Karasek and Theorell (Healthy Work: Stress, Productivity and the Reconstruction of Working Life, Basic Books/Harper, New York, 1990). We have used this model for our own empirical research with the aim of evaluating it in the modern workplace, and our research enables us to expand the model in light of current workplace conditions, especially those relating to learning. We report empirical data from a questionnaire survey of working conditions in two different branches of industry, and we are able to identify differences between companies in terms of working conditions and competence development. We describe and discuss the effects these conditions have on workplace competence development. Our results show that increased worker control of the learning process makes competence development more stimulating, is likely to simplify the work, and reduces (learning-related) stress. It is therefore important that learning at work allows employees to control their learning and also allows time for the process of learning and reflection.
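    The demand-control part of Karasek and Theorell's model sorts jobs into four quadrants by whether psychological demands and decision latitude (control) fall above or below a cutoff. A minimal sketch follows; the 1-5 scale and the cutoff of 3.0 are hypothetical choices for illustration, not values from the study.

    ```python
    def karasek_quadrant(demand: float, control: float, cutoff: float = 3.0) -> str:
        """Classify a job into the four demand-control quadrants of
        Karasek's model. Support is a separate, buffering dimension in the
        full demand-control-support model and is not scored here."""
        if demand >= cutoff:
            return "high strain" if control < cutoff else "active"
        return "passive" if control < cutoff else "low strain"

    print(karasek_quadrant(demand=4.2, control=2.1))  # high strain
    print(karasek_quadrant(demand=4.2, control=4.0))  # active: high demand met with high control
    ```

    On this reading, the study's finding maps to moving learning demands from the "high strain" quadrant toward the "active" quadrant by increasing workers' control over the learning process.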

  8. Search for squarks and gluinos with the ATLAS detector in final states with jets and missing transverse momentum using √s = 8 TeV proton-proton collision data

    NASA Astrophysics Data System (ADS)

    Aad, G.; Abbott, B.; Abdallah, J.; Khalek, S. Abdel; Abdinov, O.; Aben, R.; Abi, B.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Agatonovic-Jovin, T.; Aguilar-Saavedra, J. A.; Agustoni, M.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Verzini, M. J. Alconada; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alio, L.; Alison, J.; Allbrooke, B. M. M.; Allison, L. J.; Allport, P. P.; Almond, J.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Altheimer, A.; Gonzalez, B. Alvarez; Alviggi, M. G.; Amako, K.; Coutinho, Y. Amaral; Amelung, C.; Amidei, D.; Santos, S. P. Amor Dos; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Bella, L. Aperio; Apolle, R.; Arabidze, G.; Aracena, I.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnal, V.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Avolio, G.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Baas, A.; Bacci, C.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Mayes, J. Backus; Badescu, E.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Balek, P.; Balli, F.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Bansal, V.; Bansil, H. 
S.; Barak, L.; Baranov, S. P.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Barreiro, F.; da Costa, J. Barreiro Guimarães; Bartoldus, R.; Barton, A. E.; Bartos, P.; Bartsch, V.; Bassalat, A.; Basye, A.; Bates, R. L.; Batkova, L.; Batley, J. R.; Battaglia, M.; Battistin, M.; Bauer, F.; Bawa, H. S.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, S.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bedikian, S.; Bednyakov, V. A.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, K.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Benary, O.; Benchekroun, D.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Noccioli, E. Benhar; Garcia, J. A. Benitez; Benjamin, D. P.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Berge, D.; Kuutmann, E. Bergeaas; Berger, N.; Berghaus, F.; Beringer, J.; Bernard, C.; Bernat, P.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertsche, D.; Besana, M. I.; Besjes, G. J.; Bessidskaia, O.; Bessner, M. F.; Besson, N.; Betancourt, C.; Bethke, S.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Bieniek, S. P.; Bierwagen, K.; Biesiada, J.; Biglietti, M.; De Mendizabal, J. Bilbao; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boddy, C. R.; Boehler, M.; Boek, T. T.; Bogaerts, J. A.; Bogdanchikov, A. G.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. 
S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Borri, M.; Borroni, S.; Bortfeldt, J.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutouil, S.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brazzale, S. F.; Brelier, B.; Brendlinger, K.; Brennan, A. J.; Brenner, R.; Bressler, S.; Bristow, K.; Bristow, T. M.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Bromberg, C.; Bronner, J.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Brown, J.; de Renstrom, P. A. Bruckman; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bryngemark, L.; Buanes, T.; Buat, Q.; Bucci, F.; Buchholz, P.; Buckingham, R. M.; Buckley, A. G.; Buda, S. I.; Budagov, I. A.; Buehrer, F.; Bugge, L.; Bugge, M. K.; Bulekov, O.; Bundock, A. C.; Burckhart, H.; Burdin, S.; Burghgrave, B.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Buszello, C. P.; Butler, B.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Byszewski, M.; Urbán, S. Cabrera; Caforio, D.; Cakir, O.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Calvet, D.; Calvet, S.; Toro, R. Camacho; Camarda, S.; Cameron, D.; Caminada, L. M.; Armadans, R. Caminal; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Bret, M. Cano; Cantero, J.; Cantrill, R.; Cao, T.; Garrido, M. D. M. Capeans; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Castaneda-Miranda, E.; Castelli, A.; Gimenez, V. Castillo; Castro, N. F.; Catastini, P.; Catinaccio, A.; Catmore, J. 
R.; Cattai, A.; Cattani, G.; Caughron, S.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerio, B.; Cerny, K.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chalupkova, I.; Chang, P.; Chapleau, B.; Chapman, J. D.; Charfeddine, D.; Charlton, D. G.; Chau, C. C.; Barajas, C. A. Chavez; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, L.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, Y.; Cheplakov, A.; El Moursli, R. Cherkaoui; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiefari, G.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chisholm, A. S.; Chislett, R. T.; Chitan, A.; Chizhov, M. V.; Chouridou, S.; Chow, B. K. B.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciocio, A.; Cirkovic, P.; Citron, Z. H.; Citterio, M.; Ciubancan, M.; Clark, A.; Clark, P. J.; Clarke, R. N.; Cleland, W.; Clemens, J. C.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Cogan, J. G.; Coggeshall, J.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Colon, G.; Compostella, G.; Muiño, P. Conde; Coniavitis, E.; Conidi, M. C.; Connell, S. H.; Connelly, I. A.; Consonni, S. M.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Côté, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Ortuzar, M. Crispin; Cristinziani, M.; Croft, V.; Crosetti, G.; Cuciuc, C.-M.; Donszelmann, T. 
Cuhadar; Cummings, J.; Curatolo, M.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; Da Cunha Sargedas De Sousa, M. J.; Da Via, C.; Dabrowski, W.; Dafinca, A.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Daniells, A. C.; Hoffmann, M. Dano; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J. A.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, E.; Davies, M.; Davignon, O.; Davison, A. R.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Nooij, L.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J. B.; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dechenaux, B.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Pietra, M. Della; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Mattia, A.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; do Vale, M. A. B.; Wemans, A. Do Valle; Doan, T. K. O.; Dobos, D.; Doglioni, C.; Doherty, T.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Dris, M.; Dubbert, J.; Dube, S.; Dubreuil, E.; Duchovni, E.; Duckeck, G.; Ducu, O. 
A.; Duda, D.; Dudarev, A.; Dudziak, F.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Yildiz, H. Duran; Düren, M.; Durglishvili, A.; Dwuznik, M.; Dyndal, M.; Ebke, J.; Edson, W.; Edwards, N. C.; Ehrenfeld, W.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Engelmann, R.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ernis, G.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Favareto, A.; Fayard, L.; Federic, P.; Fedin, O. L.; Fedorko, W.; Fehling-Kaschek, M.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Perez, S. Fernandez; Ferrag, S.; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; de Lima, D. E. Ferreira; Ferrer, A.; Ferrere, D.; Ferretti, C.; Parodi, A. Ferretto; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, J.; Fisher, W. C.; Fitzgerald, E. A.; Flechl, M.; Fleck, I.; Fleischmann, P.; Fleischmann, S.; Fletcher, G. T.; Fletcher, G.; Flick, T.; Floderus, A.; Castillo, L. R. Flores; Bustos, A. C. Florez; Flowerdew, M. J.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; French, S. T.; Friedrich, C.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Torregrosa, E. Fullana; Fulsom, B. 
G.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gandrajula, R. P.; Gao, J.; Gao, Y. S.; Walls, F. M. Garay; Garberson, F.; García, C.; Navarro, J. E. García; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Gatti, C.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gecse, Z.; Gee, C. N. P.; Geerts, D. A. A.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Gemmell, A.; Genest, M. H.; Gentile, S.; George, M.; George, S.; Gerbaudo, D.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giangiobbe, V.; Giannetti, P.; Gianotti, F.; Gibbard, B.; Gibson, S. M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Glonti, G. L.; Goblirsch-Kolb, M.; Goddard, J. R.; Godfrey, J.; Godlewski, J.; Goeringer, C.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Fajardo, L. S. Gomez; Gonçalo, R.; Da Costa, J. Goncalves Pinto Firmino; Gonella, L.; de la Hoz, S. González; Parra, G. Gonzalez; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Gozpinar, S.; Grabas, H. M. X.; Graber, L.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Gray, H. M.; Graziani, E.; Grebenyuk, O. G.; Greenwood, Z. D.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Griffiths, J.; Grillo, A. 
A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grishkevich, Y. V.; Grivaz, J.-F.; Grohs, J. P.; Grohsjean, A.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Groth-Jensen, J.; Grout, Z. J.; Guan, L.; Guescini, F.; Guest, D.; Gueta, O.; Guicheney, C.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Gunther, J.; Guo, J.; Gupta, S.; Gutierrez, P.; Ortiz, N. G. Gutierrez; Gutschow, C.; Guttman, N.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. K.; Haddad, N.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Hall, D.; Halladjian, G.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamer, M.; Hamilton, A.; Hamilton, S.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harper, D.; Harrington, R. D.; Harris, O. M.; Harrison, P. F.; Hartjes, F.; Hasegawa, S.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayashi, T.; Hayden, D.; Hays, C. P.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, L.; Hejbal, J.; Helary, L.; Heller, C.; Heller, M.; Hellman, S.; Hellmich, D.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Hengler, C.; Henrichs, A.; Correia, A. M. Henriques; Henrot-Versille, S.; Hensel, C.; Herbert, G. H.; Jiménez, Y. Hernández; Herrberg-Schubert, R.; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, D.; Hofmann, J. I.; Hohlfeld, M.; Holmes, T. R.; Hong, T. M.; van Huysduynen, L. 
Hooft; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, X.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Hurwitz, M.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikematsu, K.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Inamaru, Y.; Ince, T.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Quiles, A. Irles; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ponce, J. M. Iturbe; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, M.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jakubek, J.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansen, H.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Ji, W.; Jia, J.; Jiang, Y.; Belenguer, M. Jimenez; Jin, S.; Jinaru, A.; Jinnouchi, O.; Joergensen, M. D.; Johansson, K. E.; Johansson, P.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Joshi, K. D.; Jovicevic, J.; Ju, X.; Jung, C. A.; Jungst, R. M.; Jussel, P.; Rozas, A. Juste; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kajomovitz, E.; Kalderon, C. W.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneda, M.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kar, D.; Karakostas, K.; Karastathis, N.; Karnevskiy, M.; Karpov, S. N.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasieczka, G.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Katre, A.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. 
F.; Kazarinov, M. Y.; Keeler, R.; Kehoe, R.; Keil, M.; Keller, J. S.; Kempster, J. J.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Keung, J.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Khodinov, A.; Khomich, A.; Khoo, T. J.; Khoriauli, G.; Khoroshilov, A.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H. Y.; Kim, H.; Kim, S. H.; Kimura, N.; Kind, O.; King, B. T.; King, M.; King, R. S. B.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kittelmann, T.; Kiuchi, K.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Klok, P. F.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Kohlmann, S.; Kohout, Z.; Kohriki, T.; Koi, T.; Kolanoski, H.; Koletsou, I.; Koll, J.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; König, S.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kortner, S.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kreiss, S.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Kruker, T.; Krumnack, N.; Krumshteyn, Z. V.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kuday, S.; Kuehn, S.; Kugel, A.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurochkin, Y. 
A.; Kurumida, R.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; La Rosa, A.; La Rotonda, L.; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Laier, H.; Lambourne, L.; Lammers, S.; Lampen, C. L.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Le, B. T.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Lehmacher, M.; Miotto, G. Lehmann; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leone, R.; Leone, S.; Leonhardt, K.; Leonidopoulos, C.; Leontsinis, S.; Leroy, C.; Lester, C. G.; Lester, C. M.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Lewis, A.; Lewis, G. H.; Leyko, A. M.; Leyton, M.; Li, B.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, S.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Linde, F.; Lindquist, B. E.; Linnemann, J. T.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y.; Livan, M.; Livermore, S. S. A.; Lleres, A.; Merino, J. Llorente; Lloyd, S. L.; Sterzo, F. Lo; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loddenkoetter, T.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Lombardo, V. P.; Long, B. A.; Long, J. D.; Long, R. E.; Lopes, L.; Mateos, D. Lopez; Paredes, B. Lopez; Paz, I. Lopez; Lorenz, J.; Martinez, N. 
Lorenzo; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lungwitz, M.; Lynn, D.; Lysak, R.; Lytken, E.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Miguens, J. Machado; Macina, D.; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeno, M.; Maeno, T.; Magradze, E.; Mahboubi, K.; Mahlstedt, J.; Mahmoud, S.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Mal, P.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyshev, V. M.; Malyukov, S.; Mamuzic, J.; Mandelli, B.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Manfredini, A.; de Andrade Filho, L. Manhaes; Ramos, J. A. Manjarres; Mann, A.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mantifel, R.; Mapelli, L.; March, L.; Marchand, J. F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marjanovic, M.; Marques, C. N.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, B.; Martin, T. A.; Martin, V. J.; dit Latour, B. Martin; Martinez, H.; Martinez, M.; Martin-Haugh, S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massol, N.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazzaferro, L.; Goldrick, G. Mc; Kee, S. P. Mc; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McCubbin, N. A.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Meade, A.; Mechnich, J.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Melachrinos, C.; Garcia, B. R. Mellado; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. 
M.; Mergelmeyer, S.; Meric, N.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Merritt, H.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. A.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Mitsui, S.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Mönig, K.; Monini, C.; Monk, J.; Monnier, E.; Berlingen, J. Montejo; Monticelli, F.; Monzani, S.; Moore, R. W.; Moraes, A.; Morange, N.; Moreno, D.; Llácer, M. Moreno; Morettini, P.; Morgenstern, M.; Morii, M.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Morvaj, L.; Moser, H. G.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, K.; Mueller, T.; Mueller, T.; Muenstermann, D.; Munwes, Y.; Quijada, J. A. Murillo; Murray, W. J.; Musheghyan, H.; Musto, E.; Myagkov, A. G.; Myska, M.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagarkar, A.; Nagasaka, Y.; Nagel, M.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Nanava, G.; Narayan, R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Negri, A.; Negri, G.; Negrini, M.; Nektarijevic, S.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. 
B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforou, N.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolics, K.; Nikolopoulos, K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Norberg, S.; Nordberg, M.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Hanninger, G. Nunes; Nunnemann, T.; Nurse, E.; Nuti, F.; O'Brien, B. J.; O'grady, F.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, M. I.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Ohshima, T.; Okamura, W.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Olchevski, A. G.; Pino, S. A. Olivares; Damazio, D. Oliveira; Garcia, E. Oliver; Olszewski, A.; Olszowska, J.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Barrera, C. Oropeza; Orr, R. S.; Osculati, B.; Ospanov, R.; Otero y Garzon, G.; Otono, H.; Ouchrif, M.; Ouellette, E. A.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pages, A. Pacheco; Aranda, C. Padilla; Pagáčová, M.; Griso, S. Pagan; Paganis, E.; Pahl, C.; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Vazquez, J. G. Panduro; Pani, P.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Hernandez, D. Paredes; Parker, M. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pasqualucci, E.; Passaggio, S.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Patricelli, S.; Pauly, T.; Pearce, J.; Pedersen, M.; Lopez, S. Pedraza; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Peng, H.; Penning, B.; Penwell, J.; Perepelitsa, D. V.; Codina, E. Perez; García-Estañ, M. T. Pérez; Reale, V. 
Perez; Perini, L.; Pernegger, H.; Perrino, R.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petrolo, E.; Petrucci, F.; Pettersson, N. E.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Piegaia, R.; Pignotti, D. T.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinder, A.; Pinfold, J. L.; Pingel, A.; Pinto, B.; Pires, S.; Pitt, M.; Pizio, C.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Poddar, S.; Podlyski, F.; Poettgen, R.; Poggioli, L.; Pohl, D.; Pohl, M.; Polesello, G.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Bueso, X. Portell; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Pralavorio, P.; Pranko, A.; Prasad, S.; Pravahan, R.; Prell, S.; Price, D.; Price, J.; Price, L. E.; Prieur, D.; Primavera, M.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopapadaki, E.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Przysiezniak, H.; Ptacek, E.; Puddu, D.; Pueschel, E.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Qureshi, A.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Randle-Conde, A. S.; Rangel-Smith, C.; Rao, K.; Rauscher, F.; Rave, T. C.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reisin, H.; Relich, M.; Rembser, C.; Ren, H.; Ren, Z. L.; Renaud, A.; Rescigno, M.; Resconi, S.; Rezanova, O. 
L.; Reznicek, P.; Rezvani, R.; Richter, R.; Ridel, M.; Rieck, P.; Rieger, J.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodrigues, L.; Roe, S.; Røhne, O.; Rolli, S.; Romaniouk, A.; Romano, M.; Adam, E. Romero; Rompotis, N.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, M.; Rosendahl, P. L.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, C.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryder, N. C.; Saavedra, A. F.; Sacerdoti, S.; Saddique, A.; Sadeh, I.; Sadrozinski, H. F.-W.; Sadykov, R.; Tehrani, F. Safai; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Saleem, M.; Salek, D.; De Bruin, P. H. Sales; Salihagic, D.; Salnikov, A.; Salt, J.; Ferrando, B. M. Salvachua; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Martinez, V. Sanchez; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, T.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Castillo, I. Santoyo; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sartisohn, G.; Sasaki, O.; Sasaki, Y.; Sauvage, G.; Sauvan, E.; Savard, P.; Savu, D. O.; Sawyer, C.; Sawyer, L.; Saxon, D. H.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. 
I.; Schiavi, C.; Schieck, J.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitt, C.; Schmitt, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schroeder, C.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwegler, Ph.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Schwoerer, M.; Sciacca, F. G.; Scifo, E.; Sciolla, G.; Scott, W. G.; Scuri, F.; Scutti, F.; Searcy, J.; Sedov, G.; Sedykh, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekula, S. J.; Selbach, K. E.; Seliverstov, D. M.; Sellers, G.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Serre, T.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Shochet, M. J.; Short, D.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Shushkevich, S.; Sicho, P.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simoniello, R.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sircar, A.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skottowe, H. P.; Skovpen, K. Yu.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Solans, C. 
A.; Solar, M.; Solc, J.; Soldatov, E. Yu.; Soldevila, U.; Camillocci, E. Solfaroli; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Song, H. Y.; Soni, N.; Sood, A.; Sopczak, A.; Sopko, B.; Sopko, V.; Sorin, V.; Sosebee, M.; Soualah, R.; Soueid, P.; Soukharev, A. M.; South, D.; Spagnolo, S.; Spanò, F.; Spearman, W. R.; Spighi, R.; Spigo, G.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R. D.; Staerz, S.; Stahlman, J.; Stamen, R.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Staszewski, R.; Stavina, P.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stern, S.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, E.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramania, HS.; Subramaniam, R.; Succurro, A.; Sugaya, Y.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, Y.; Svatos, M.; Swedish, S.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tanasijczuk, A. J.; Tannenwald, B. B.; Tannoury, N.; Tapprogge, S.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Delgado, A. Tavares; Tayalati, Y.; Taylor, F. E.; Taylor, G. N.; Taylor, W.; Teischinger, F. A.; Castanheira, M. Teixeira Dias; Teixeira-Dias, P.; Temming, K. K.; Ten Kate, H.; Teng, P. K.; Teoh, J. 
J.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Therhaag, J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Thong, W. M.; Thun, R. P.; Tian, F.; Tibbetts, M. J.; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tiouchichine, E.; Tipton, P.; Tisserant, S.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Topilin, N. D.; Torrence, E.; Torres, H.; Pastor, E. Torró; Toth, J.; Touchard, F.; Tovey, D. R.; Tran, H. L.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Triplett, N.; Trischuk, W.; Trocmé, B.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; True, P.; Trzebinski, M.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Cakir, I. Turk; Turra, R.; Tuts, P. M.; Tykhonov, A.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ueno, R.; Ughetto, M.; Ugland, M.; Uhlenbrock, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Urbaniec, D.; Urquijo, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Gallego, E. Valladolid; Vallecorsa, S.; Ferrer, J. A. Valls; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; Van Der Leeuw, R.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vardanyan, G.; Vari, R.; Varnes, E. 
W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Schroeder, T. Vazquez; Veatch, J.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Perez, M. Villaplana; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Virzi, J.; Vivarelli, I.; Vaque, F. Vives; Vlachos, S.; Vladoiu, D.; Vlasak, M.; Vogel, A.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Milosavljevic, M. Vranjes; Vrba, V.; Vreeswijk, M.; Anh, T. Vu; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Waller, P.; Walsh, B.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Warsinsky, M.; Washbrook, A.; Wasicki, C.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weigell, P.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wendland, D.; Weng, Z.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wijeratne, P. A.; Wildauer, A.; Wildt, M. A.; Wilkens, H. G.; Will, J. Z.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, A.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winter, B. 
T.; Wittgen, M.; Wittig, T.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wright, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wulf, E.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xiao, M.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yamada, M.; Yamaguchi, H.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, U. K.; Yang, Y.; Yanush, S.; Yao, L.; Yao, W.-M.; Yasu, Y.; Yatsenko, E.; Wong, K. H. Yau; Ye, J.; Ye, S.; Yen, A. L.; Yildirim, E.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yurkewicz, A.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; della Porta, G. Zevi; Zhang, D.; Zhang, F.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, X.; Zhang, Z.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, L.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Zinonos, Z.; Ziolkowski, M.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zurzolo, G.; Zutshi, V.; Zwalinski, L.

    2014-09-01

    A search for squarks and gluinos in final states containing high-pT jets, missing transverse momentum and no electrons or muons is presented. The data were recorded in 2012 by the ATLAS experiment in proton-proton collisions at √s = 8 TeV at the Large Hadron Collider, with a total integrated luminosity of 20.3 fb⁻¹. Results are interpreted in a variety of simplified and specific supersymmetry-breaking models assuming that R-parity is conserved and that the lightest neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 1330 GeV for a simplified model incorporating only a gluino and the lightest neutralino. For a simplified model involving the strong production of first- and second-generation squarks, squark masses below 850 GeV (440 GeV) are excluded for a massless lightest neutralino, assuming mass-degenerate (single light-flavour) squarks. In mSUGRA/CMSSM models with tan β = 30, A0 = -2m0 and μ > 0, squarks and gluinos of equal mass are excluded for masses below 1700 GeV. Additional limits are set for non-universal Higgs mass models with gaugino mediation and for simplified models involving the pair production of gluinos, each decaying to a top squark and a top quark, with the top squark decaying to a charm quark and a neutralino. These limits extend the region of supersymmetric parameter space excluded by previous searches with the ATLAS detector.

  9. Putting problem formulation at the forefront of GMO risk analysis.

    PubMed

    Tepfer, Mark; Racovita, Monica; Craig, Wendy

    2013-01-01

    When risk assessment and the broader process of risk analysis are applied to decisions regarding the dissemination of genetically modified organisms (GMOs), the process has a tendency to become remarkably complex. Further, as greater numbers of countries consider authorising the large-scale dissemination of GMOs, and as GMOs with more complex traits reach late stages of development, there has been increasing concern about the burden posed by the complexity of risk analysis. We present here an improved approach for GMO risk analysis that gives a central role to problem formulation. In addition, the risk analysis strategy has been clarified and simplified in order to make rigorously scientific risk assessment and risk analysis more broadly accessible to diverse stakeholder groups.

  10. [Simplified models for analysis of sources of risk and biomechanical overload in craft industries: practical application in confectioners, pasta and pizza makers].

    PubMed

    Placci, M; Cerbai, M

    2011-01-01

    The food industry is of great importance in Italy; it is second only to the engineering sector, involving about 440,000 workers. However, 90% of food businesses have fewer than 10 employees and are exempt from the legal obligation to provide a detailed Risk Assessment Document. The aim of the study was to identify the inconveniences and risks present in the workplaces analyzed, with particular reference to biomechanical risk to the upper limbs and the lumbar spine. This preliminary study, carried out using pre-mapping of inconveniences and risks (5) and the "mini-checklist OCRA" (4), involved 15 small food businesses: bread bakeries, pastry shops, pizzerias and producers of "Piadina" (flat bread). Although with undoubted differences, confectioners, pasta makers, pizza makers and "piadinari" were exposed to similar risks. By analyzing the final graphs, action areas can be identified on which further risk analysis can be carried out. Exposure is mainly related to repetitive movements and manual handling of loads, and the risk of allergy to flour dust is a common occurrence. There are real peaks in customer demand, which inevitably increase work demands and, consequently, biomechanical overload. In future studies it will be interesting to investigate this aspect by studying variations in work demand and the final exposure index for the working day.

  11. Elaborate SMART MCNP Modelling Using ANSYS and Its Applications

    NASA Astrophysics Data System (ADS)

    Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng

    2017-09-01

    An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because modelling by hand is cumbersome. ANSYS has a function for converting the CAD 'stp' format into an MCNP input for the geometry part. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and the assembly weighting factor calculated by DORT, which is a deterministic Sn code.

  12. Peptide folding and aggregation studied using a simplified atomic model

    NASA Astrophysics Data System (ADS)

    Irbäck, Anders

    2005-05-01

    Using an atomic model with a simplified sequence-based potential, the folding properties of several different peptides are studied. Both α-helical (Trp cage, Fs) and β-sheet (GB1p, GB1m2, GB1m3, Betanova, LLM) peptides are considered. The model is able to fold these different peptides for one and the same choice of parameters, and the melting behaviour of the peptides (folded population against temperature) is in very good agreement with experimental data. Furthermore, using the same model with unchanged parameters, the aggregation behaviour of a fibril-forming fragment of the Alzheimer's Aβ peptide is studied, with very promising results.

  13. Effect of spine motion on mobility in quadruped running

    NASA Astrophysics Data System (ADS)

    Chen, Dongliang; Liu, Qi; Dong, Litao; Wang, Hong; Zhang, Qun

    2014-11-01

    Most current running quadruped robots have a similar construction: a stiff body and four compliant legs. Many studies have indicated that a stiff body without spine motion is a main factor limiting robots' mobility. Therefore, investigating spine motion is very important for building robots with better mobility. A planar quadruped robot is designed based on cheetahs' morphology. There is a spinal driving joint in the body of the robot. When the spinal driving joint acts, the robot has spine motion; otherwise, it does not. Six groups of prototype experiments with the robot are carried out to study the effect of spine motion on mobility. In each group, there are two comparative experiments: the spinal driving joint acts in one experiment but not in the other. The results of the prototype experiments indicate that the average speeds of the robot with spine motion are 8.7%-15.9% higher than those of the robot without spine motion. Furthermore, a simplified sagittal-plane model of quadruped mammals is introduced. The simplified model also has a spinal driving joint. Using a process similar to the prototype experiments, six groups of simulation experiments with the simplified model are conducted. The results of the simulation experiments show that the maximum rear-leg horizontal thrusts of the simplified model with spine motion are 68.2%-71.3% larger than those of the simplified model without spine motion. Hence, spine motion can increase the average running speed, and the underlying reason is the improvement in the maximum rear-leg horizontal thrust.

  14. Artificial cell mimics as simplified models for the study of cell biology.

    PubMed

    Salehi-Reyhani, Ali; Ces, Oscar; Elani, Yuval

    2017-07-01

    Living cells are hugely complex chemical systems composed of a milieu of distinct chemical species (including DNA, proteins, lipids, and metabolites) interconnected with one another through a vast web of interactions: this complexity renders the study of cell biology in a quantitative and systematic manner a difficult task. There has been an increasing drive towards the utilization of artificial cells as cell mimics to alleviate this, a development that has been aided by recent advances in artificial cell construction. Cell mimics are simplified cell-like structures, composed from the bottom up with precisely defined and tunable compositions. They allow specific facets of cell biology to be studied in isolation, in a simplified environment where control of variables can be achieved without interference from a living and responsive cell. This mini-review outlines the core principles of this approach and surveys recent key investigations that use cell mimics to address a wide range of biological questions. It also places the field in the context of emerging trends, discusses the associated limitations, and outlines future directions. Impact statement: Recent years have seen an increasing drive to construct cell mimics and use them as simplified experimental models to replicate and understand biological phenomena in a well-defined and controlled system. By summarizing the advances in this burgeoning field, and using case studies as a basis for discussion on the limitations and future directions of this approach, it is hoped that this mini-review will spur others in the experimental biology community to use artificial cells as simplified models with which to probe biological systems.

  15. Measurements of the Exerted Pressure by Pelvic Circumferential Compression Devices

    PubMed Central

    Knops, Simon P; van Riel, Marcel P.J.M; Goossens, Richard H.M; van Lieshout, Esther M.M; Patka, Peter; Schipper, Inger B

    2010-01-01

    Background: Data on the efficacy and safety of non-invasive Pelvic Circumferential Compression Devices (PCCDs) is limited. Tissue damage may occur if a continuous pressure on the skin exceeding 9.3 kPa is sustained for more than two or three hours. The aim of this study was to gain insight into the pressure build-up at the interface by measuring the PCCD-induced pressure when applying pulling forces to three different PCCDs (Pelvic Binder®, SAM-Sling® and T-POD®) in a simplified model. Methods: The resulting exerted pressures were measured at four ‘anatomical’ locations (right, left, posterior and anterior) in a model using a pressure measurement system consisting of pressure cuffs. Results: The exerted pressure varied substantially between the locations as well as between the PCCDs. Maximum pressures ranged from 18.9-23.3 kPa and from 19.2-27.5 kPa at the right and left locations, respectively. Pressures at the posterior location stayed below 18 kPa. At the anterior location, pressures varied markedly between the different PCCDs. Conclusion: Circumferential compression by the different PCCDs produced high pressures at all four locations in the simplified model. Differences in the design and functional characteristics of the PCCDs resulted in different pressure build-up at the four locations. When following the manufacturer’s instructions, the exerted pressure of all three PCCDs tested exceeded the tissue-damage level (9.3 kPa). In case of prolonged use in a clinical situation, this might put patients at risk of developing tissue damage. PMID:20361001
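
    As a rough illustration of the study's safety comparison, the sketch below checks reported peak pressures against the 9.3 kPa tissue-damage threshold cited in the abstract. The numeric values are the upper ends of the ranges reported above; the site labels and their pairing with specific values are simplified for illustration.

```python
# Compare measured peak interface pressures (kPa) against the tissue-damage
# threshold of 9.3 kPa cited in the study. Values are taken from the ranges
# in the abstract; the site-to-value mapping here is illustrative.

DAMAGE_THRESHOLD_KPA = 9.3

peak_pressures_kpa = {
    "right (max of range)":  23.3,
    "left (max of range)":   27.5,
    "posterior (upper bound)": 18.0,
}

for site, p in peak_pressures_kpa.items():
    ratio = p / DAMAGE_THRESHOLD_KPA
    flag = "EXCEEDS" if p > DAMAGE_THRESHOLD_KPA else "ok"
    print(f"{site:25s} {p:5.1f} kPa  {ratio:4.1f}x threshold  {flag}")
```

    All three listed peaks exceed the threshold, consistent with the study's conclusion about prolonged use.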

  16. Seismic analysis of offshore wind turbines on bottom-fixed support structures.

    PubMed

    Alati, Natale; Failla, Giuseppe; Arena, Felice

    2015-02-28

    This study investigates the seismic response of a horizontal axis wind turbine on two bottom-fixed support structures for transitional water depths (30-60 m), a tripod and a jacket, both resting on pile foundations. Fully coupled, nonlinear time-domain simulations on full system models are carried out under combined wind-wave-earthquake loadings, for different load cases, considering fixed and flexible foundation models. It is shown that earthquake loading may cause a significant increase of stress resultant demands, even for moderate peak ground accelerations, and that fully coupled nonlinear time-domain simulations on full system models are essential to capture relevant information on the moment demand in the rotor blades, which cannot be predicted by analyses on simplified models allowed by existing standards. A comparison with some typical design load cases substantiates the need for an accurate seismic assessment in sites at risk from earthquakes. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  17. Simplified stock markets described by number operators

    NASA Astrophysics Data System (ADS)

    Bagarello, F.

    2009-06-01

    In this paper we continue our systematic analysis of the operatorial approach previously proposed in an economic context, and we discuss a mixed toy model of a simplified stock market, i.e. a model in which the price of the shares is given as an input. We deduce the time evolution of the portfolios of the various traders in the market, as well as of other observable quantities. As in a previous paper, we solve the equations of motion by means of a fixed-point-like approximation.
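
    The "fixed-point-like approximation" mentioned above is, at its core, iterating x_{n+1} = g(x_n) until self-consistency. The paper's operator-valued equations of motion are not reproduced here; the following minimal sketch applies the generic scheme to a scalar equation only, as an illustration of the technique:

```python
import math

# Generic fixed-point iteration: solve x = g(x) by repeated application
# of g until successive iterates agree to within a tolerance.
# This is a scalar toy, not the paper's operator equations.

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until |x_{n+1} - x_n| < tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

root = fixed_point(math.cos, 1.0)  # self-consistent solution of x = cos(x)
print(round(root, 6))
```

    Convergence is guaranteed here because |g'(x)| < 1 near the solution; the same self-consistency idea underlies the approximation scheme named in the abstract.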

  18. Impact of Relative Residence Times in Highly Distinct Environments on the Distribution of Heavy Drinkers

    PubMed Central

    Mubayi, Anuj; Greenwood, Priscilla E.; Castillo-Chávez, Carlos; Gruenewald, Paul; Gorman, Dennis M.

    2009-01-01

    Alcohol consumption is a function of social dynamics, environmental contexts, individuals’ preferences and family history. Empirical surveys have focused primarily on identification of risk factors for high-level drinking but have done little to clarify the underlying mechanisms at work. Also, there have been few attempts to apply nonlinear dynamics to the study of these mechanisms and processes at the population level. A simple framework where drinking is modeled as a socially contagious process in low- and high-risk connected environments is introduced. Individuals are classified as light, moderate (assumed mobile), and heavy drinkers. Moderate drinkers provide the link between both environments, that is, they are assumed to be the only individuals drinking in both settings. The focus here is on the effect of moderate drinkers, measured by the proportion of their time spent in “low-” versus “high-” risk drinking environments, on the distribution of drinkers. A simple model within our contact framework predicts that if the relative residence times of moderate drinkers are distributed randomly between low- and high-risk environments, then the proportion of heavy drinkers is likely to be higher than expected. However, the full story, even in a highly simplified setting, is not so simple, because “strong” local social mixing tends to increase high-risk drinking on its own. High levels of social interaction between light and moderate drinkers in low-risk environments can diminish the importance of the distribution of relative drinking times on the prevalence of heavy drinking. PMID:20161388
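
    The two-environment contagion mechanism described above can be sketched as a toy compartmental simulation. The parameter values and functional forms below are illustrative assumptions, not the authors' actual equations; the sketch only demonstrates the qualitative claim that more moderate-drinker time in the high-risk environment raises the equilibrium proportion of heavy drinkers.

```python
# Toy two-environment drinking model: light (L), moderate (M), heavy (H)
# population fractions. Moderate drinkers spend a fraction p_high of their
# time in the high-risk environment; more time there raises the rate of
# progression from moderate to heavy drinking. All rates are hypothetical.

def simulate(p_high, beta_low=0.10, beta_high=0.40, recover=0.05,
             steps=2000, dt=0.1):
    """Return (L, M, H) fractions after forward-Euler integration."""
    L, M, H = 0.6, 0.3, 0.1
    for _ in range(steps):
        # light -> moderate through social contact with drinkers
        lm = beta_low * L * (M + H)
        # moderate -> heavy, faster the more time spent in high-risk settings
        mh = ((1 - p_high) * beta_low + p_high * beta_high) * M * H
        # heavy -> moderate (recovery/treatment)
        hm = recover * H
        L += dt * (-lm)
        M += dt * (lm - mh + hm)
        H += dt * (mh - hm)
    return L, M, H

low = simulate(p_high=0.2)   # moderate drinkers mostly in low-risk settings
high = simulate(p_high=0.8)  # moderate drinkers mostly in high-risk settings
print(f"heavy fraction: {low[2]:.3f} (low) vs {high[2]:.3f} (high)")
```

    The transition terms are constructed to conserve the total population, so the three fractions always sum to one.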

  19. Simplified three-dimensional model provides anatomical insights in lizards' caudal autotomy as printed illustration.

    PubMed

    De Amorim, Joana D C G; Travnik, Isadora; De Sousa, Bernadete M

    2015-03-01

    Lizards' caudal autotomy is a complex and widely employed antipredator mechanism involving thorough anatomical adaptations. Because of its small size and intricate structures, vertebral anatomy is hard to convey clearly to students and researchers from other areas. Three-dimensional models are prodigious tools for unveiling anatomical nuances. Some of the techniques used to create them can produce irregular and complicated forms, which, despite being very accurate, lack didactic uniformity and simplicity. Since both are considered fundamental for comprehension, a simplified model could be the key to improved learning. The model presented here depicts the caudal osteology of Tropidurus itambere and was designed to be concise, in order to be easily assimilated, yet complete, so as not to compromise its informative aspect. The creation process requires only basic skills in manipulating polygons in 3D modeling software, in addition to appropriate knowledge of the structure to be modeled. As references for the modeling, we used microscopic observation and a photograph database of the caudal structures. This way, no advanced laboratory equipment was needed and all biological materials were preserved for future research. Therefore, we propose wider usage of simplified 3D models, both in the classroom and as illustrations for scientific publications.

  20. INTERNAL DOSE AND RESPONSE IN REAL-TIME.

    EPA Science Inventory

    Abstract: Rapid temporal fluctuations in exposure may occur in a number of situations such as accidents or other unexpected acute releases of airborne substances. Often risk assessments overlook temporal exposure patterns under simplifying assumptions such as the use of time-wei...

  1. SIMPLIFYING EVALUATIONS OF GREEN CHEMISTRIES: HOW MUCH INFORMATION DO WE NEED?

    EPA Science Inventory

    Research within the U.S. EPA's National Risk Management Research Laboratory is developing a methodology for the evaluation of green chemistries. This methodology called GREENSCOPE (Gauging Reaction Effectiveness for the Environmental Sustainability of Chemistries with a multi-Ob...

  2. Impact of the revised International Prognostic Scoring System, cytogenetics and monosomal karyotype on outcome after allogeneic stem cell transplantation for myelodysplastic syndromes and secondary acute myeloid leukemia evolving from myelodysplastic syndromes: a retrospective multicenter study of the European Society of Blood and Marrow Transplantation

    PubMed Central

    Koenecke, Christian; Göhring, Gudrun; de Wreede, Liesbeth C.; van Biezen, Anja; Scheid, Christof; Volin, Liisa; Maertens, Johan; Finke, Jürgen; Schaap, Nicolaas; Robin, Marie; Passweg, Jakob; Cornelissen, Jan; Beelen, Dietrich; Heuser, Michael; de Witte, Theo; Kröger, Nicolaus

    2015-01-01

    The aim of this study was to determine the impact of the revised 5-group International Prognostic Scoring System cytogenetic classification on outcome after allogeneic stem cell transplantation in patients with myelodysplastic syndromes or secondary acute myeloid leukemia who were reported to the European Society for Blood and Marrow Transplantation database. A total of 903 patients had sufficient cytogenetic information available at stem cell transplantation to be classified according to the 5-group classification. Poor and very poor risk according to this classification was an independent predictor of shorter relapse-free survival (hazard ratio 1.40 and 2.14), overall survival (hazard ratio 1.38 and 2.14), and significantly higher cumulative incidence of relapse (hazard ratio 1.64 and 2.76), compared to patients with very good, good or intermediate risk. When comparing the predictive performance of a series of Cox models both for relapse-free survival and for overall survival, a model with simplified 5-group cytogenetics (merging very good, good and intermediate cytogenetics) performed best. Furthermore, monosomal karyotype is an additional negative predictor for outcome within patients of the poor, but not the very poor risk group of the 5-group classification. The revised International Prognostic Scoring System cytogenetic classification allows patients with myelodysplastic syndromes to be separated into three groups with clearly different outcomes after stem cell transplantation. Poor and very poor risk cytogenetics were strong predictors of poor patient outcome. The new cytogenetic classification added value to prediction of patient outcome compared to prediction models using only traditional risk factors or the 3-group International Prognostic Scoring System cytogenetic classification. PMID:25552702

  3. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    PubMed Central

    Magnezi, Racheli; Hemi, Asaf; Hemi, Rina

    2016-01-01

    Background: Risk management in health care systems applies to all hospital employees and directors, as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives), to track failure modes and risks, and to offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered that these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources. Methods: A team of eight staff members, accompanied by the Head of the Endocrine Laboratory, was formed for the analysis. The failure mode and effects analysis (FMEA) model was used to analyze the laboratory testing procedure; it was designed to simplify the process steps and to identify and rank possible failures. Results: A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN). For example, the most serious failure was the delay after the samples were collected from the department (RPN = 226.1). Conclusion: This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed. PMID:27980440
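
    The RPN ranking at the heart of FMEA is simply severity × occurrence × detectability, with the three scores often averaged over the scoring team, which is why a reported value such as 226.1 need not be an integer. A minimal sketch follows; the failure modes and scores are illustrative (not the article's actual 23), with the top entry's scores chosen so that it reproduces the reported RPN of 226.1:

```python
# FMEA prioritisation sketch: compute the Risk Priority Number
# (RPN = severity x occurrence x detectability) for each failure mode
# and rank failures from most to least critical. Scores are on a 1-10
# scale and may be team averages, hence non-integer values.

def rpn(severity, occurrence, detection):
    """Risk Priority Number from 1-10 scores (possibly team-averaged)."""
    return severity * occurrence * detection

failure_modes = [
    ("delay after sample collection", 7.9, 6.0, 4.77),  # illustrative scores
    ("sample mislabelled",            9.0, 2.0, 5.00),
    ("centrifuge unavailable",        5.0, 4.0, 3.00),
]

ranked = sorted(failure_modes, key=lambda f: rpn(*f[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name:32s} RPN = {rpn(s, o, d):6.1f}")
```

    Improvement effort is then directed at the top-ranked modes first, as the article does for its four most severe failures.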

  4. Realistic simplified gaugino-higgsino models in the MSSM

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn

    2018-03-01

    We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic, MSSM models whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ, tan β, M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
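
    The "proper matrix diagonalisation" the authors advocate can be sketched for the neutralino sector: build the standard tree-level MSSM neutralino mass matrix from {M1, M2, μ, tan β} and read off physical masses and higgsino content from its eigensystem. The parameter point below is illustrative, not a benchmark from the paper.

```python
import numpy as np

# Tree-level MSSM neutralino mass matrix in the (bino, wino, Hd, Hu) basis,
# diagonalised numerically. Masses and higgsino fractions follow from the
# eigensystem rather than being set by hand.

MZ = 91.19                      # Z boson mass [GeV]
SW2 = 0.2312                    # sin^2(theta_W)
SW, CW = np.sqrt(SW2), np.sqrt(1.0 - SW2)

def neutralino_spectrum(M1, M2, mu, tan_beta):
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    M = np.array([
        [M1,            0.0,           -MZ * cb * SW,  MZ * sb * SW],
        [0.0,           M2,             MZ * cb * CW, -MZ * sb * CW],
        [-MZ * cb * SW,  MZ * cb * CW,  0.0,          -mu],
        [ MZ * sb * SW, -MZ * sb * CW, -mu,            0.0]])
    vals, vecs = np.linalg.eigh(M)       # M is real and symmetric
    order = np.argsort(np.abs(vals))     # sort states by physical mass |m|
    masses = np.abs(vals[order])
    # higgsino fraction of each state: |N_i3|^2 + |N_i4|^2
    higgsino = vecs[2, order]**2 + vecs[3, order]**2
    return masses, higgsino

# Higgsino-like scenario: |mu| well below the gaugino masses M1, M2
masses, hfrac = neutralino_spectrum(M1=1000.0, M2=1000.0, mu=200.0,
                                    tan_beta=10.0)
print(masses.round(1), hfrac.round(2))
```

    For this point the two lightest states come out near |μ| with higgsino fraction close to one, the situation targeted by the paper's higgsino scan.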

  5. A cumulative energy demand indicator (CED), life cycle based, for industrial waste management decision making.

    PubMed

    Puig, Rita; Fullana-I-Palmer, Pere; Baquero, Grau; Riba, Jordi-Roger; Bala, Alba

    2013-12-01

    Life cycle thinking is a good approach to be used for environmental decision-support, although the complexity of Life Cycle Assessment (LCA) studies sometimes prevents their wide use. The purpose of this paper is to show how LCA methodology can be simplified to be more useful for certain applications. In order to improve waste management in Catalonia (Spain), a Cumulative Energy Demand indicator (LCA-based) has been used to obtain four mathematical models to help the government decide whether to prevent or allow a specific waste from leaving the region. The conceptual equations and all the subsequent developments and assumptions made to obtain the simplified models are presented. One of the four models is discussed in detail, presenting the final simplified equation to be subsequently used by the government in decision making. The resulting model has been found to be scientifically robust, simple to implement and, above all, fulfilling its purpose: the limitation of waste transport out of Catalonia unless the waste recovery operations are significantly better and justify this transport. Copyright © 2013. Published by Elsevier Ltd.

  6. Revisiting simplified dark matter models in terms of AMS-02 and Fermi-LAT

    NASA Astrophysics Data System (ADS)

    Li, Tong

    2018-01-01

    We perform an analysis of the simplified dark matter models in the light of cosmic ray observables by AMS-02 and Fermi-LAT. We assume a fermion, scalar or vector dark matter particle with a leptophobic spin-0 mediator that couples only to Standard Model quarks and dark matter via scalar and/or pseudo-scalar bilinears. The propagation and injection parameters of cosmic rays are determined by the observed fluxes of nuclei from AMS-02. We find that the AMS-02 observations are consistent with the dark matter framework within the uncertainties. The AMS-02 antiproton data prefer a dark matter mass of 30 (50) GeV to 5 TeV and require an effective annihilation cross section in the region of 4 × 10⁻²⁷ (7 × 10⁻²⁷) to 4 × 10⁻²⁴ cm³/s for the simplified fermion (scalar and vector) dark matter models. Cross sections below 2 × 10⁻²⁶ cm³/s can evade the constraint from Fermi-LAT dwarf galaxies for a dark matter mass of about 100 GeV.

  7. Indirect detection constraints on s- and t-channel simplified models of dark matter

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim

    2016-09-01

    Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for singular annihilation final states, such as bb̄ or τ⁺τ⁻. In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.

  8. Viewpoint on ISA TR84.0.02--simplified methods and fault tree analysis.

    PubMed

    Summers, A E

    2000-01-01

    ANSI/ISA-S84.01-1996 and IEC 61508 require the establishment of a safety integrity level for any safety instrumented system or safety related system used to mitigate risk. Each stage of design, operation, maintenance, and testing is judged against this safety integrity level. Quantitative techniques can be used to verify whether the safety integrity level is met. ISA-dTR84.0.02 is a technical report under development by ISA, which discusses how to apply quantitative analysis techniques to safety instrumented systems. This paper discusses two of those techniques: (1) Simplified equations and (2) Fault tree analysis.
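
    The flavour of the "simplified equations" technique can be shown with the single-channel textbook case; the failure rate and proof-test interval below are illustrative assumptions, and the actual technical-report equations add terms for redundancy, common cause, and diagnostic coverage.

```python
import math

def pfd_avg_1oo1(lambda_du, proof_test_interval_h):
    """Simplified equation for a single-channel (1oo1) safety function:
    PFDavg ~ lambda_DU * TI / 2, with lambda_DU the dangerous undetected
    failure rate (per hour) and TI the proof-test interval (hours)."""
    return lambda_du * proof_test_interval_h / 2.0

def sil_band(pfd):
    """Low-demand SIL n corresponds to 10^-(n+1) <= PFDavg < 10^-n."""
    return -math.floor(math.log10(pfd)) - 1

# Illustrative numbers: lambda_DU = 1e-6 per hour, annual proof test.
pfd = pfd_avg_1oo1(lambda_du=1.0e-6, proof_test_interval_h=8760.0)
```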

  9. Derivation and validation of a novel risk score for safe discharge after acute lower gastrointestinal bleeding: a modelling study.

    PubMed

    Oakland, Kathryn; Jairath, Vipul; Uberoi, Raman; Guy, Richard; Ayaru, Lakshmana; Mortensen, Neil; Murphy, Mike F; Collins, Gary S

    2017-09-01

    Acute lower gastrointestinal bleeding is a common reason for emergency hospital admission, and identification of patients at low risk of harm, who are therefore suitable for outpatient investigation, is a clinical and research priority. We aimed to develop and externally validate a simple risk score to identify patients with lower gastrointestinal bleeding who could safely avoid hospital admission. We undertook model development with data from the National Comparative Audit of Lower Gastrointestinal Bleeding from 143 hospitals in the UK in 2015. Multivariable logistic regression modelling was used to identify predictors of safe discharge, defined as the absence of rebleeding, blood transfusion, therapeutic intervention, 28 day readmission, or death. The model was converted into a simplified risk scoring system and was externally validated in 288 patients admitted with lower gastrointestinal bleeding (184 safely discharged) from two UK hospitals (Charing Cross Hospital, London, and Hammersmith Hospital, London) that had not contributed data to the development cohort. We calculated C statistics for the new model and did a comparative assessment with six previously developed risk scores. Of 2336 prospectively identified admissions in the development cohort, 1599 (68%) were safely discharged. Age, sex, previous admission for lower gastrointestinal bleeding, rectal examination findings, heart rate, systolic blood pressure, and haemoglobin concentration strongly discriminated safe discharge in the development cohort (C statistic 0.84, 95% CI 0.82-0.86) and in the validation cohort (0.79, 0.73-0.84). Calibration plots showed the new risk score to have good calibration in the validation cohort. The score was better than the Rockall, Blatchford, Strate, BLEED, AIMS65, and NOBLADS scores in predicting safe discharge. A score of 8 or less predicts a 95% probability of safe discharge.
We developed and validated a novel clinical prediction model with good discriminative performance to identify patients with lower gastrointestinal bleeding who are suitable for safe outpatient management, which has important economic and resource implications. Bowel Disease Research Foundation and National Health Service Blood and Transplant. Copyright © 2017 Elsevier Ltd. All rights reserved.
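
    The C statistics quoted above measure pairwise concordance and can be computed directly; the scores and outcomes below are hypothetical illustrations, not the audit data.

```python
def c_statistic(scores, outcomes):
    """Concordance (C) statistic: the probability that a randomly chosen
    case with the outcome scores higher than one without it; ties count
    as half-concordant. Equivalent to the ROC AUC."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Hypothetical risk scores; 1 = adverse outcome (not safely discharged).
scores = [2, 3, 5, 7, 8, 9, 11]
adverse = [0, 0, 1, 0, 1, 0, 1]
auc = c_statistic(scores, adverse)
```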

  10. From pest data to abundance-based risk maps combining eco-physiological knowledge, weather, and habitat variability.

    PubMed

    Lacasella, Federica; Marta, Silvio; Singh, Aditya; Stack Whitney, Kaitlin; Hamilton, Krista; Townsend, Phil; Kucharik, Christopher J; Meehan, Timothy D; Gratton, Claudio

    2017-03-01

    Noxious species, i.e., crop pest or invasive alien species, are major threats to both natural and managed ecosystems. Invasive pests are of special importance, and knowledge about their distribution and abundance is fundamental to minimize economic losses and prioritize management activities. Occurrence models are a common tool used to identify suitable zones and map priority areas (i.e., risk maps) for noxious species management, although they provide a simplified description of species dynamics (i.e., no indication of species density). An alternative is to use abundance models, but translating abundance data into risk maps is often challenging. Here, we describe a general framework for generating abundance-based risk maps using multi-year pest data. We used an extensive data set of 3968 records collected between 2003 and 2013 in Wisconsin during annual surveys of soybean aphid (SBA), an exotic invasive pest in this region. By using an integrative approach, we modelled SBA responses to weather, seasonal, and habitat variability using generalized additive models (GAMs). Our models showed good to excellent performance in predicting SBA occurrence and abundance (TSS = 0.70, AUC = 0.92; R² = 0.63). We found that temperature, precipitation, and growing degree days were the main drivers of SBA trends. In addition, a significant positive relationship between SBA abundance and the availability of overwintering habitats was observed. Our models showed aphid populations were also sensitive to thresholds associated with high and low temperatures, likely related to physiological tolerances of the insects. Finally, the resulting aphid predictions were integrated using a spatial prioritization algorithm ("Zonation") to produce an abundance-based risk map for the state of Wisconsin that emphasized the spatiotemporal consistency and magnitude of past infestation patterns.
This abundance-based risk map can provide information on potential foci of pest outbreaks where scouting efforts and prophylactic measures should be concentrated. The approach we took is general, relatively simple, and can be applied to other species, habitats and geographical areas for which species abundance data and biotic and abiotic data are available. © 2016 by the Ecological Society of America.
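
    Growing degree days, one of the drivers identified above, are straightforward to accumulate from daily temperatures. This is a minimal sketch of the simple averaging method; the base and cap values are illustrative assumptions, not the study's fitted thresholds.

```python
def growing_degree_days(tmins, tmaxs, base=10.0, upper=30.0):
    """Accumulate growing degree days from daily min/max temperatures (C),
    capping highs at `upper` and flooring lows at `base`. The base and cap
    values here are illustrative, not fitted parameters."""
    total = 0.0
    for tmin, tmax in zip(tmins, tmaxs):
        tmax = min(tmax, upper)   # physiological upper threshold
        tmin = max(tmin, base)    # no development below the base temperature
        total += max((tmax + tmin) / 2.0 - base, 0.0)
    return total

gdd = growing_degree_days([8, 12, 15], [18, 25, 32])  # three example days
```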

  11. Prediction of liver disease in patients whose liver function tests have been checked in primary care: model development and validation using population-based observational cohorts.

    PubMed

    McLernon, David J; Donnan, Peter T; Sullivan, Frank M; Roderick, Paul; Rosenberg, William M; Ryder, Steve D; Dillon, John F

    2014-06-02

    To derive and validate a clinical prediction model to estimate the risk of liver disease diagnosis following liver function tests (LFTs) and to convert the model to a simplified scoring tool for use in primary care. Population-based observational cohort study of patients in Tayside Scotland identified as having their LFTs performed in primary care and followed for 2 years. Biochemistry data were linked to secondary care, prescriptions and mortality data to ascertain baseline characteristics of the derivation cohort. A separate validation cohort was obtained from 19 general practices across the rest of Scotland to externally validate the final model. Primary care, Tayside, Scotland. Derivation cohort: LFT results from 310 511 patients. After exclusions (including: patients under 16 years, patients having initial LFTs measured in secondary care, bilirubin >35 μmol/L, liver complications within 6 weeks and history of a liver condition), the derivation cohort contained 95 977 patients with no clinically apparent liver condition. Validation cohort: after exclusions, this cohort contained 11 653 patients. Diagnosis of a liver condition within 2 years. From the derivation cohort (n=95 977), 481 (0.5%) were diagnosed with a liver disease. The model showed good discrimination (C-statistic=0.78). Given the low prevalence of liver disease, the negative predictive values were high. Positive predictive values were low but rose to 20-30% for high-risk patients. This study successfully developed and validated a clinical prediction model and subsequent scoring tool, the Algorithm for Liver Function Investigations (ALFI), which can predict liver disease risk in patients with no clinically obvious liver disease who had their initial LFTs taken in primary care. ALFI can help general practitioners focus referral on a small subset of patients with higher predicted risk while continuing to address modifiable liver disease risk factors in those at lower risk. 
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
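
    A common way to turn a fitted regression model into a bedside scoring tool is to scale the coefficients by the smallest one and round to integers; the coefficients and variable names below are hypothetical illustrations, not the published ALFI weights.

```python
def points_from_coefficients(coefs, base=None):
    """Convert regression coefficients into integer score points by dividing
    each coefficient by the smallest absolute coefficient and rounding.
    One common scheme for simplified scoring tools; inputs are hypothetical."""
    if base is None:
        base = min(abs(b) for b in coefs.values())
    return {name: round(b / base) for name, b in coefs.items()}

coefs = {
    "age_per_decade": 0.21,
    "male_sex": 0.42,
    "raised_GGT": 0.85,
    "low_albumin": 1.05,
}
points = points_from_coefficients(coefs)
```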

  12. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    PubMed

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse Gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, GHG performances of a specific wind turbine can be estimated, thus avoiding the need to undertake an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them with a robust but simple support tool for assessing the environmental performance of energy systems.
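
    The shape of such a simplified model can be sketched directly: once load factor and lifetime are known, GHG intensity is the embodied burden spread over lifetime generation. The embodied-emission total and turbine size below are hypothetical placeholders, not the study's fitted curves.

```python
def ghg_intensity(embodied_t_co2, rated_kw, load_factor, lifetime_years):
    """Lifecycle GHG intensity (gCO2-eq/kWh) as a function of the two key
    parameters identified in the study: load factor and lifetime.
    Embodied emissions and turbine size are illustrative assumptions."""
    lifetime_kwh = rated_kw * load_factor * lifetime_years * 8760.0
    return embodied_t_co2 * 1.0e6 / lifetime_kwh

# Hypothetical 2 MW onshore turbine with 1600 t CO2-eq embodied emissions.
g_per_kwh = ghg_intensity(1600.0, 2000.0, load_factor=0.24, lifetime_years=20.0)
```

A higher load factor or a longer lifetime spreads the same embodied burden over more kilowatt-hours, which is why these two parameters dominate the variability.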

  13. Simplified Physics Based Models Research Topical Report on Task #2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, Srikanta; Ganesh, Priya

    We present a simplified-physics based approach, where only the most important physical processes are modeled, to develop and validate simplified predictive models of CO2 sequestration in deep saline formations. The system of interest is a single vertical well injecting supercritical CO2 into a 2-D layered reservoir-caprock system with variable layer permeabilities. We use a set of well-designed full-physics compositional simulations to understand key processes and parameters affecting pressure propagation and buoyant plume migration. Based on these simulations, we have developed correlations for dimensionless injectivity as a function of the slope of the fractional-flow curve, variance of layer permeability values, and the nature of vertical permeability arrangement. The same variables, along with a modified gravity number, can be used to develop a correlation for the total storage efficiency within the CO2 plume footprint. Similar correlations are also developed to predict the average pressure within the injection reservoir, and the pressure buildup within the caprock.

  14. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    PubMed

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested our tool on the task of protein-protein interaction (PPI) extraction, where it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  15. Fluid-line math model

    NASA Technical Reports Server (NTRS)

    Kandelman, A.; Nelson, D. J.

    1977-01-01

    Simplified mathematical model simulates large hydraulic systems on either analog or digital computers. Models of pumps, servoactuators, reservoirs, accumulators, and valves are connected to generate systems containing six hundred elements.

  16. Classification model based on Raman spectra of selected morphological and biochemical tissue constituents for identification of atherosclerosis in human coronary arteries.

    PubMed

    Peres, Marines Bertolo; Silveira, Landulfo; Zângaro, Renato Amaro; Pacheco, Marcos Tadeu Tavares; Pasqualucci, Carlos Augusto

    2011-09-01

    This study presents the results of Raman spectroscopy applied to the classification of arterial tissue, based on a simplified model using basal morphological and biochemical information extracted from the Raman spectra of arteries. The Raman system uses an 830-nm diode laser, an imaging spectrograph, and a CCD camera. A total of 111 Raman spectra from arterial fragments were used to develop the model, and those spectra were compared to the spectra of collagen, fat cells, smooth muscle cells, calcification, and cholesterol in a linear fit model. Non-atherosclerotic (NA), fatty and fibrous-fatty atherosclerotic plaques (A) and calcified (C) arteries exhibited different spectral signatures related to the different morphological structures present in each tissue type. Discriminant analysis based on Mahalanobis distance was employed to classify the tissue type with respect to the relative intensity of each compound. This model was subsequently tested prospectively in a set of 55 spectra. The simplified diagnostic model showed that cholesterol, collagen, and adipocytes were the tissue constituents that gave the best classification capability and that those changes correlated with histopathology. The simplified model, using spectra obtained from a few tissue morphological and biochemical constituents, showed feasibility by using a small number of variables easily extracted from gross samples.
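
    Discriminant analysis by Mahalanobis distance assigns a spectrum to the class whose centroid is nearest, weighting each component by its spread. A minimal sketch, assuming a diagonal covariance; the constituent coefficients, centroids, and variances are hypothetical, not the paper's fitted values.

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance matrix."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def classify(x, classes):
    """Assign x to the class with the nearest centroid in Mahalanobis terms."""
    return min(classes, key=lambda name: mahalanobis_diag(x, *classes[name]))

# Hypothetical relative intensities (collagen, smooth muscle, cholesterol)
# with per-class centroids and variances.
classes = {
    "non-atherosclerotic": ([0.2, 0.6, 0.1], [0.01, 0.02, 0.01]),
    "atherosclerotic":     ([0.5, 0.3, 0.4], [0.02, 0.02, 0.02]),
}
label = classify([0.45, 0.35, 0.35], classes)
```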

  17. Text-Based Recall and Extra-Textual Generations Resulting from Simplified and Authentic Texts

    ERIC Educational Resources Information Center

    Crossley, Scott A.; McNamara, Danielle S.

    2016-01-01

    This study uses a moving windows self-paced reading task to assess text comprehension of beginning and intermediate-level simplified texts and authentic texts by L2 learners engaged in a text-retelling task. Linear mixed effects (LME) models revealed statistically significant main effects for reading proficiency and text level on the number of…

  18. The development of a Simplified, Effective, Labour Monitoring-to-Action (SELMA) tool for Better Outcomes in Labour Difficulty (BOLD): study protocol.

    PubMed

    Souza, João Paulo; Oladapo, Olufemi T; Bohren, Meghan A; Mugerwa, Kidza; Fawole, Bukola; Moscovici, Leonardo; Alves, Domingos; Perdona, Gleici; Oliveira-Ciabati, Livia; Vogel, Joshua P; Tunçalp, Özge; Zhang, Jim; Hofmeyr, Justus; Bahl, Rajiv; Gülmezoglu, A Metin

    2015-05-26

    The partograph is currently the main tool available to support decision-making of health professionals during labour. However, the rate of appropriate use of the partograph is disappointingly low. Apart from limitations that are associated with partograph use, evidence of positive impact on labour-related health outcomes is lacking. The main goal of this study is to develop a Simplified, Effective, Labour Monitoring-to-Action (SELMA) tool. The primary objectives are: to identify the essential elements of intrapartum monitoring that trigger the decision to use interventions aimed at preventing poor labour outcomes; to develop a simplified, monitoring-to-action algorithm for labour management; and to compare the diagnostic performance of SELMA and partograph algorithms as tools to identify women who are likely to develop poor labour-related outcomes. A prospective cohort study will be conducted in eight health facilities in Nigeria and Uganda (four facilities from each country). All women admitted for vaginal birth will comprise the study population (estimated sample size: 7,812 women). Data will be collected on maternal characteristics on admission, labour events and pregnancy outcomes by trained research assistants at the participating health facilities. Prediction models will be developed to identify women at risk of intrapartum-related perinatal death or morbidity (primary outcomes) throughout the course of labour. These prediction models will be used to assemble a decision-support tool that will be able to suggest the best course of action to avert adverse outcomes during the course of labour. 
To develop this set of prediction models, we will use up-to-date techniques of prognostic research, including identification of important predictors, assigning of relative weights to each predictor, estimation of the predictive performance of the model through calibration and discrimination, and determination of its potential for application using internal validation techniques. This research offers an opportunity to revisit the theoretical basis of the partograph. It is envisioned that the final product would help providers overcome the challenging tasks of promptly interpreting complex labour information and deriving appropriate clinical actions, and thus increase efficiency of the care process, enhance providers' competence and ultimately improve labour outcomes. Please see related articles ' http://dx.doi.org/10.1186/s12978-015-0027-6 ' and ' http://dx.doi.org/10.1186/s12978-015-0028-5 '.

  19. First contact, simplified technology, or risk anticipation? Defining primary health care.

    PubMed

    Frenk, J; González-Block, M A; Alvarez-Manilla, J M

    1990-11-01

    Elements important to defining primary health care (PHC) are discussed, with examples from Latin American countries. Topics are identified as follows: the origins and dilemmas of PHC, conflicting PHC values and practices, organizational changes and PHC, health care reforms and examples from Latin America, and the implications for medical education. The new paradigm for medical education and practice is in the classic Kuhn tradition. A paradigm for health care is an ideological model about the form, content, and organization of health care. There are rules that prescribe in a normative way how resources should be combined to produce health services. The current dominant paradigm is that of curative medicine, and the PHC paradigm assumes that a diversified health care team uses modern technology and resources to actively anticipate health damage and promote well-being. The key word is anticipatory. As a consequence, secondary care also needs to be redefined as actually treating the illness or damage itself. Organizations must be changed to establish this model. Contrasting primary, anticipatory health care with technical, curative medicine has been discussed over at least the past 150 years. An important development was the new model for developing countries that resulted from a symposium on the Medicine of Poverty held at Makerere, Uganda. The Western model of physicians acting independently and in a highly specialized fashion to address each patient's complaints was considered inappropriate. The concern must be for training and supervising auxiliaries, designing cost-effective systems, and a practice mode limited to what can actually be provided to the population. How to adapt this to existing medical systems was left undetermined. In 1978, with the WHO drive for health for all, there emerged different conceptions and models of PHC. 
Conceptually, PHC is realized when services are directed to identifying and modifying risk factors at the collective level, where the health team anticipates and prevents problems through active programming and community participation; in secondary care, by contrast, the doctors wait for the ill patient. Level of care and type of contact are subordinate to PHC. First-contact and first-level facilities are responsible for PHC, although secondary interventions (e.g., prenatal care) are also handled. The best technology should be evaluated in terms of the capacity to anticipate severe, irreversible, or fatal damage. Simplified technology is not primitive technology.

  20. Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.

    PubMed

    Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin

    2015-08-21

    Aiming at the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. DE and SPN are employed to describe the light propagation in these two kinds of tissues respectively, and are finally coupled using the established boundary coupling condition. The HSDE model makes full use of the advantages of SPN and DE while abandoning their disadvantages, so that it can provide a good balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with both regular geometries and digital mouse model based simulations. The results reveal that the HSDE achieves accuracy comparable to the SPN model with much less computation time, as well as much better accuracy than the DE model.

  1. Calibration of a Spatial-Temporal Discrimination Model from Forward, Simultaneous, and Backward Masking

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J.; Beard, B. L.; Stone, Leland (Technical Monitor)

    1997-01-01

    We have been developing a simplified spatial-temporal discrimination model similar to our simplified spatial model in that masking is assumed to be a function of the local visible contrast energy. The overall spatial-temporal sensitivity of the model is calibrated to predict the detectability of targets on a uniform background. To calibrate the spatial-temporal integration functions that define local visible contrast energy, spatial-temporal masking data are required. Observer thresholds were measured (2IFC) for the detection of a 12 msec target stimulus in the presence of a 700 msec mask. Targets were 1, 3 or 9 c/deg sine wave gratings. Masks were either one of these gratings or two of them combined. The target was presented in 17 temporal positions with respect to the mask, including positions before, during and after the mask. Peak masking was found near mask onset and offset for 1 and 3 c/deg targets, while masking effects were more nearly uniform during the mask for the 9 c/deg target. As in the purely spatial case, the simplified model cannot predict all the details of masking as a function of masking component spatial frequencies, but overall the prediction errors are small.

  2. Mono-W dark matter signals at the LHC: simplified model analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, Nicole F.; Cai, Yi; Leane, Rebecca K., E-mail: n.bell@unimelb.edu.au, E-mail: yi.cai@unimelb.edu.au, E-mail: rleane@physics.unimelb.edu.au

    2016-01-01

    We study mono-W signals of dark matter (DM) production at the LHC, in the context of gauge invariant renormalizable models. We analyze two simplified models, one involving an s-channel Z' mediator and the other a t-channel colored scalar mediator, and consider examples in which the DM-quark couplings are either isospin conserving or isospin violating after electroweak symmetry breaking. While previous work on mono-W signals has focused on isospin violating EFTs, obtaining very strong limits, we find that isospin violating effects are small once such physics is embedded into a gauge invariant simplified model. We thus find that the 8 TeV mono-W results are much less constraining than those arising from mono-jet searches. Considering both the leptonic (mono-lepton) and hadronic (mono fat jet) decays of the W, we determine the 14 TeV LHC reach of the mono-W searches with 3000 fb⁻¹ of data. While a mono-W signal would provide an important complement to a mono-jet discovery channel, existing constraints on these models imply it will be a challenging signal to observe at the 14 TeV LHC.

  3. A simplified fragility analysis of fan type cable stayed bridges

    NASA Astrophysics Data System (ADS)

    Khan, R. A.; Datta, T. K.; Ahmad, S.

    2005-06-01

    A simplified fragility analysis of fan type cable stayed bridges using a Probabilistic Risk Analysis (PRA) procedure is presented for determining their failure probability under random ground motion. Seismic input to the bridge support is considered to be a risk consistent response spectrum which is obtained from a separate analysis. For the response analysis, the bridge deck is modelled as a beam supported on springs at different points. The stiffnesses of the springs are determined by a separate 2D static analysis of the cable-tower-deck system. The analysis provides a coupled stiffness matrix for the spring system. A continuum method of analysis using dynamic stiffness is used to determine the dynamic properties of the bridges. The response of the bridge deck is obtained by the response spectrum method of analysis as applied to a multidegree of freedom system, which duly takes into account the quasi-static component of bridge deck vibration. The fragility analysis includes uncertainties arising due to the variation in ground motion, material property, modeling, method of analysis, ductility factor and damage concentration effect. Probability of failure of the bridge deck is determined by the First Order Second Moment (FOSM) method of reliability. A three span double plane symmetrical fan type cable stayed bridge with a total span of 689 m is used as an illustrative example. The fragility curves for the bridge deck failure are obtained under a number of parametric variations. Some of the important conclusions of the study indicate that (i) not only the vertical component but also the horizontal component of ground motion has a considerable effect on the probability of failure; (ii) ground motion with no time lag between support excitations gives a smaller probability of failure than ground motion with a very large time lag between support excitations; and (iii) the probability of failure may increase considerably under soft soil conditions.
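
    The FOSM step reduces to a reliability index for the limit state g = capacity − demand. A minimal sketch, assuming independent normally distributed variables; the capacity/demand statistics below are illustrative, not the bridge model's values.

```python
import math

def fosm_failure_probability(mean_cap, sd_cap, mean_dem, sd_dem):
    """First Order Second Moment reliability for g = capacity - demand with
    independent variables: beta = mu_g / sigma_g, Pf = Phi(-beta), where Phi
    is the standard normal CDF (built here from math.erf)."""
    mu_g = mean_cap - mean_dem
    sigma_g = math.sqrt(sd_cap ** 2 + sd_dem ** 2)
    beta = mu_g / sigma_g
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Illustrative capacity/demand statistics (units arbitrary); beta ~ 2.2.
pf = fosm_failure_probability(100.0, 10.0, 60.0, 15.0)
```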

  4. Different risk scores consider different types of risks: the deficiencies of the 2015 ESPEN consensus on diagnostic criteria for malnutrition.

    PubMed

    Xu, Jingyong; Jiang, Zhuming

    2018-03-02

    In 2015, a European Society for Parenteral and Enteral Nutrition malnutrition diagnosis consensus was published to unify the definition and simplify the diagnostic procedure of malnutrition, in which 'nutritional risk', 'malnutrition risk' and 'at risk of malnutrition' were referred to several times, and 'at risk of malnutrition' was encouraged to be coded and reimbursed in the International Classification of Diseases and diagnosis-related group systems. However, there may be some mistakes when using the different concepts of 'risk' mentioned above. In this study, we aimed to explain the different 'risks' using the original concepts defined by different screening tools, to clarify the definitions and provide a recommendation for nutritional screening.

  5. Expert opinion on landslide susceptibility elicited by probabilistic inversion from scenario rankings

    NASA Astrophysics Data System (ADS)

    Lee, Katy; Dashwood, Claire; Lark, Murray

    2016-04-01

    For many natural hazards the opinion of experts, with experience in assessing susceptibility under different circumstances, is a valuable source of information on which to base risk assessments. This is particularly important where incomplete process understanding and limited data limit the scope to predict susceptibility by mechanistic or statistical modelling. The expert has a tacit model of a system, based on their understanding of processes and their field experience. This model may vary in quality, depending on the experience of the expert. There is considerable interest in how one may elicit expert understanding by a process which is transparent and robust, to provide a basis for decision support. One approach is to provide experts with a set of scenarios, and then to ask them to rank small overlapping subsets of these with respect to susceptibility. Methods of probabilistic inversion have been used to compute susceptibility scores for each scenario, implicit in the expert ranking. It is also possible to model these scores as functions of measurable properties of the scenarios. This approach has been used to assess the susceptibility of animal populations to invasive diseases, to assess risk to vulnerable marine environments, and to assess the risk in hypothetical novel technologies for food production. We will present the results of a study in which a group of geologists with varying degrees of expertise in assessing landslide hazards were asked to rank sets of hypothetical simplified scenarios with respect to landslide susceptibility. We examine the consistency of their rankings and the importance of different properties of the scenarios in the tacit susceptibility model that their rankings implied. Our results suggest that this is a promising approach to the problem of how experts can communicate their tacit model of uncertain systems to those who want to make use of their expertise.

  6. Assessing risk of breast cancer in an ethnically South-East Asia population (results of a multiple ethnic groups study).

    PubMed

    Gao, Fei; Machin, David; Chow, Khuan-Yew; Sim, Yu-Fan; Duffy, Stephen W; Matchar, David B; Goh, Chien-Hui; Chia, Kee-Seng

    2012-11-19

    Gail and others developed a model (GAIL) using age-at-menarche, age-at-birth of first live child, number of previous benign breast biopsy examinations, and number of first-degree relatives with breast cancer, as well as baseline age-specific breast cancer risks, for predicting the 5-year risk of invasive breast cancer in Caucasian women. However, the validity of the model for projecting risk in South-East Asian women is uncertain. We evaluated GAIL and attempted to improve its performance for Singapore women of Chinese, Malay and Indian origins. Data from the Singapore Breast Screening Programme (SBSP) were used. Motivated by the lower breast cancer incidence in many Asian countries, we utilised race-specific invasive breast cancer and other-cause mortality rates for Singapore women to produce GAIL-SBSP. Using risk factor information from a nested case-control study within SBSP, alternative models incorporating fewer or additional risk factors were determined. Their accuracy was assessed by comparing the expected number of cases (E) with the observed number (O) through the ratio (E/O) and its 95% confidence interval (CI), and the respective concordance statistics were estimated. From 28,883 women, GAIL-SBSP predicted 241.83 cases during the 5-year follow-up while 241 were reported (E/O = 1.00, CI = 0.88 to 1.14). Except for women who had two or more first-degree relatives with breast cancer, satisfactory prediction was present in almost all risk categories. This agreement was reflected in Chinese and Malay, but not in Indian, women. We also found that a simplified model (S-GAIL-SBSP), including only age-at-menarche, age-at-birth of first live child and number of first-degree relatives, performed similarly, with an associated concordance statistic of 0.5997. Taking account of body mass index and parity did not improve the calibration of S-GAIL-SBSP. GAIL can be refined by using national race-specific invasive breast cancer rates and mortality rates for causes other than breast cancer. A revised model containing only three variables (S-GAIL-SBSP) provides a simpler approach for projecting the absolute risk of invasive breast cancer in South-East Asian women. Nevertheless, its role in counselling individual women regarding their risk of breast cancer remains problematic and needs to be validated in independent data.
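
    The E/O calibration check reported above can be approximated by treating the observed count as Poisson; the sketch below uses a normal approximation to the Poisson count, so the bounds differ slightly from the published (presumably exact) interval:

```python
from math import sqrt

def e_over_o_with_ci(expected: float, observed: int, z: float = 1.96):
    """Calibration ratio E/O with an approximate 95% CI obtained by
    treating the observed count O as Poisson (normal approximation:
    O within z*sqrt(O) of its mean)."""
    ratio = expected / observed
    half = z * sqrt(observed)
    lower = expected / (observed + half)
    upper = expected / (observed - half)
    return ratio, lower, upper

# The study's figures: 241.83 expected vs. 241 observed cases
ratio, lo, hi = e_over_o_with_ci(241.83, 241)
print(round(ratio, 2), round(lo, 2), round(hi, 2))  # close to the reported 1.00 (0.88 to 1.14)
```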

  7. Assessing risk of breast cancer in an ethnically South-East Asia population (results of a multiple ethnic groups study)

    PubMed Central

    2012-01-01

    Background Gail and others developed a model (GAIL) using age-at-menarche, age-at-birth of first live child, number of previous benign breast biopsy examinations, and number of first-degree relatives with breast cancer, as well as baseline age-specific breast cancer risks, for predicting the 5-year risk of invasive breast cancer in Caucasian women. However, the validity of the model for projecting risk in South-East Asian women is uncertain. We evaluated GAIL and attempted to improve its performance for Singapore women of Chinese, Malay and Indian origins. Methods Data from the Singapore Breast Screening Programme (SBSP) were used. Motivated by the lower breast cancer incidence in many Asian countries, we utilised race-specific invasive breast cancer and other-cause mortality rates for Singapore women to produce GAIL-SBSP. Using risk factor information from a nested case-control study within SBSP, alternative models incorporating fewer or additional risk factors were determined. Their accuracy was assessed by comparing the expected number of cases (E) with the observed number (O) through the ratio (E/O) and its 95% confidence interval (CI), and the respective concordance statistics were estimated. Results From 28,883 women, GAIL-SBSP predicted 241.83 cases during the 5-year follow-up while 241 were reported (E/O = 1.00, CI = 0.88 to 1.14). Except for women who had two or more first-degree relatives with breast cancer, satisfactory prediction was present in almost all risk categories. This agreement was reflected in Chinese and Malay, but not in Indian, women. We also found that a simplified model (S-GAIL-SBSP), including only age-at-menarche, age-at-birth of first live child and number of first-degree relatives, performed similarly, with an associated concordance statistic of 0.5997. Taking account of body mass index and parity did not improve the calibration of S-GAIL-SBSP. Conclusions GAIL can be refined by using national race-specific invasive breast cancer rates and mortality rates for causes other than breast cancer. A revised model containing only three variables (S-GAIL-SBSP) provides a simpler approach for projecting the absolute risk of invasive breast cancer in South-East Asian women. Nevertheless, its role in counselling individual women regarding their risk of breast cancer remains problematic and needs to be validated in independent data. PMID:23164155

  8. A user-oriented and computerized model for estimating vehicle ride quality

    NASA Technical Reports Server (NTRS)

    Leatherwood, J. D.; Barker, L. M.

    1984-01-01

    A simplified empirical model and computer program for estimating passenger ride comfort within air and surface transportation systems are described. The model is based on subjective ratings from more than 3000 persons who were exposed to controlled combinations of noise and vibration in the passenger ride quality apparatus. The model can transform individual elements of a vehicle's noise and vibration environment into subjective discomfort units and then combine the subjective units to produce a single discomfort index typifying passenger acceptance of the environment. The computational procedures required to obtain discomfort estimates are discussed, and a user-oriented ride comfort computer program is described. Examples illustrating application of the simplified model to helicopter and automobile ride environments are presented.

  9. Development of a model for on-line control of crystal growth by the AHP method

    NASA Astrophysics Data System (ADS)

    Gonik, M. A.; Lomokhova, A. V.; Gonik, M. M.; Kuliev, A. T.; Smirnov, A. D.

    2007-05-01

    The possibility of applying a simplified 2D model for heat transfer calculations in crystal growth by the axial heat flux close to the phase interface (AHP) method is discussed in this paper. A comparison with global heat transfer calculations performed with the CGSim software was made to confirm the accuracy of this model. The simplified model was shown to provide adequate results for the shape of the melt-crystal interface and the temperature field in an opaque (Ge) and a transparent (CsI:Tl) crystal. The proposed model is used for identification of the growth setup as a control object, for synthesis of a digital controller (a PID controller at the present stage) and, finally, in on-line simulations of crystal growth control.
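
    The digital PID controller mentioned above can be sketched in its simplest discrete positional form; the gains, sampling time, and toy first-order plant below are hypothetical illustrations, not the controller synthesized in the paper:

```python
class DiscretePID:
    """Positional discrete PID: u_k = Kp*e_k + Ki*Ts*sum(e) + Kd*(e_k - e_{k-1})/Ts."""
    def __init__(self, kp: float, ki: float, kd: float, ts: float):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.ts
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.ts
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a hypothetical first-order thermal plant dT/dt = u - 0.5*T to setpoint 1.0
pid = DiscretePID(kp=2.0, ki=1.0, kd=0.1, ts=0.1)
temp = 0.0
for _ in range(200):  # 20 s of simulated time, Euler integration
    temp += 0.1 * (pid.step(1.0, temp) - 0.5 * temp)
print(round(temp, 2))  # settles near the setpoint 1.0
```

    The integral term is what removes the steady-state offset a pure proportional controller would leave against the plant's self-cooling.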

  10. Modeling the Earth System, volume 3

    NASA Technical Reports Server (NTRS)

    Ojima, Dennis (Editor)

    1992-01-01

    The topics covered fall under the following headings: critical gaps in the Earth system conceptual framework; development needs for simplified models; and validating Earth system models and their subcomponents.

  11. Predicting two-year mortality from discharge after acute coronary syndrome: An internationally-based risk score.

    PubMed

    Pocock, Stuart J; Huo, Yong; Van de Werf, Frans; Newsome, Simon; Chin, Chee Tang; Vega, Ana Maria; Medina, Jesús; Bueno, Héctor

    2017-08-01

    Long-term risk of post-discharge mortality associated with acute coronary syndrome remains a concern. The development of a model to reliably estimate two-year mortality risk from hospital discharge post-acute coronary syndrome will help guide treatment strategies. EPICOR (long-tErm follow uP of antithrombotic management patterns In acute CORonary syndrome patients, NCT01171404) and EPICOR Asia (EPICOR Asia, NCT01361386) are prospective observational studies of 23,489 patients hospitalized for an acute coronary syndrome event, who survived to discharge and were then followed up for two years. Patients were enrolled from 28 countries across Europe, Latin America and Asia. Risk scoring for two-year all-cause mortality risk was developed using identified predictive variables and forward stepwise Cox regression. Goodness-of-fit and discriminatory power were estimated. Within two years of discharge 5.5% of patients died. We identified 17 independent mortality predictors: age, low ejection fraction, no coronary revascularization/thrombolysis, elevated serum creatinine, poor EQ-5D score, low haemoglobin, previous cardiac or chronic obstructive pulmonary disease, elevated blood glucose, on diuretics or an aldosterone inhibitor at discharge, male sex, low educational level, in-hospital cardiac complications, low body mass index, ST-segment elevation myocardial infarction diagnosis, and Killip class. Geographic variation in mortality risk was seen following adjustment for other predictive variables. The developed risk-scoring system provided excellent discrimination (c-statistic = 0.80, 95% confidence interval = 0.79-0.82) with a steep gradient in two-year mortality risk: >25% (top decile) vs. ~1% (bottom quintile). A simplified risk model with 11 predictors gave only slightly weaker discrimination (c-statistic = 0.79, 95% confidence interval = 0.78-0.81). This risk score for two-year post-discharge mortality in acute coronary syndrome patients (www.acsrisk.org) can facilitate identification of high-risk patients and help guide tailored secondary prevention measures.
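
    The c-statistic quoted above measures discrimination: the probability that a randomly chosen patient who died received a higher risk score than one who survived. For a binary outcome it reduces to simple pair counting, as in this generic sketch with hypothetical scores (not the study's exact, survival-adjusted computation):

```python
def c_statistic(scores, events):
    """Fraction of (event, non-event) pairs in which the event case
    received the higher risk score; ties count one half."""
    pos = [s for s, e in zip(scores, events) if e]
    neg = [s for s, e in zip(scores, events) if not e]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

# Hypothetical risk scores; events = death within two years
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
events = [1,   0,   1,   0,   0]
print(c_statistic(scores, events))  # 5 of 6 pairs concordant ≈ 0.833
```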

  12. Regional-scale brine migration along vertical pathways due to CO2 injection - Part 2: A simulated case study in the North German Basin

    NASA Astrophysics Data System (ADS)

    Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger

    2017-06-01

    Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). 
In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.

  13. ePORT, NASA's Computer Database Program for System Safety Risk Management Oversight (Electronic Project Online Risk Tool)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul W.

    2008-01-01

    ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage program/project risk management processes. This presentation will briefly cover standard risk management procedures, then thoroughly cover NASA's risk management tool, ePORT. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's/project's size and budget. By providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.

  14. On the joint inversion of geophysical data for models of the coupled core-mantle system

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1991-01-01

    Joint inversion of magnetic, earth rotation, geoid, and seismic data for a unified model of the coupled core-mantle system is proposed and shown to be possible. A sample objective function is offered and simplified by targeting results from independent inversions and summary travel time residuals instead of original observations. These data are parameterized in terms of a very simple, closed model of the topographically coupled core-mantle system. Minimization of the simplified objective function leads to a nonlinear inverse problem; an iterative method for solution is presented. Parameterization and method are emphasized; numerical results are not presented.

  15. Finite element strategies to satisfy clinical and engineering requirements in the field of percutaneous valves.

    PubMed

    Capelli, Claudio; Biglino, Giovanni; Petrini, Lorenza; Migliavacca, Francesco; Cosentino, Daria; Bonhoeffer, Philipp; Taylor, Andrew M; Schievano, Silvia

    2012-12-01

    Finite element (FE) modelling can be a very resourceful tool in the field of cardiovascular devices. To ensure result reliability, FE models must be validated experimentally against physical data. Their clinical application (e.g., patients' suitability, morphological evaluation) also requires fast simulation process and access to results, while engineering applications need highly accurate results. This study shows how FE models with different mesh discretisations can suit clinical and engineering requirements for studying a novel device designed for percutaneous valve implantation. Following sensitivity analysis and experimental characterisation of the materials, the stent-graft was first studied in a simplified geometry (i.e., compliant cylinder) and validated against in vitro data, and then in a patient-specific implantation site (i.e., distensible right ventricular outflow tract). Different meshing strategies using solid, beam and shell elements were tested. Results showed excellent agreement between computational and experimental data in the simplified implantation site. Beam elements were found to be convenient for clinical applications, providing reliable results in less than one hour in a patient-specific anatomical model. Solid elements remain the FE choice for engineering applications, albeit more computationally expensive (>100 times). This work also showed how information on device mechanical behaviour differs when acquired in a simplified model as opposed to a patient-specific model.

  16. Approximate method for calculating free vibrations of a large-wind-turbine tower structure

    NASA Technical Reports Server (NTRS)

    Das, S. C.; Linscott, B. S.

    1977-01-01

    A set of ordinary differential equations was derived for a simplified structural-dynamic lumped-mass model of a typical large wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and other analytical predictions.
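
    Dunkerley's equation combines the natural frequencies of the component subsystems into a lower-bound estimate of the fundamental frequency; a minimal sketch, with hypothetical component frequencies:

```python
from math import sqrt

def dunkerley_fundamental(component_freqs):
    """Dunkerley's approximation: 1/f1^2 = sum(1/fi^2), where fi is the
    natural frequency of the system with the i-th mass acting alone.
    The estimate is always a lower bound on the true fundamental."""
    return 1.0 / sqrt(sum(1.0 / f ** 2 for f in component_freqs))

# Two hypothetical subsystem frequencies of 10 Hz each
print(dunkerley_fundamental([10.0, 10.0]))  # 10/sqrt(2) ≈ 7.07 Hz
```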

  17. Exact Solution of the Gyration Radius of an Individual's Trajectory for a Simplified Human Regular Mobility Model

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Yong; Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong

    2011-12-01

    We propose a simplified human regular mobility model to simulate an individual's daily travel with three sequential activities: commuting to workplace, going to do leisure activities and returning home. With the assumption that the individual has a constant travel speed and inferior limit of time at home and in work, we prove that the daily moving area of an individual is an ellipse, and finally obtain an exact solution of the gyration radius. The analytical solution captures the empirical observation well.
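
    The gyration radius itself has a standard empirical definition that can be computed directly from a set of visited positions. The generic sketch below uses that definition with a hypothetical home-work-leisure loop; the paper's contribution is the closed-form solution for the elliptical daily range, which is not reproduced here:

```python
from math import sqrt, hypot

def gyration_radius(points):
    """Radius of gyration of a set of visited positions:
    r_g = sqrt(mean squared distance from the centre of mass)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sqrt(sum(hypot(x - cx, y - cy) ** 2 for x, y in points) / n)

# Hypothetical daily loop: home, workplace, leisure location (km)
print(gyration_radius([(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]))  # ≈ 2.16
```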

  18. ON SOME MATHEMATICAL PROBLEMS SUGGESTED BY BIOLOGICAL SCHEMES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luehr, C.

    1958-08-01

    A simplified model of a population which reproduces asexually and is subject to random mutations, implying improvement in the chances of survival and procreation, is treated by a numerical calculation. The behavior of such a system is then summarized by an analytical formula. The paper is intended as the first of a series devoted to mathematical studies of simplified genetic situations. (auth)

  19. Dental Caries and Enamel Defects in Very Low Birth Weight Adolescents

    PubMed Central

    Nelson, S.; Albert, J.M.; Lombardi, G.; Wishnek, S.; Asaad, G.; Kirchner, H.L.; Singer, L.T.

    2011-01-01

    Objectives The purpose of this study was to examine developmental enamel defects and dental caries in very low birth weight adolescents with high risk (HR-VLBW) and low risk (LR-VLBW) compared to full-term (term) adolescents. Methods The sample consisted of 224 subjects (80 HR-VLBW, 59 LR-VLBW, 85 term adolescents) recruited from an ongoing longitudinal study. Sociodemographic and medical information was available from birth. Dental examination of the adolescent at the 14-year visit included: enamel defects (opacity and hypoplasia); decayed, missing, filled teeth of incisors and molars (DMFT-IM) and of overall permanent teeth (DMFT); Simplified Oral Hygiene Index for debris/calculus on teeth, and sealant presence. A caregiver questionnaire completed simultaneously assessed dental behavior, access, insurance status and prevention factors. Hierarchical analysis utilized the zero-inflated negative binomial model and zero-inflated Poisson model. Results The zero-inflated negative binomial model controlling for sociodemographic variables indicated that the LR-VLBW group had an estimated 75% increase (p < 0.05) in number of demarcated opacities in the incisors and first molar teeth compared to the term group. Hierarchical modeling indicated that demarcated opacities were a significant predictor of DMFT-IM after control for relevant covariates. The term adolescents had significantly increased DMFT-IM and DMFT scores compared to the LR-VLBW adolescents. Conclusion LR-VLBW was a significant risk factor for increased enamel defects in the permanent incisors and first molars. Term children had increased caries compared to the LR-VLBW group. The effect of birth group and enamel defects on caries has to be investigated longitudinally from birth. PMID:20975268

  20. Advantage of the modified Lunn-McNeil technique over Kalbfleisch-Prentice technique in competing risks

    NASA Astrophysics Data System (ADS)

    Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.

    2002-03-01

    Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas; for example, it can be used to analyze the failure times of aircraft crashes. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study compared the inference of the models using the Root Mean Square Error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
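
    The RMSE criterion used to compare the two techniques is the root of the mean squared deviation of the estimated coefficients from their true values; a sketch with made-up coefficient values (the simulation's actual parameters are not given in the abstract):

```python
from math import sqrt

def rmse(estimates, truth):
    """Root Mean Square Error between estimated and true parameter values."""
    return sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / len(truth))

# Hypothetical estimated Cox coefficients from two techniques vs. the truth
true_betas  = [0.5, -1.0, 2.0]
technique_a = [0.6, -0.9, 2.2]   # e.g. modified Lunn-McNeil
technique_b = [0.8, -1.3, 1.6]   # e.g. Kalbfleisch-Prentice
print(rmse(technique_a, true_betas) < rmse(technique_b, true_betas))  # True: a is closer
```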

  1. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
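
    The screening strategy recommended above (cheap sampling-based measures first, variance-based methods on the survivors) can be sketched with a pure-Python Spearman rank correlation; the model and input samples below are hypothetical, and ties are not handled:

```python
def _ranks(values):
    """Ranks of distinct values (no tie handling needed for this sketch)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical exposure model: output driven by x1, nearly independent of x2
x1 = [0.1, 0.4, 0.2, 0.9, 0.6, 0.3, 0.8, 0.5]
x2 = [0.5, 0.7, 0.1, 0.3, 0.8, 0.6, 0.2, 0.4]
y = [a ** 2 for a in x1]  # monotone (non-linear) in x1
print(abs(spearman(x1, y)), abs(spearman(x2, y)))  # strong vs. weak dependence
```

    An input whose rank correlation is near zero across time scales is a candidate for exclusion before the more expensive Sobol/FAST runs.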

  2. Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain

    PubMed Central

    Chis Ster, Irina; Ferguson, Neil M.

    2007-01-01

    Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
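
    The MCMC machinery described above can be illustrated with a minimal random-walk Metropolis sampler on a toy one-parameter target; this is a generic sketch of the algorithm, not the paper's farm-level transmission likelihood:

```python
import math
import random

def metropolis(log_post, x0, steps=5000, width=0.5, seed=1):
    """Random-walk Metropolis: propose x' ~ N(x, width), accept with
    probability min(1, post(x')/post(x))."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        prop = x + random.gauss(0.0, width)
        lp_prop = log_post(prop)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy target: posterior of a transmission parameter shaped like N(0.7, 0.2)
target = lambda x: -0.5 * ((x - 0.7) / 0.2) ** 2
chain = metropolis(target, x0=0.0)
burned = chain[1000:]  # discard burn-in
print(round(sum(burned) / len(burned), 1))  # posterior mean near 0.7
```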

  3. Gradient retention prediction of acid-base analytes in reversed phase liquid chromatography: a simplified approach for acetonitrile-water mobile phases.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2014-11-28

    In previous work, a two-parameter model to predict chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some previous experimental work to get a suitable description of the pKa change with the mobile phase composition. In the present study this previous experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on their functional group. Forced by this new approach, other simplifications regarding the retention of the totally neutral and totally ionized species also had to be performed. After the simplifications were applied, new prediction values were obtained and compared with the previously acquired experimental data. The simplified model gave pretty good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
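
    The retention of a partially ionized analyte is conventionally modelled as a mole-fraction-weighted average of the retention of the neutral and fully ionized species, with the ionized fraction given by the Henderson-Hasselbalch relation. The sketch below uses that standard two-species model with hypothetical numbers; the paper's group-specific pKa coefficient equations are not reproduced:

```python
def ionized_fraction(ph: float, pka: float) -> float:
    """Henderson-Hasselbalch fraction of the ionized form of a
    monoprotic acid: f = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def retention_factor(ph: float, pka: float, k_neutral: float, k_ionized: float) -> float:
    """Retention as the weighted average of neutral and ionized species."""
    f = ionized_fraction(ph, pka)
    return f * k_ionized + (1.0 - f) * k_neutral

# At pH = pKa the two forms are equimolar, so retention is the midpoint
print(retention_factor(ph=4.0, pka=4.0, k_neutral=10.0, k_ionized=1.0))  # 5.5
```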

  4. Research study on stabilization and control: Modern sampled-data control theory. Continuous and discrete describing function analysis of the LST system. [with emphasis on the control moment gyroscope control loop

    NASA Technical Reports Server (NTRS)

    Kuo, B. C.; Singh, G.

    1974-01-01

    The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for CMG functional linearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.

  5. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    USGS Publications Warehouse

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Based on the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach to soil quality grading based on classical sampling techniques and a disordered multiclassification Logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and the c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan county in Guangdong province. A disordered Logistic classifier model was then built, and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicated that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset using this method, which changes the traditional method of soil quality grade evaluation. © 2011 IEEE.
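
    Determining a sample capacity under a given confidence level and estimation accuracy is classically done with the proportion-based sample-size formula; the sketch below assumes that standard formula, since the paper's exact derivation is not given in the abstract:

```python
from math import ceil

def sample_capacity(z: float = 1.96, p: float = 0.5, d: float = 0.05) -> int:
    """Classical sample-size formula for estimating a proportion:
    n = z^2 * p * (1 - p) / d^2, with z the normal quantile for the
    confidence level, p the (worst-case) proportion, and d the
    tolerated estimation error."""
    return ceil(z ** 2 * p * (1.0 - p) / d ** 2)

# 95% confidence, worst-case p = 0.5, 5% estimation accuracy
print(sample_capacity())  # 385
```

    The resulting n would bound how many records the c-means step needs to retain from the full evaluation database.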

  6. Prediction of Waitlist Mortality in Adult Heart Transplant Candidates: The Candidate Risk Score.

    PubMed

    Jasseron, Carine; Legeai, Camille; Jacquelinet, Christian; Leprince, Pascal; Cantrelle, Christelle; Audry, Benoît; Porcher, Raphael; Bastien, Olivier; Dorent, Richard

    2017-09-01

    The cardiac allocation system in France is currently based on urgency and geography. Medical urgency is defined by therapies without considering objective patient mortality risk factors. This study aimed to develop a waitlist mortality risk score from commonly available candidate variables. The study included all patients, aged 16 years or older, registered on the national registry CRISTAL for first single-organ heart transplantation between January 2010 and December 2014. This population was randomly divided in a 2:1 ratio into derivation and validation cohorts. The association of variables at listing with 1-year waitlist death or delisting for worsening medical condition was assessed within the derivation cohort. The predictors were used to generate a candidate risk score (CRS). Validation of the CRS was performed in the validation cohort. Concordance probability estimation (CPE) was used to evaluate the discriminative capacity of the models. During the study period, 2333 patients were newly listed. The derivation (n = 1555) and validation (n = 778) cohorts were similar. Short-term mechanical circulatory support, natriuretic peptide decile, glomerular filtration rate, and total bilirubin level were included in a simplified model and incorporated into the score. The concordance probability estimate of the CRS was 0.73 in the derivation cohort and 0.71 in the validation cohort. The correlation between observed and expected 1-year waitlist mortality in the validation cohort was 0.87. The candidate risk score provides an accurate objective prediction of waitlist mortality. It is currently being used to develop a modified cardiac allocation system in France.

  7. A Comparison of Simplified Two-dimensional Flow Models Exemplified by Water Flow in a Cavern

    NASA Astrophysics Data System (ADS)

    Prybytak, Dzmitry; Zima, Piotr

    2017-12-01

    The paper shows the results of a comparison of simplified models describing a two-dimensional water flow in the example of a water flow through a straight channel sector with a cavern. The following models were tested: the two-dimensional potential flow model, the Stokes model and the Navier-Stokes model. In order to solve the first two, the boundary element method was employed, whereas to solve the Navier-Stokes equations, the open-source code library OpenFOAM was applied. The results of numerical solutions were compared with the results of measurements carried out on a test stand in a hydraulic laboratory. The measurements were taken with an ADV probe (Acoustic Doppler Velocimeter). Finally, differences between the results obtained from the mathematical models and the results of laboratory measurements were analysed.

  8. [Simplified models of analysis of the sources of risk and biomechanical overload in craft industries: application experience in wooden barrel making].

    PubMed

    Pressiani, Sofia

    2011-01-01

    Barrel making is an ancient art, evidence of which is found in paintings from the Egyptian era in the Mediterranean civilizations; since the barrel was an ideal shipping container, its use spread quickly and on a wide scale. The study was conducted in a small, high-quality craft industry, applying the method that is the topic of this volume, i.e., risk pre-mapping, by which it is possible to identify the presence or absence of occupational risk and any need for further evaluation. The study showed the presence of health risks for workers, which were mainly physical (noise and vibrations produced by the machinery and equipment typical of the craft) together with risks from hardwood dust, in addition to the traditional risks of biomechanical overload of the upper limbs. Today, the barrel maker is a professional craft of excellence and quality, dedicated to the creation of large barrels and vats using different wood species, chosen for the organoleptic qualities intended to influence the final flavour of the wine: the manual art and the equipment are ancient, but the technology used to achieve the final product is necessarily advanced. So far, issues of safety and health risks related to this type of industry have received little attention and the risks have been poorly evaluated, with the possibility that occupational diseases may occur among the operators, particularly of the musculoskeletal type.

  9. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
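
    The MAP classification step can be illustrated with a much-reduced sketch: a single Gaussian per class, classifying each pixel independently. This is not the HMGMM itself, which adds Gauss mixtures per class and a hidden Markov spatial prior on top of this baseline.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def map_classify(pixel, class_models, priors):
    """Assign the class maximizing log prior + Gaussian log likelihood.
    A single-Gaussian, spatially independent sketch of the MAP rule."""
    best, best_lp = None, -math.inf
    for c, (mean, var) in class_models.items():
        lp = math.log(priors[c]) + gaussian_logpdf(pixel, mean, var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```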

  10. Multiphase modeling of channelized pyroclastic density currents and the effect of confinement on mobility and entrainment

    NASA Astrophysics Data System (ADS)

    Kubo, A. I.; Dufek, J.

    2017-12-01

    Around explosive volcanic centers such as Mount Saint Helens, pyroclastic density currents (PDCs) pose a great risk to life and property. Understanding of the mobility and dynamics of PDCs and other gravity currents is vital to mitigating hazards of future eruptions. Evidence from pyroclastic deposits at Mount Saint Helens and one-dimensional modeling suggest that channelization of flows effectively increases run out distances. Dense flows are thought to scour and erode the bed leading to confinement for subsequent flows and could result in significant changes to predicted runout distance and mobility. Here, we present the results of three-dimensional multiphase models comparing confined and unconfined flows using simplified geometries. We focus on bed stress conditions as a proxy for conditions that could influence subsequent erosion and self-channelization. We also explore the controls on gas entrainment in all scenarios to determine how confinement impacts the particle concentration gradient, granular interactions, and mobility.

  11. Research on shock wave characteristics in the isolator of central strut rocket-based combined cycle engine under Ma5.5

    NASA Astrophysics Data System (ADS)

    Wei, Xianggeng; Xue, Rui; Qin, Fei; Hu, Chunbo; He, Guoqiang

    2017-11-01

    A numerical calculation of shock wave characteristics in the isolator of a central strut rocket-based combined cycle (RBCC) engine fueled by kerosene was carried out in this paper. A 3D numerical model was established by the DES method. The kerosene chemical kinetics used a 9-component, 12-step simplified mechanism model. Effects of fuel equivalence ratio, inflow total temperature, and central strut rocket on-off state on shock wave characteristics were studied at Mach 5.5. Results demonstrated that with increasing equivalence ratio, the leading shock wave moves upstream, raising the possibility of inlet unstart. However, the leading shock wave moves downstream as the inflow total temperature rises. After the central strut rocket is shut down, the leading shock wave moves downstream, which reduces the risk of inlet unstart. The state of the shear layer formed by the strut rocket jet flow and the inflow can significantly influence the shock train structure.

  12. Strategic reasoning and bargaining in catastrophic climate change games

    NASA Astrophysics Data System (ADS)

    Verendel, Vilhelm; Johansson, Daniel J. A.; Lindgren, Kristian

    2016-03-01

    Two decades of international negotiations show that agreeing on emission levels for climate change mitigation is a hard challenge. However, if early warning signals were to show an upcoming tipping point with catastrophic damage, theory and experiments suggest this could simplify collective action to reduce greenhouse gas emissions. At the actual threshold, no country would have a free-ride incentive to increase emissions over the tipping point, but it remains for countries to negotiate their emission levels to reach these agreements. We model agents bargaining for emission levels using strategic reasoning to predict emission bids by others and ask how this affects the possibility of reaching agreements that avoid catastrophic damage. It is known that policy elites often use a higher degree of strategic reasoning, and in our model this increases the risk for climate catastrophe. Moreover, some forms of higher strategic reasoning make agreements to reduce greenhouse gases unstable. We use empirically informed levels of strategic reasoning when simulating the model.

  13. A Methodology for the Integration of a Mechanistic Source Term Analysis in a Probabilistic Framework for Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew

    GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of PRA methodologies to conduct a mechanistic source term (MST) analysis for event sequences that could result in the release of radionuclides. The MST analysis seeks to realistically model and assess the transport, retention, and release of radionuclides from the reactor to the environment. The MST methods developed during this project seek to satisfy the requirements of the Mechanistic Source Term element of the ASME/ANS Non-LWR PRA standard. The MST methodology consists of separate analysis approaches for risk-significant and non-risk-significant event sequences that may result in the release of radionuclides from the reactor. For risk-significant event sequences, the methodology focuses on a detailed assessment, using mechanistic models, of radionuclide release from the fuel, transport through and release from the primary system, transport in the containment, and finally release to the environment. The analysis approach for non-risk-significant event sequences examines the possibility of large radionuclide releases due to events such as re-criticality or the complete loss of radionuclide barriers. This paper provides details on the MST methodology, including the interface between the MST analysis and other elements of the PRA, and provides a simplified example MST calculation for a sodium fast reactor.

  14. Development and validation of an acute kidney injury risk index for patients undergoing general surgery: results from a national data set.

    PubMed

    Kheterpal, Sachin; Tremper, Kevin K; Heung, Michael; Rosenberg, Andrew L; Englesbe, Michael; Shanks, Amy M; Campbell, Darrell A

    2009-03-01

    The authors sought to identify the incidence, risk factors, and mortality impact of acute kidney injury (AKI) after general surgery using a large and representative national clinical data set. The 2005-2006 American College of Surgeons-National Surgical Quality Improvement Program participant use data file is a compilation of outcome data from general surgery procedures performed in 121 US medical centers. The primary outcome was AKI within 30 days, defined as an increase in serum creatinine of at least 2 mg/dl or acute renal failure necessitating dialysis. A variety of patient comorbidities and operative characteristics were evaluated as possible predictors of AKI. A logistic regression full model fit was used to create an AKI model and risk index. Thirty-day mortality among patients with and without AKI was compared. Of 152,244 operations reviewed, 75,952 met the inclusion criteria, and 762 (1.0%) were complicated by AKI. The authors identified 11 independent preoperative predictors: age 56 yr or older, male sex, emergency surgery, intraperitoneal surgery, diabetes mellitus necessitating oral therapy, diabetes mellitus necessitating insulin therapy, active congestive heart failure, ascites, hypertension, mild preoperative renal insufficiency, and moderate preoperative renal insufficiency. The c statistic for a simplified risk index was 0.80 in the derivation and validation cohorts. Class V patients (six or more risk factors) had a 9% incidence of AKI. Overall, patients experiencing AKI had an eightfold increase in 30-day mortality. Approximately 1% of general surgery cases are complicated by AKI. The authors have developed a robust risk index based on easily identified preoperative comorbidities and patient characteristics.
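
    The simplified risk index assigns patients to classes by counting how many of the 11 preoperative predictors are present. The abstract only states that Class V means six or more factors, so the lower class boundaries in this sketch are assumed for illustration.

```python
RISK_FACTORS = [
    "age >= 56", "male sex", "emergency surgery", "intraperitoneal surgery",
    "diabetes (oral therapy)", "diabetes (insulin therapy)",
    "active congestive heart failure", "ascites", "hypertension",
    "mild preoperative renal insufficiency",
    "moderate preoperative renal insufficiency",
]

def aki_risk_class(present):
    """Map the number of recognized risk factors present to a class I-V.
    Boundaries below class V (six or more factors) are assumed."""
    n = sum(1 for f in present if f in RISK_FACTORS)
    if n >= 6:
        return "V"
    if n == 5:
        return "IV"
    if n == 4:
        return "III"
    if n == 3:
        return "II"
    return "I"
```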

  15. A simplified donor risk index for predicting outcome after deceased donor kidney transplantation.

    PubMed

    Watson, Christopher J E; Johnson, Rachel J; Birch, Rhiannon; Collett, Dave; Bradley, J Andrew

    2012-02-15

    We sought to determine the deceased donor factors associated with outcome after kidney transplantation and to develop a clinically applicable Kidney Donor Risk Index. Data from the UK Transplant Registry on 7620 adult recipients of adult deceased donor kidney transplants between 2000 and 2007 inclusive were analyzed. Donor factors potentially influencing transplant outcome were investigated using Cox regression, adjusting for significant recipient and transplant factors. A United Kingdom Kidney Donor Risk Index was derived from the model and validated. Donor age was the most significant factor predicting poor transplant outcome (hazard ratio for 18-39 and 60+ years relative to 40-59 years was 0.78 and 1.49, respectively, P<0.001). A history of donor hypertension was also associated with increased risk (hazard ratio 1.30, P=0.001), and increased donor body weight, longer hospital stay before death, and use of adrenaline were also significantly associated with poorer outcomes up to 3 years posttransplant. Other donor factors including donation after circulatory death, history of cardiothoracic disease, diabetes history, and terminal creatinine were not significant. A donor risk index based on the five significant donor factors was derived and confirmed to be prognostic of outcome in a validation cohort (concordance statistic 0.62). An index developed in the United States by Rao et al., Transplantation 2009; 88: 231-236, included 15 factors and gave a concordance statistic of 0.63 in the UK context, suggesting that our much simpler model has equivalent predictive ability. A Kidney Donor Risk Index based on five donor variables provides a clinically useful tool that may help with organ allocation and informed consent.
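
    An index of this form is typically the product of hazard ratios relative to a reference donor. The sketch below uses the hazard ratios quoted in the abstract (age bands, hypertension) and placeholder values, marked as assumed, for the factors whose weights are not reported; it is not the published UK index.

```python
# Hazard ratios quoted in the abstract; the remaining weights
# (body weight, hospital stay, adrenaline use) are assumed placeholders.
HR_AGE = {"18-39": 0.78, "40-59": 1.00, "60+": 1.49}
HR_HYPERTENSION = 1.30
HR_PER_10KG_OVER_70 = 1.05   # assumed
HR_PER_DAY_STAY = 1.01       # assumed
HR_ADRENALINE = 1.10         # assumed

def uk_kdri(age_band, hypertension, weight_kg, stay_days, adrenaline):
    """Multiplicative donor risk index relative to a reference donor
    (age 40-59, normotensive, 70 kg, no hospital stay, no adrenaline)."""
    index = HR_AGE[age_band]
    if hypertension:
        index *= HR_HYPERTENSION
    index *= HR_PER_10KG_OVER_70 ** max(0.0, (weight_kg - 70) / 10)
    index *= HR_PER_DAY_STAY ** stay_days
    if adrenaline:
        index *= HR_ADRENALINE
    return index
```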

  16. Predicting the Individual Risk of Acute Severe Colitis at Diagnosis

    PubMed Central

    Cesarini, Monica; Collins, Gary S.; Rönnblom, Anders; Santos, Antonieta; Wang, Lai Mun; Sjöberg, Daniel; Parkes, Miles; Keshav, Satish

    2017-01-01

    Background and Aims: Acute severe colitis [ASC] is associated with major morbidity. We aimed to develop and externally validate an index that predicted ASC within 3 years of diagnosis. Methods: The development cohort included patients aged 16–89 years, diagnosed with ulcerative colitis [UC] in Oxford and followed for 3 years. Primary outcome was hospitalization for ASC, excluding patients admitted within 1 month of diagnosis. Multivariable logistic regression examined the adjusted association of seven risk factors with ASC. Backwards elimination produced a parsimonious model that was simplified to create an easy-to-use index. External validation occurred in separate cohorts from Cambridge, UK, and Uppsala, Sweden. Results: The development cohort [Oxford] included 34/111 patients who developed ASC within a median 14 months [range 1–29]. The final model applied the sum of 1 point each for extensive disease, C-reactive protein [CRP] > 10mg/l, or haemoglobin < 12g/dl F or < 14g/dl M at diagnosis, to give a score from 0/3 to 3/3. This predicted a 70% risk of developing ASC within 3 years [score 3/3]. Validation cohorts included different proportions with ASC [Cambridge = 25/96; Uppsala = 18/298]. Of those scoring 3/3 at diagnosis, 18/18 [Cambridge] and 12/13 [Uppsala] subsequently developed ASC. Discriminant ability [c-index, where 1.0 = perfect discrimination] was 0.81 [Oxford], 0.95 [Cambridge], 0.97 [Uppsala]. Internal validation using bootstrapping showed good calibration, with similar predicted risk across all cohorts. A nomogram predicted individual risk. Conclusions: An index applied at diagnosis reliably predicts the risk of ASC within 3 years in different populations. Patients with a score 3/3 at diagnosis may merit early immunomodulator therapy. PMID:27647858
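
    The scoring rule is fully specified in the abstract (one point each for extensive disease, CRP > 10 mg/l, and low haemoglobin at diagnosis) and translates directly to code:

```python
def asc_score(extensive_disease, crp_mg_l, haemoglobin_g_dl, sex):
    """Index at diagnosis: one point each for extensive disease,
    CRP > 10 mg/l, and anaemia (Hb < 12 g/dl female, < 14 g/dl male).
    Returns an integer score from 0 to 3; 3/3 predicted about a 70%
    risk of acute severe colitis within 3 years in the Oxford cohort."""
    hb_cutoff = 12.0 if sex == "F" else 14.0
    return (int(extensive_disease)
            + int(crp_mg_l > 10)
            + int(haemoglobin_g_dl < hb_cutoff))
```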

  17. A method to assess the population-level consequences of wind energy facilities on bird and bat species: Chapter

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2016-01-01

    For this study, a methodology was developed for assessing impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and estimating impacts of fatalities on population status and trend caused by collisions with wind energy infrastructure. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species with minimal risk. However, the components of this methodology require simplifying assumptions and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality on bird and bat populations.
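
    One component named above, potential biological removal (PBR), follows the standard formula of Wade (1998), PBR = N_min · (R_max / 2) · F_r, where N_min is a minimum population estimate, R_max the maximum population growth rate, and F_r a recovery factor between 0 and 1. A minimal sketch:

```python
def potential_biological_removal(n_min, r_max, recovery_factor):
    """PBR = N_min * (R_max / 2) * F_r (Wade 1998): an upper bound on
    human-caused removals (e.g. turbine collisions) that still allows
    the population to reach or stay near its optimum size."""
    return n_min * 0.5 * r_max * recovery_factor
```

For example, a conservatively estimated population of 1000 birds with a maximum growth rate of 0.1 and a recovery factor of 0.5 yields a PBR of 25 individuals per year.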

  18. Space-based Networking Technology Developments in the Interplanetary Network Directorate Information Technology Program

    NASA Technical Reports Server (NTRS)

    Clare, Loren; Clement, B.; Gao, J.; Hutcherson, J.; Jennings, E.

    2006-01-01

    This paper describes recent development of communications protocols, services, and associated tools targeted at reducing risk, reducing cost, and increasing the efficiency of IND infrastructure and supported mission operations. The space-based networking technologies developed: a) provide differentiated quality of service (QoS) that gives precedence to traffic that users have selected as having the greatest importance and/or time-criticality; b) improve the total value of information to users through the use of QoS prioritization techniques; c) increase operational flexibility and improve command-response turnaround; d) enable a new class of networked and collaborative science missions; e) simplify application interfaces to communications services; and f) reduce risk and cost through a common object model and automated scheduling and communications protocols. Technologies are described in three general areas: communications scheduling, middleware, and protocols. A simulation environment was also developed, providing a comprehensive, quantitative understanding of the technologies' performance within the overall, evolving architecture, as well as the ability to refine and optimize specific components.

  19. Application of Synchrotron Techniques in Environmental Science

    EPA Science Inventory

    The complexity of metal-contaminated sites has been, and continues to be, simplified to a measure of the total metal content. While total metal content is a critical measure in assessing risk of a contaminated site, total metal content alone does not provide predictive insights on the b...

  20. Possibility-induced simplified neutrosophic aggregation operators and their application to multi-criteria group decision-making

    NASA Astrophysics Data System (ADS)

    Şahin, Rıdvan; Liu, Peide

    2017-07-01

    Simplified neutrosophic sets (SNSs) are an appropriate tool for expressing the incompleteness, indeterminacy, and uncertainty of evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which combines two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value between zero and one. Because the existing neutrosophic aggregation models for SNSs cannot effectively fuse these two kinds of information, we extend them and propose two novel possibility-aware neutrosophic aggregation operators, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated using an entropy measure. Finally, a practical example is utilised to show the practicality and effectiveness of the proposed method.
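
    For orientation, the non-possibility baseline that these operators extend is the standard simplified neutrosophic weighted arithmetic averaging, sketched below. The possibility-induced variants in the paper additionally weight each input by its possibility degree, which this sketch omits.

```python
def snwaa(values, weights):
    """Simplified neutrosophic weighted arithmetic averaging.
    Each value is a triple (truth, indeterminacy, falsity) in [0, 1];
    weights are nonnegative and sum to 1. Truth components aggregate as
    1 - prod(1 - T_i)^w_i, indeterminacy and falsity as prod(.)^w_i."""
    t = i = f = 1.0
    for (T, I, F), w in zip(values, weights):
        t *= (1 - T) ** w
        i *= I ** w
        f *= F ** w
    return (1 - t, i, f)
```

The operator is idempotent: averaging identical triples returns the same triple, a basic sanity check for any aggregation operator of this family.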

  1. Search for squarks and gluinos in final states with jets and missing transverse momentum at √s =13 TeV with the ATLAS detector

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2016-07-12

    A search for squarks and gluinos in final states containing hadronic jets and missing transverse momentum, but no electrons or muons, is presented. The data were recorded in 2015 by the ATLAS experiment in √s = 13 TeV proton–proton collisions at the Large Hadron Collider. No excess above the Standard Model background expectation was observed in 3.2 fb⁻¹ of analyzed data. Results are interpreted within simplified models that assume R-parity is conserved and the neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 1.51 TeV for a simplified model incorporating only a gluino octet and the lightest neutralino, assuming the lightest neutralino is massless. For a simplified model involving the strong production of mass-degenerate first- and second-generation squarks, squark masses below 1.03 TeV are excluded for a massless lightest neutralino. Finally, these limits substantially extend the region of supersymmetric parameter space excluded by previous measurements with the ATLAS detector.

  2. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes.

    PubMed

    Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T

    2006-01-21

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.

  3. Determination of the Ephemeris Accuracy for AJISAI, LAGEOS and ETALON Satellites, Obtained with A Simplified Numerical Motion Model Using the ILRS Coordinates

    NASA Astrophysics Data System (ADS)

    Kara, I. V.

    This paper describes a simplified numerical model of passive artificial Earth satellite (AES) motion. The model accuracy is determined using the International Laser Ranging Service (ILRS) high-precision coordinates, which are freely available at http://ilrs.gsfc.nasa.gov. The differential equations of AES motion are solved by the Everhart numerical method of 17th and 19th orders with automatic correction of the integration step. Comparing the AES coordinates computed with the motion model against the ILRS coordinates made it possible to determine the accuracy of the resulting ephemerides. As a result, the discrepancy of the computed Etalon-1 ephemerides from the ILRS data is about 10'' for a one-year ephemeris.

  4. Physical models for the normal YORP and diurnal Yarkovsky effects

    NASA Astrophysics Data System (ADS)

    Golubov, O.; Kravets, Y.; Krugly, Yu. N.; Scheeres, D. J.

    2016-06-01

    We propose an analytic model for the normal Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) and diurnal Yarkovsky effects experienced by a convex asteroid. Both the YORP torque and the Yarkovsky force are expressed as integrals of a universal function over the surface of an asteroid. Although in general this function can only be calculated numerically from the solution of the heat conductivity equation, approximate solutions can be obtained in quadratures for important limiting cases. We consider three such simplified models: Rubincam's approximation (zero heat conductivity), low thermal inertia limit (including the next order correction and thus valid for small heat conductivity), and high thermal inertia limit (valid for large heat conductivity). All three simplified models are compared with the exact solution.

  5. Simplified planar model of a car steering system with rack and pinion and McPherson suspension

    NASA Astrophysics Data System (ADS)

    Knapczyk, J.; Kucybała, P.

    2016-09-01

    The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give the minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the angle calculated for the same wheel based on the Ackermann principle. For a given linear rack displacement, the corresponding angular displacements of the steering arms are determined while simultaneously ensuring the best transmission-angle characteristics (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
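
    The reference angle comes from the Ackermann condition, cot(δ_outer) − cot(δ_inner) = track / wheelbase, so the steering error for a measured linkage output can be sketched as follows (the function and parameter names are illustrative, not from the paper):

```python
import math

def ackermann_outer_angle(inner_deg, wheelbase, track):
    """Ideal outer-wheel angle from the Ackermann condition:
    cot(outer) - cot(inner) = track / wheelbase."""
    inner = math.radians(inner_deg)
    cot_outer = 1.0 / math.tan(inner) + track / wheelbase
    return math.degrees(math.atan(1.0 / cot_outer))

def steering_error(actual_outer_deg, inner_deg, wheelbase, track):
    """Difference between the outer-wheel angle actually produced by the
    linkage and the Ackermann ideal for the same inner-wheel angle."""
    return actual_outer_deg - ackermann_outer_angle(inner_deg, wheelbase, track)
```

For a 2.6 m wheelbase and 1.5 m track, an inner-wheel angle of 20° corresponds to an ideal outer-wheel angle of roughly 16.7°, so the outer wheel always turns less than the inner one.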

  6. A critical examination of the validity of simplified models for radiant heat transfer analysis.

    NASA Technical Reports Server (NTRS)

    Toor, J. S.; Viskanta, R.

    1972-01-01

    The directional effects of the simplified models are examined by comparing experimental data with predictions based on simple and more detailed models for the radiation characteristics of surfaces. Analytical results indicate that the constant-property diffuse and specular models do not yield the upper and lower bounds on local radiant heat flux. In general, the constant-property specular analysis yields higher values of irradiation than the constant-property diffuse analysis. A diffuse surface in the enclosure appears to destroy the effect of specularity of the other surfaces. Semigray and gray analyses predict the irradiation reasonably well provided that the directional properties and the specularity of the surfaces are taken into account. The uniform and nonuniform radiosity diffuse models are in satisfactory agreement with each other.

  7. Assessing uncertainty in radar measurements on simplified meteorological scenarios

    NASA Astrophysics Data System (ADS)

    Molini, L.; Parodi, A.; Rebora, N.; Siccardi, F.

    2006-02-01

    A three-dimensional radar simulator model (RSM) developed by Haase (1998) is coupled with the nonhydrostatic mesoscale weather forecast model Lokal-Modell (LM). The radar simulator is able to model reflectivity measurements by using the following meteorological fields, generated by Lokal Modell, as inputs: temperature, pressure, water vapour content, cloud water content, cloud ice content, rain sedimentation flux and snow sedimentation flux. This work focuses on the assessment of some uncertainty sources associated with radar measurements: absorption by the atmospheric gases, e.g., molecular oxygen, water vapour, and nitrogen; attenuation due to the presence of a highly reflecting structure between the radar and a "target structure". RSM results for a simplified meteorological scenario, consisting of a humid updraft on a flat surface and four cells placed around it, are presented.

  8. A Simplified Three-Phase Model of Equiaxed Solidification for the Prediction of Microstructure and Macrosegregation in Castings

    NASA Astrophysics Data System (ADS)

    Tveito, Knut Omdal; Pakanati, Akash; M'Hamdi, Mohammed; Combeau, Hervé; Založnik, Miha

    2018-04-01

    Macrosegregation is a result of the interplay of various transport mechanisms, including natural convection, solidification shrinkage, and grain motion. Experimental observations also indicate the impact of grain morphology, ranging from dendritic to globular, on macrosegregation formation. To avoid the complexity arising due to modeling of an equiaxed dendritic grain, we present the development of a simplified three-phase, multiscale equiaxed dendritic solidification model based on the volume-averaging method, which accounts for the above-mentioned transport phenomena. The validity of the model is assessed by comparing it with the full three-phase model without simplifications. It is then applied to qualitatively analyze the impact of grain morphology on macrosegregation formation in an industrial scale direct chill cast aluminum alloy ingot.

  9. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.

  10. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
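
    A generic simplified-inelastic iteration of this flavour (not the ANSYPM algorithm itself, whose internals the abstract does not give) is a Neuber-type correction of an elastic solution against a Ramberg-Osgood stress-strain curve, solved here by bisection:

```python
def ramberg_osgood_strain(stress, E, K, n):
    """Total strain (elastic + plastic) on a Ramberg-Osgood curve."""
    return stress / E + (stress / K) ** (1.0 / n)

def neuber_stress(elastic_stress, E, K, n, tol=1e-8):
    """Iteratively find the local stress whose Ramberg-Osgood strain
    satisfies Neuber's rule: sigma * eps = sigma_elastic**2 / E.
    A generic sketch of estimating plastic response from an elastic
    solution; parameters E (modulus), K, n are material properties."""
    target = elastic_stress ** 2 / E
    lo, hi = 0.0, elastic_stress
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * ramberg_osgood_strain(mid, E, K, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the local stress-strain product must equal the elastic energy-like term, the corrected local stress always falls below the fictitious elastic stress once yielding occurs.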

  11. A simplified method for elastic-plastic-creep structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1985-01-01

    A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.

  12. Predicting the need for muscle flap salvage after open groin vascular procedures: a clinical assessment tool.

    PubMed

    Fischer, John P; Nelson, Jonas A; Shang, Eric K; Wink, Jason D; Wingate, Nicholas A; Woo, Edward Y; Jackson, Benjamin M; Kovach, Stephen J; Kanchwala, Suhail

    2014-12-01

    Groin wound complications after open vascular surgery procedures are common, morbid, and costly. The purpose of this study was to generate a simple, validated, clinically usable risk assessment tool for predicting groin wound morbidity after infra-inguinal vascular surgery. A retrospective review of consecutive patients undergoing groin cutdowns for femoral access between 2005-2011 was performed. Patients necessitating salvage flaps were compared to those who did not, and a stepwise logistic regression was performed and validated using a bootstrap technique. Utilising this analysis, a simplified risk score was developed to predict the risk of developing a wound which would necessitate salvage. A total of 925 patients were included in the study. The salvage flap rate was 11.2% (n = 104). Predictors determined by logistic regression included prior groin surgery (OR = 4.0, p < 0.001), prosthetic graft (OR = 2.7, p < 0.001), coronary artery disease (OR = 1.8, p = 0.019), peripheral arterial disease (OR = 5.0, p < 0.001), and obesity (OR = 1.7, p = 0.039). Based upon the respective logistic coefficients, a simplified scoring system was developed to enable the preoperative risk stratification regarding the likelihood of a significant complication which would require a salvage muscle flap. The c-statistic for the regression demonstrated excellent discrimination at 0.89. This study presents a simple, internally validated risk assessment tool that accurately predicts wound morbidity requiring flap salvage in open groin vascular surgery patients. The preoperatively high-risk patient can be identified and selectively targeted as a candidate for a prophylactic muscle flap.
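
A points-based score of this kind is typically built from the logistic regression coefficients, i.e. the logarithms of the reported odds ratios. The sketch below weights each factor by ln(OR); the weights and any threshold are illustrative, not the scoring system published in the study:

```python
import math

# Reported odds ratios from the abstract; ln(OR) used as a stand-in weight.
ODDS_RATIOS = {
    "prior_groin_surgery": 4.0,
    "prosthetic_graft": 2.7,
    "coronary_artery_disease": 1.8,
    "peripheral_arterial_disease": 5.0,
    "obesity": 1.7,
}

def risk_score(factors):
    """Sum ln(OR) for each risk factor present (illustrative only)."""
    return sum(math.log(ODDS_RATIOS[f]) for f in factors if factors[f])
```

A clinical implementation would round such weights to convenient integers and calibrate a cutoff against validation data, as the authors did with their bootstrap-validated regression.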

  13. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective techniques proposed for tinnitus treatment. In order to investigate mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model for tinnitus generation and the effects of sound therapy.
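
The integrate-and-fire neuron mentioned in the abstract can be sketched in a few lines. The leak, threshold, and reset values below are arbitrary, and the homeostatic plasticity of the actual model is omitted:

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire neuron: return spike times (illustrative sketch)."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leak toward rest plus injected current, Euler step of size dt.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest          # reset after each spike
    return spikes
```

In a network model, the external sound-therapy input would enter through `input_current` of the sensory neuron, and a plasticity rule would adjust coupling strengths between such units.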

  14. Influence of Trabecular Bone on Peri-Implant Stress and Strain Based on Micro-CT Finite Element Modeling of Beagle Dog

    PubMed Central

    Liao, Sheng-hui; Zhu, Xing-hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi

    2016-01-01

    The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution of the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The values of equivalent stress at the implant-bone interface in the refined model increased compared with those of the simplified model, whereas the values of strain decreased. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers. PMID:27403424

  15. Influence of Trabecular Bone on Peri-Implant Stress and Strain Based on Micro-CT Finite Element Modeling of Beagle Dog.

    PubMed

    Liao, Sheng-Hui; Zhu, Xing-Hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi

    2016-01-01

    The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution of the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The values of equivalent stress at the implant-bone interface in the refined model increased compared with those of the simplified model, whereas the values of strain decreased. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers.

  16. Analytical investigation of the faster-is-slower effect with a simplified phenomenological model

    NASA Astrophysics Data System (ADS)

    Suzuno, K.; Tomoeda, A.; Ueyama, D.

    2013-11-01

    We investigate the mechanism of the phenomenon called the “faster-is-slower” effect in pedestrian flow studies analytically with a simplified phenomenological model. It is well known that the flow rate is maximized at a certain strength of the driving force in simulations using the social force model when we consider the discharge of self-driven particles through a bottleneck. In this study, we propose a phenomenological and analytical model based on mechanics-based modeling to reveal the mechanism of the phenomenon. We show that our reduced system, with only a few degrees of freedom, still has similar properties to the original many-particle system and that the effect comes from the competition between the driving force and the nonlinear friction in the model. Moreover, we predict the parameter dependences of the effect from our model qualitatively, and they are confirmed numerically by using the social force model.
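
The competition between the driving force and nonlinear friction can be caricatured with a single-particle force balance. The functional forms and constants here are hypothetical stand-ins for the reduced model's terms, not the paper's equations:

```python
def net_driving_minus_friction(v, v0=1.5, tau=0.5, m=80.0, mu=120.0):
    """Net force on one particle: drive toward desired speed v0 minus
    a nonlinear (here quadratic) friction term. All values are invented."""
    driving = m * (v0 - v) / tau   # social-force-style driving term
    friction = mu * v * v          # nonlinear friction stand-in
    return driving - friction
```

Because the friction term grows faster than the driving term with speed, the net force changes sign at some velocity; it is this kind of balance, shifting with the driving strength, that the reduced model analyzes.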

  17. Analyzing Power Supply and Demand on the ISS

    NASA Technical Reports Server (NTRS)

    Thomas, Justin; Pham, Tho; Halyard, Raymond; Conwell, Steve

    2006-01-01

    Station Power and Energy Evaluation Determiner (SPEED) is a Java application program for analyzing the supply and demand aspects of the electrical power system of the International Space Station (ISS). SPEED can be executed on any computer that supports version 1.4 or a subsequent version of the Java Runtime Environment. SPEED includes an analysis module, denoted the Simplified Battery Solar Array Model, which is a simplified engineering model of the ISS primary power system. This simplified model makes it possible to perform analyses quickly. SPEED also includes a user-friendly graphical-interface module, an input file system, a parameter-configuration module, an analysis-configuration-management subsystem, and an output subsystem. SPEED responds to input information on trajectory, shadowing, attitude, and pointing in either a state-of-charge mode or a power-availability mode. In the state-of-charge mode, SPEED calculates battery state-of-charge profiles, given a time-varying power-load profile. In the power-availability mode, SPEED determines the time-varying total available solar array and/or battery power output, given a minimum allowable battery state of charge.
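
The state-of-charge mode described above amounts to integrating net power into the battery. A minimal sketch, assuming a single equivalent battery with made-up capacity and ideal charge/discharge efficiency:

```python
def state_of_charge(load_w, solar_w, capacity_wh, soc0=1.0, dt_h=1.0 / 60):
    """Track battery state of charge (0..1) over a time-varying load profile.

    load_w/solar_w are per-minute power samples in watts; sketch only."""
    soc = soc0
    history = []
    for load, solar in zip(load_w, solar_w):
        net_w = solar - load               # surplus charges, deficit discharges
        soc += net_w * dt_h / capacity_wh
        soc = min(max(soc, 0.0), 1.0)      # clamp to physical bounds
        history.append(soc)
    return history
```

SPEED's power-availability mode is the inverse question: given a minimum allowable state of charge, how much load the arrays and batteries can support at each instant.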

  18. Optimal Asteroid Mass Determination from Planetary Range Observations: A Study of a Simplified Test Model

    NASA Technical Reports Server (NTRS)

    Kuchynka, P.; Laskar, J.; Fienga, A.

    2011-01-01

    Mars ranging observations are available over the past 10 years with an accuracy of a few meters. Such precise measurements of the Earth-Mars distance provide valuable constraints on the masses of the asteroids perturbing both planets. Today more than 30 asteroid masses have thus been estimated from planetary ranging data (see [1] and [2]). Obtaining unbiased mass estimations is nevertheless difficult. Various systematic errors can be introduced by imperfect reduction of spacecraft tracking observations to planetary ranging data. The large number of asteroids and the limited a priori knowledge of their masses are also obstacles to parameter selection. Fitting the mass of a negligible perturber in a model, or conversely omitting a significant perturber, will introduce important bias into the determined asteroid masses. In this communication, we investigate a simplified version of the mass determination problem. Instead of planetary ranging observations from spacecraft or radar data, we consider synthetic ranging observations generated with the INPOP [2] ephemeris for a test model containing 25000 asteroids. We then suggest a method for optimal parameter selection and estimation in this simplified framework.
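
At its core, fitting an asteroid mass to ranging residuals is a linear least-squares problem in the mass perturbation. A toy single-parameter version, with hypothetical partial derivatives of range with respect to the mass (the real problem is jointly over many masses, which is what makes selection hard):

```python
def estimate_mass(partials, residuals):
    """Single-parameter linear least squares: model residual r_i = m * a_i,
    where a_i is the partial of the range w.r.t. the asteroid mass.
    Toy version of fitting a mass to synthetic ranging data."""
    num = sum(a * r for a, r in zip(partials, residuals))
    den = sum(a * a for a in partials)
    return num / den
```

Omitting a significant perturber corresponds to leaving its `a_i` column out of the design matrix, which pushes its signal into the other fitted masses, the bias the abstract warns about.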

  19. On the slow dynamics of near-field acoustically levitated objects under high excitation frequencies

    NASA Astrophysics Data System (ADS)

    Ilssar, Dotan; Bucher, Izhak

    2015-10-01

    This paper introduces a simplified analytical model describing the governing dynamics of near-field acoustically levitated objects. The simplification converts the equation of motion coupled with the partial differential equation of a compressible fluid, into a compact, second order ordinary differential equation, where the local stiffness and damping are transparent. The simplified model allows one to more easily analyse and design near-field acoustic levitation based systems, and it also helps to devise closed-loop controller algorithms for such systems. Near-field acoustic levitation employs fast ultrasonic vibrations of a driving surface and exploits the viscosity and the compressibility of a gaseous medium to achieve average, load carrying pressure. It is demonstrated that the slow dynamics dominates the transient behaviour, while the time-scale associated with the fast, ultrasonic excitation has a small presence in the oscillations of the levitated object. Indeed, the present paper formulates the slow dynamics under an ultrasonic excitation without the need to explicitly consider the latter. The simplified model is compared with a numerical scheme based on Reynolds equation and with experiments, both showing reasonably good results.
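
The compact second-order equation described above can be integrated directly once the local stiffness and damping are known. The sketch below uses explicit Euler and invented parameter values; it is not the paper's model, only the generic mass-spring-damper form it reduces to:

```python
def levitation_height(t_end, m=0.01, k=500.0, c=0.5, h_eq=1e-4, dt=1e-5):
    """Integrate the slow dynamics m*h'' = -c*h' - k*(h - h_eq), where
    h_eq is the equilibrium gap set by the averaged acoustic pressure.
    All parameter values are illustrative."""
    h, v = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (-c * v - k * (h - h_eq)) / m
        v += a * dt
        h += v * dt
    return h
```

The fast ultrasonic excitation never appears explicitly: its averaged effect is folded into the local stiffness `k`, damping `c`, and equilibrium gap `h_eq`, which is the point of the simplification.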

  20. A simplified memory network model based on pattern formations

    NASA Astrophysics Data System (ADS)

    Xu, Kesheng; Zhang, Xiyun; Wang, Chaoqing; Liu, Zonghua

    2014-12-01

    Many experiments have evidenced the transition with different time scales from short-term memory (STM) to long-term memory (LTM) in mammalian brains, while its theoretical understanding is still under debate. To understand its underlying mechanism, it has recently been shown that it is possible to have a long-period rhythmic synchronous firing in a scale-free network, provided the existence of both the high-degree hubs and the loops formed by low-degree nodes. We here present a simplified memory network model to show that the self-sustained synchronous firing can be observed even without these two necessary conditions. This simplified network consists of two loops of coupled excitable neurons with different synaptic conductance and with one node being the sensory neuron to receive an external stimulus signal. This model can be further used to show how the diversity of firing patterns can be selectively formed by varying the signal frequency, duration of the stimulus and network topology, which corresponds to the patterns of STM and LTM with different time scales. A theoretical analysis is presented to explain the underlying mechanism of firing patterns.

  1. APPLICATION OF EPANET TO UNDERSTAND LEAD RELEASE IN PREMISE PLUMBING

    EPA Science Inventory

    This presentation describes the factors affecting lead concentration in tap water using an EPANET hydraulic model for a simplified home model, a realistic home model, and EPA's experimental home plumbing system.

  2. SEDIMENT GEOCHEMICAL MODEL

    EPA Science Inventory

    Until recently, sediment geochemical models (diagenetic models) have been only able to explain sedimentary flux and concentration profiles for a few simplified geochemical cycles (e.g., nitrogen, carbon and sulfur). However with advances in numerical methods, increased accuracy ...

  3. A Simplified Baseband Prefilter Model with Adaptive Kalman Filter for Ultra-Tight COMPASS/INS Integration

    PubMed Central

    Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

    2012-01-01

    COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows its advantage over independent GPS receivers in many scenarios, the federated ultra-tight COMPASS/INS integration has been investigated in this paper, particularly by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as a measurement. Since the parameters related to code tracking error were excluded from the state space of traditional prefilter models, the code/carrier divergence would destroy the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware sampling system for the COMPASS intermediate frequency (IF) signal and the INS accelerometer and gyroscope signals. Field and simulation test results showed almost similar tracking and navigation performances for both the traditional prefilter model and the proposed system; however, the latter largely decreased the computational load. PMID:23012564
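
The adaptation idea, retuning the process noise covariance from the recent state correction sequence, can be shown with a scalar random-walk filter. This is a generic sketch of the mechanism, not the paper's federated filter, and the noise values are invented:

```python
def adaptive_kalman(measurements, r=1.0, q0=0.01, window=10):
    """Scalar Kalman filter whose process noise q is re-estimated from the
    energy of recent state corrections (sketch of the adaptation idea)."""
    x, p, q = 0.0, 1.0, q0
    corrections, estimates = [], []
    for z in measurements:
        p += q                       # predict (random-walk state model)
        k = p / (p + r)              # Kalman gain
        dx = k * (z - x)             # state correction
        x += dx
        p *= (1.0 - k)
        corrections.append(dx * dx)
        if len(corrections) >= window:
            # Adapt process noise to recent correction energy.
            q = sum(corrections[-window:]) / window
        estimates.append(x)
    return estimates
```

When divergence inflates the corrections, the estimated `q` grows and the filter widens its gain instead of trusting a mismatched model, which is the compensation role described in the abstract.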

  4. EOSlib, Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Nathan; Menikoff, Ralph

    2017-02-03

    Equilibrium thermodynamics underpins many of the technologies used throughout theoretical physics, yet verification of the various theoretical models in the open literature remains challenging. EOSlib provides a single, consistent, verifiable implementation of these models, in a single, easy-to-use software package. It consists of three parts: a software library implementing various published equation-of-state (EOS) models; a database of fitting parameters for various materials for these models; and a number of useful utility functions for simplifying thermodynamic calculations such as computing Hugoniot curves or Riemann problem solutions. Ready availability of this library will enable reliable code-to-code testing of equation-of-state implementations, as well as a starting point for more rigorous verification work. EOSlib also provides a single, consistent API for its analytic and tabular EOS models, which simplifies the process of comparing models for a particular application.

  5. A mass action model of a Fibroblast Growth Factor signaling pathway and its simplification.

    PubMed

    Gaffney, E A; Heath, J K; Kwiatkowska, M Z

    2008-11-01

    We consider a kinetic law of mass action model for Fibroblast Growth Factor (FGF) signaling, focusing on the induction of the RAS-MAP kinase pathway via GRB2 binding. Our biologically simple model suffers a combinatorial explosion in the number of differential equations required to simulate the system. In addition to numerically solving the full model, we show that it can be accurately simplified. This requires combining matched asymptotics, the quasi-steady-state hypothesis, and the fact that subsets of the equations decouple asymptotically. Both the full and simplified models reproduce the qualitative dynamics observed experimentally and in previous stochastic models. The simplified model also elucidates both the qualitative features of GRB2 binding and the complex relationship between SHP2 levels, the rate at which SHP2 induces dephosphorylation, and levels of bound GRB2. In addition to providing insight into the important and redundant features of FGF signaling, such work further highlights the usefulness of numerous simplification techniques in the study of mass action models of signal transduction, as also illustrated recently by Borisov and co-workers (Borisov et al. in Biophys. J. 89, 951-966, 2005, Biosystems 83, 152-166, 2006; Kiyatkin et al. in J. Biol. Chem. 281, 19925-19938, 2006). These developments will facilitate the construction of tractable models of FGF signaling, incorporating further biological realism, such as spatial effects or realistic binding stoichiometries, despite a more severe combinatorial explosion associated with the latter.
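
The quasi-steady-state simplification can be illustrated on the smallest mass-action scheme that admits it, E + S ⇌ ES → E + P. The rate constants below are arbitrary; the point is that the reduced model tracks the full one when enzyme is scarce:

```python
def full_mass_action(t_end=10.0, dt=1e-3, e0=0.1, s0=1.0, k1=1.0, km1=1.0, k2=1.0):
    """Full mass-action kinetics for E + S <-> ES -> E + P (explicit Euler)."""
    e, s, es, p = e0, s0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        v_bind = k1 * e * s - km1 * es   # net binding flux
        v_cat = k2 * es                  # catalytic flux
        e += dt * (-v_bind + v_cat)
        s += dt * (-v_bind)
        es += dt * (v_bind - v_cat)
        p += dt * v_cat
    return p

def qss_approximation(t_end=10.0, dt=1e-3, e0=0.1, s0=1.0, k1=1.0, km1=1.0, k2=1.0):
    """Quasi-steady-state (Michaelis-Menten) reduction of the same scheme."""
    km = (km1 + k2) / k1
    s, p = s0, 0.0
    for _ in range(int(t_end / dt)):
        rate = k2 * e0 * s / (km + s)    # ES assumed at quasi-steady state
        s -= dt * rate
        p += dt * rate
    return p
```

The reduction removes one differential equation per complex; in a combinatorially exploding pathway model like the FGF system, the same idea eliminates whole families of equations.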

  6. Physically-based Assessment of Tropical Cyclone Damage and Economic Losses

    NASA Astrophysics Data System (ADS)

    Lin, N.

    2012-12-01

    Estimating damage and economic losses caused by tropical cyclones (TC) is a topic of considerable research interest in many scientific fields, including meteorology, structural and coastal engineering, and actuarial sciences. One approach is based on the empirical relationship between TC characteristics and loss data. Another is to model the physical mechanism of TC-induced damage. In this talk we discuss about the physically-based approach to predict TC damage and losses due to extreme wind and storm surge. We first present an integrated vulnerability model, which, for the first time, explicitly models the essential mechanisms causing wind damage to residential areas during storm passage, including windborne-debris impact and the pressure-debris interaction that may lead, in a chain reaction, to structural failures (Lin and Vanmarcke 2010; Lin et al. 2010a). This model can be used to predict the economic losses in a residential neighborhood (with hundreds of buildings) during a specific TC (Yau et al. 2011) or applied jointly with a TC risk model (e.g., Emanuel et al 2008) to estimate the expected losses over long time periods. Then we present a TC storm surge risk model that has been applied to New York City (Lin et al. 2010b; Lin et al. 2012; Aerts et al. 2012), Miami-Dade County, Florida (Klima et al. 2011), Galveston, Texas (Lickley, 2012), and other coastal areas around the world (e.g., Tampa, Florida; Persian Gulf; Darwin, Australia; Shanghai, China). These physically-based models are applicable to various coastal areas and have the capability to account for the change of the climate and coastal exposure over time. We also point out that, although made computationally efficient for risk assessment, these models are not suitable for regional or global analysis, which has been a focus of the empirically-based economic analysis (e.g., Hsiang and Narita 2012). A future research direction is to simplify the physically-based models, possibly through parameterization, and make connections to the global loss data and economic analysis.

  7. Simplified lipid guidelines: Prevention and management of cardiovascular disease in primary care.

    PubMed

    Allan, G Michael; Lindblad, Adrienne J; Comeau, Ann; Coppola, John; Hudson, Brianne; Mannarino, Marco; McMinis, Cindy; Padwal, Raj; Schelstraete, Christine; Zarnke, Kelly; Garrison, Scott; Cotton, Candra; Korownyk, Christina; McCormack, James; Nickel, Sharon; Kolber, Michael R

    2015-10-01

    To develop clinical practice guidelines for a simplified approach to primary prevention of cardiovascular disease (CVD), concentrating on CVD risk estimation and lipid management for primary care clinicians and their teams; we sought increased contribution from primary care professionals with little or no conflict of interest and focused on the highest level of evidence available. Nine health professionals (4 family physicians, 2 internal medicine specialists, 1 nurse practitioner, 1 registered nurse, and 1 pharmacist) and 1 nonvoting member (pharmacist project manager) comprised the overarching Lipid Pathway Committee (LPC). Member selection was based on profession, practice setting, and location, and members disclosed any actual or potential conflicts of interest. The guideline process was iterative through online posting, detailed evidence review, and telephone and online meetings. The LPC identified 12 priority questions to be addressed. The Evidence Review Group answered these questions. After review of the answers, key recommendations were derived through consensus of the LPC. The guidelines were drafted, refined, and distributed to a group of clinicians (family physicians, other specialists, pharmacists, nurses, and nurse practitioners) and patients for feedback, then refined again and finalized by the LPC. Recommendations are provided on screening and testing, risk assessments, interventions, follow-up, and the role of acetylsalicylic acid in primary prevention. These simplified lipid guidelines provide practical recommendations for prevention and treatment of CVD for primary care practitioners. All recommendations are intended to assist with, not dictate, decision making in conjunction with patients. Copyright© the College of Family Physicians of Canada.

  8. Simplified lipid guidelines

    PubMed Central

    Allan, G. Michael; Lindblad, Adrienne J.; Comeau, Ann; Coppola, John; Hudson, Brianne; Mannarino, Marco; McMinis, Cindy; Padwal, Raj; Schelstraete, Christine; Zarnke, Kelly; Garrison, Scott; Cotton, Candra; Korownyk, Christina; McCormack, James; Nickel, Sharon; Kolber, Michael R.

    2015-01-01

    Objective: To develop clinical practice guidelines for a simplified approach to primary prevention of cardiovascular disease (CVD), concentrating on CVD risk estimation and lipid management for primary care clinicians and their teams; we sought increased contribution from primary care professionals with little or no conflict of interest and focused on the highest level of evidence available. Methods: Nine health professionals (4 family physicians, 2 internal medicine specialists, 1 nurse practitioner, 1 registered nurse, and 1 pharmacist) and 1 nonvoting member (pharmacist project manager) comprised the overarching Lipid Pathway Committee (LPC). Member selection was based on profession, practice setting, and location, and members disclosed any actual or potential conflicts of interest. The guideline process was iterative through online posting, detailed evidence review, and telephone and online meetings. The LPC identified 12 priority questions to be addressed. The Evidence Review Group answered these questions. After review of the answers, key recommendations were derived through consensus of the LPC. The guidelines were drafted, refined, and distributed to a group of clinicians (family physicians, other specialists, pharmacists, nurses, and nurse practitioners) and patients for feedback, then refined again and finalized by the LPC. Recommendations: Recommendations are provided on screening and testing, risk assessments, interventions, follow-up, and the role of acetylsalicylic acid in primary prevention. Conclusion: These simplified lipid guidelines provide practical recommendations for prevention and treatment of CVD for primary care practitioners. All recommendations are intended to assist with, not dictate, decision making in conjunction with patients. PMID:26472792

  9. A simplified conjoint recognition paradigm for the measurement of gist and verbatim memory.

    PubMed

    Stahl, Christoph; Klauer, Karl Christoph

    2008-05-01

    The distinction between verbatim and gist memory traces has furthered the understanding of numerous phenomena in various fields, such as false memory research, research on reasoning and decision making, and cognitive development. To measure verbatim and gist memory empirically, an experimental paradigm and a multinomial measurement model have been proposed but rarely applied. In the present article, a simplified conjoint recognition paradigm and multinomial model is introduced and validated as a measurement tool for the separate assessment of verbatim and gist memory processes. A Bayesian metacognitive framework is applied to validate guessing processes. Extensions of the model toward incorporating the processes of phantom recollection and erroneous recollection rejection are discussed.
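
Multinomial measurement models of this family express response probabilities as processing trees. The sketch below is a hypothetical three-parameter tree (verbatim retrieval v, else gist retrieval g, else guessing b), not the specific model of the article:

```python
def p_accept_target(v, g, b):
    """Probability of accepting a studied target in a hypothetical
    processing tree: verbatim retrieval, else gist, else a guess."""
    return v + (1 - v) * g + (1 - v) * (1 - g) * b
```

Fitting such a model means choosing (v, g, b) so that the predicted multinomial category probabilities match the observed response frequencies across item types, which is how verbatim and gist contributions are separated.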

  10. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.

  11. Energy-state formulation of lumped volume dynamic equations with application to a simplified free piston Stirling engine

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Lorenzo, C. F.

    1979-01-01

    Lumped volume dynamic equations are derived using an energy-state formulation. This technique requires that kinetic and potential energy state functions be written for the physical system being investigated. To account for losses in the system, a Rayleigh dissipation function is also formed. Using these functions, a Lagrangian is formed and using Lagrange's equation, the equations of motion for the system are derived. The results of the application of this technique to a lumped volume are used to derive a model for the free-piston Stirling engine. The model was simplified and programmed on an analog computer. Results are given comparing the model response with experimental data.
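
The derivation sketched above follows the standard Lagrangian formalism with a Rayleigh dissipation function F: with generalized coordinates q_i, kinetic and potential energy functions T and V, and generalized forces Q_i, the equations of motion are

```latex
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
  - \frac{\partial L}{\partial q_i}
  + \frac{\partial F}{\partial \dot{q}_i} = Q_i,
\qquad L = T - V .
```

Writing T, V, and F for each lumped volume and turning this crank is what yields the system equations that were then simplified for the free-piston Stirling engine model.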

  12. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS System. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
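
LOLP can be computed exactly for a small system by enumerating unit outage states, which is precisely what becomes intractable at scale and motivates the simplified formulations. The unit data below are invented:

```python
from itertools import product

def lolp(units, load):
    """Loss-of-load probability: chance that the available capacity of
    independently failing units falls short of the load.
    Each unit is (capacity_mw, forced_outage_rate); exact enumeration."""
    prob = 0.0
    for states in product([0, 1], repeat=len(units)):
        p, cap = 1.0, 0.0
        for (c, q), up in zip(units, states):
            p *= (1 - q) if up else q
            cap += c if up else 0.0
        if cap < load:
            prob += p
    return prob
```

With n units the enumeration costs O(2^n) work per evaluation, and a UC solver must evaluate it inside an optimization loop, hence the appeal of the simplified LOLP formulations the paper studies.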

  13. Simplified Analysis of Pulse Detonation Rocket Engine Blowdown Gasdynamics and Performance

    NASA Technical Reports Server (NTRS)

    Morris, C. I.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modelling task. A simplified model for an idealized, straight-tube, single-shot PDRE blowdown process and thrust determination is described and implemented. In order to form an assessment of the accuracy of the model, the flowfield time history is compared to experimental data from Stanford University. Parametric studies of the effect of mixture stoichiometry, initial fill temperature, and blowdown pressure ratio on the performance of a PDRE are performed using the model. PDRE performance is also compared with a conventional steady-state rocket engine over a range of pressure ratios using similar gasdynamic assumptions.

  14. Field-aligned currents and large-scale magnetospheric electric fields

    NASA Technical Reports Server (NTRS)

    Dangelo, N.

    1979-01-01

    The existence of field-aligned currents (FAC) at northern and southern high latitudes was confirmed by a number of observations, most clearly by experiments on the TRIAD and ISIS 2 satellites. The high-latitude FAC system is used to relate what is presently known about the large-scale pattern of high-latitude ionospheric electric fields and their relation to solar wind parameters. Recently a simplified model was presented for polar cap electric fields. The model is of considerable help in visualizing the large-scale features of FAC systems. A summary of the FAC observations is given. The simplified model is used to visualize how the FAC systems are driven by their generators.

  15. A semi-phenomenological model to predict the acoustic behavior of fully and partially reticulated polyurethane foams

    NASA Astrophysics Data System (ADS)

    Doutres, Olivier; Atalla, Noureddine; Dong, Kevin

    2013-02-01

    This paper proposes simple semi-phenomenological models to predict the sound absorption efficiency of highly porous polyurethane foams from microstructure characterization. In a previous paper [J. Appl. Phys. 110, 064901 (2011)], the authors presented a 3-parameter semi-phenomenological model linking the microstructure properties of fully and partially reticulated isotropic polyurethane foams (i.e., strut length l, strut thickness t, and reticulation rate Rw) to the macroscopic non-acoustic parameters involved in the classical Johnson-Champoux-Allard model (i.e., porosity ϕ, airflow resistivity σ, tortuosity α∞, and viscous Λ and thermal Λ' characteristic lengths). The model was based on existing scaling laws, validated for fully reticulated polyurethane foams, and improved using both geometrical and empirical approaches to account for the presence of membranes closing the pores. This 3-parameter model is applied to six polyurethane foams in this paper and is found to be highly sensitive to the microstructure characterization, particularly to the strut dimensions. A simplified micro-/macro model is then presented. It is based on the cell size Cs and reticulation rate Rw only, assuming that the geometric ratio between strut length l and strut thickness t is known. This simplified model, called the 2-parameter model, considerably simplifies the microstructure characterization procedure. A comparison of the two proposed semi-phenomenological models is presented using six polyurethane foams that are either fully or partially reticulated, isotropic or anisotropic. It is shown that the 2-parameter model is less sensitive to measurement uncertainties than the original model and allows a better estimation of the sound absorption behavior of polyurethane foams.

  16. In-hospital fall-risk screening in 4,735 geriatric patients from the LUCAS project.

    PubMed

    Neumann, L; Hoffmann, V S; Golgert, S; Hasford, J; Von Renteln-Kruse, W

    2013-03-01

    In-hospital falls in older patients are frequent, but the identification of patients at risk of falling is challenging. The aim of this study was to improve the identification of high-risk patients. Therefore, a simplified screening tool was developed, validated, and compared to the STRATIFY predictive accuracy. Retrospective analysis of 4,735 patients; evaluation of the predictive accuracy of STRATIFY and its single risk factors, as well as age, gender and psychotropic medication; splitting the dataset into a learning and a validation sample for modelling fall-risk screening and independent, temporal validation. Geriatric clinic at an academic teaching hospital in Hamburg, Germany. 4,735 hospitalised patients ≥65 years. Sensitivity, specificity, positive and negative predictive value, odds ratios, Youden index and the rates of falls and fallers were calculated. There were 10.7% fallers, and the fall rate was 7.9/1,000 hospital days. In the learning sample, mental alteration (OR 2.9), fall history (OR 2.1), and insecure mobility (Barthel-Index items 'transfer' + 'walking' score = 5, 10 or 15) (OR 2.3) had the strongest association with falls. The LUCAS Fall-Risk Screening uses these risk factors, and patients with ≥2 risk factors constituted the high-risk group (30.9%). In the validation sample, STRATIFY had SENS 56.8, SPEC 59.6, PPV 13.5 and NPV 92.6 vs. the LUCAS Fall-Risk Screening with SENS 46.0, SPEC 71.1, PPV 14.9 and NPV 92.3. Both the STRATIFY and the LUCAS Fall-Risk Screening showed comparable results in defining a high-risk group. Impaired mobility and cognitive status were closely associated with falls. The results underscore the importance of functional status as an essential fall-risk factor in older hospitalised patients.
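    The ≥2-of-3 decision rule quoted above can be sketched in a few lines. This is an illustrative reading of the abstract only: the function names and the Barthel-Index encoding are assumptions, not code from the study.

    ```python
    # Sketch of the LUCAS-style fall-risk screening rule: a patient is
    # high-risk when at least two of the three risk factors (mental
    # alteration, fall history, insecure mobility) are present.
    # Names and the Barthel-Index encoding are illustrative assumptions.

    def insecure_mobility(transfer: int, walking: int) -> bool:
        """Barthel-Index items 'transfer' + 'walking' summing to 5, 10 or 15."""
        return (transfer + walking) in (5, 10, 15)

    def lucas_high_risk(mental_alteration: bool, fall_history: bool,
                        transfer: int, walking: int) -> bool:
        factors = [mental_alteration, fall_history,
                   insecure_mobility(transfer, walking)]
        return sum(factors) >= 2

    # Example: fall history plus insecure mobility -> high-risk group
    print(lucas_high_risk(False, True, transfer=5, walking=5))  # True
    ```

    In the study's terms, roughly 31% of patients screened this way fell into the high-risk group.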

  17. A Cluster-Randomized, Controlled Trial of a Simplified Multifaceted Management Program for Individuals at High Cardiovascular Risk (SimCard Trial) in Rural Tibet, China, and Haryana, India.

    PubMed

    Tian, Maoyi; Ajay, Vamadevan S; Dunzhu, Danzeng; Hameed, Safraj S; Li, Xian; Liu, Zhong; Li, Cong; Chen, Hao; Cho, KaWing; Li, Ruilai; Zhao, Xingshan; Jindal, Devraj; Rawal, Ishita; Ali, Mohammed K; Peterson, Eric D; Ji, Jiachao; Amarchand, Ritvik; Krishnan, Anand; Tandon, Nikhil; Xu, Li-Qun; Wu, Yangfeng; Prabhakaran, Dorairaj; Yan, Lijing L

    2015-09-01

    In rural areas in China and India, the cardiovascular disease burden is high but economic and healthcare resources are limited. This study (the Simplified Cardiovascular Management Study [SimCard]) aims to develop and evaluate a simplified cardiovascular management program delivered by community health workers with the aid of a smartphone-based electronic decision support system. The SimCard study was a yearlong cluster-randomized, controlled trial conducted in 47 villages (27 in China and 20 in India). Recruited for the study were 2086 individuals with high cardiovascular risk (aged ≥40 years with a self-reported history of coronary heart disease, stroke, diabetes mellitus, and/or measured systolic blood pressure ≥160 mm Hg). Participants in the intervention villages were managed by community health workers through an Android-powered app on a monthly basis, focusing on the use of 2 medications and 2 lifestyle modifications. In comparison with the control group, the intervention group had a 25.5% (P<0.001) higher net increase in the primary outcome, the proportion of patient-reported antihypertensive medication use, between pre- and post-intervention. There were also significant differences in certain secondary outcomes: aspirin use (net difference: 17.1%; P<0.001) and systolic blood pressure (-2.7 mm Hg; P=0.04). However, no significant changes were observed in the lifestyle factors. The intervention was culturally tailored, and country-specific results revealed important differences between the regions. The results indicate that the simplified cardiovascular management program improved the quality of primary care and clinical outcomes in resource-poor settings in China and India. Larger trials in more places are needed to ascertain the potential impacts on mortality and morbidity outcomes. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01503814. © 2015 American Heart Association, Inc.

  18. Assessing the Risk of Aquifer Salinization in a Large-Scale Coastal Irrigation Scheme in Southern Italy

    NASA Astrophysics Data System (ADS)

    Zaccaria, Daniele; Passarella, Giuseppe; D'Agostino, Daniela; Giordano, Raffaele; Sandoval-Solis, Samuel; Maggi, Sabino; Bruno, Delia; Foglia, Laura

    2017-04-01

    A research study was conducted on a coastal irrigated agricultural area of southern Italy to assess the risks of aquifer degradation likely resulting from intensive groundwater pumping from individual farm wells and reduced aquifer recharge. Information was collected from both farmers and delivery system operators during a survey conducted in 2012, revealing that farmers depend mainly on groundwater with the aim of achieving flexible irrigation management, as opposed to the rigid rotational delivery service of surface water supply provided by the local water management agency. The study area is intensively farmed by small land-holding growers with high-value micro-irrigated horticultural crops. Our team appraised the soil and aquifer degradation hazards using a simplified procedure for environmental risk assessment that allowed us to identify the risk-generating processes, evaluate the magnitude of impacts, and estimate the overall risk significance. We also collected stakeholders' perceptions on agricultural water management and use through field interviews, whereas parallel investigations revealed a significant aquifer salinity increase in recent years. As a final step, some preliminary risk mitigation options were appraised by exploring the growers' response to possible changes of irrigation deliveries by the water management agency. The present study integrated multi-annual observations, data interpretation, and modelling efforts, which jointly enabled the analysis of complex water management scenarios and the development of informed decisions. Keywords: Environmental risk assessment, Fuzzy cognitive maps, Groundwater degradation, Seawater intrusion

  19. Using metal-ligand binding characteristics to predict metal toxicity: quantitative ion character-activity relationships (QICARs).

    PubMed Central

    Newman, M C; McCloskey, J T; Tatara, C P

    1998-01-01

    Ecological risk assessment can be enhanced with predictive models for metal toxicity. Modeling of published data was done under the simplifying assumption that intermetal trends in toxicity reflect relative metal-ligand complex stabilities. This idea has been invoked successfully since 1904 but has yet to be applied widely in quantitative ecotoxicology. Intermetal trends in toxicity were successfully modeled with ion characteristics reflecting metal binding to ligands for a wide range of effects. Most models were useful for predictive purposes based on an F-ratio criterion and cross-validation, but anomalous predictions did occur if speciation was ignored. In general, models for metals with the same valence (i.e., divalent metals) were better than those combining mono-, di-, and trivalent metals. The softness parameter (sigma p) and the absolute value of the log of the first hydrolysis constant (|log KOH|) were especially useful in model construction. Also, delta E0 contributed substantially to several of the two-variable models. In contrast, quantitative attempts to predict metal interactions in binary mixtures based on metal-ligand complex stabilities were not successful. PMID:9860900
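    A QICAR is, at its core, a regression of a toxicity endpoint on ion characteristics such as the softness parameter. A minimal one-variable sketch follows; the data pairs are synthetic placeholders, not the published values.

    ```python
    # Sketch of a QICAR-style fit: regress a log toxicity endpoint on an
    # ion characteristic (here the softness parameter sigma_p).
    # All data values below are synthetic, for illustration only.

    def least_squares(x, y):
        """Ordinary least squares for y = a + b*x; returns (a, b)."""
        n = len(x)
        xm, ym = sum(x) / n, sum(y) / n
        b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
            / sum((xi - xm) ** 2 for xi in x)
        return ym - b * xm, b

    softness = [0.08, 0.10, 0.13, 0.15, 0.18]   # sigma_p (synthetic)
    log_ec50 = [2.1, 1.8, 1.35, 1.05, 0.6]      # log EC50 (synthetic)
    a, b = least_squares(softness, log_ec50)
    print(round(a, 2), round(b, 2))  # b < 0: softer ions, lower EC50, more toxic
    ```

    The published models extend this idea to two-variable fits (e.g., adding delta E0) and judge usefulness by an F-ratio criterion and cross-validation.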

  20. UAS for Archaeology - New Perspectives on Aerial Documentation

    NASA Astrophysics Data System (ADS)

    Fallavollita, P.; Balsi, M.; Esposito, S.; Melis, M. G.; Milanese, M.; Zappino, L.

    2013-08-01

    In this work some Unmanned Aerial Systems (UAS) applications are discussed and applied to archaeological site survey and 3D model reconstruction. Interesting results are shown for three important sites of different ages in northern Sardinia (Italy). An easy, simplified procedure is proposed that permits the adoption of multi-rotor aircraft for daily archaeological survey during excavation and documentation, involving the state of the art in UAS design, flight control systems, high-definition sensor cameras and innovative photogrammetric software tools. Very high quality 3D model results are shown and discussed, along with how they have simplified the archaeologists' work and decisions.

  1. An exact solution of a simplified two-phase plume model [for solid propellant rockets]

    NASA Technical Reports Server (NTRS)

    Wang, S.-Y.; Roberts, B. B.

    1974-01-01

    An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make an upper-bound estimate of the heat flux and pressure loads due to particle impingement on objects in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.

  2. A computationally efficient description of heterogeneous freezing: A simplified version of the Soccer ball model

    NASA Astrophysics Data System (ADS)

    Niedermeier, Dennis; Ervens, Barbara; Clauss, Tina; Voigtländer, Jens; Wex, Heike; Hartmann, Susan; Stratmann, Frank

    2014-01-01

    In a recent study, the Soccer ball model (SBM) was introduced for modeling and/or parameterizing heterogeneous ice nucleation processes. The model applies classical nucleation theory. It allows for a consistent description of both apparently singular and stochastic ice nucleation behavior, by distributing contact angles over the nucleation sites of a particle population assuming a Gaussian probability density function. The original SBM utilizes the Monte Carlo technique, which hampers its usage in atmospheric models, as fairly time-consuming calculations must be performed to obtain statistically significant results. Thus, we have developed a simplified and computationally more efficient version of the SBM. We successfully used the new SBM to parameterize experimental nucleation data of, e.g., bacterial ice nucleation. Both SBMs give identical results; however, the new model is computationally less expensive as confirmed by cloud parcel simulations. Therefore, it is a suitable tool for describing heterogeneous ice nucleation processes in atmospheric models.
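    The core idea of the simplified SBM, replacing Monte Carlo sampling with a direct average of a nucleation probability over a Gaussian contact-angle distribution, can be sketched numerically. All parameter values below (a dimensionless energy-barrier scale, prefactor, time) are illustrative placeholders, not the paper's parameterization.

    ```python
    import math

    # Illustrative sketch of the simplified Soccer ball model idea:
    # average a classical-nucleation-theory freezing probability over a
    # Gaussian distribution of contact angles theta ~ N(mu, sigma).
    # Parameter values are made up for illustration.

    def shape_factor(theta):
        """CNT geometric factor f(theta) for heterogeneous nucleation."""
        c = math.cos(theta)
        return (2.0 + c) * (1.0 - c) ** 2 / 4.0

    def frozen_fraction(mu, sigma, dg_hom=70.0, prefactor=1e10, t=1.0, n=2000):
        """Average 1 - exp(-J(theta)*t) over a Gaussian contact-angle pdf."""
        total, weight = 0.0, 0.0
        for i in range(n):
            theta = mu - 4 * sigma + 8 * sigma * (i + 0.5) / n
            if not (0.0 < theta < math.pi):
                continue
            p = math.exp(-0.5 * ((theta - mu) / sigma) ** 2)
            rate = prefactor * math.exp(-dg_hom * shape_factor(theta))
            total += p * (1.0 - math.exp(-rate * t))
            weight += p
        return total / weight

    # Lower mean contact angle -> better ice nucleator -> larger frozen fraction
    print(frozen_fraction(mu=1.0, sigma=0.1) > frozen_fraction(mu=2.0, sigma=0.1))
    ```

    Because the average is a one-dimensional quadrature rather than a stochastic simulation, it is cheap enough for use inside cloud parcel models.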

  3. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    NASA Astrophysics Data System (ADS)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Ret = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows improved reliability of algebraic scalar flux models in predicting both the mean concentration and the plume structure. Since algebraic flux models do not substantially increase the computational effort, the results indicate that the use of tensorial diffusivity can be a promising tool for dispersion simulations in the urban environment.

  4. [Work-related musculo-skeletal disorders in apiculture: a biomechanical approach to the risk assessment].

    PubMed

    Maina, G; Sorasio, D; Rossi, F; Zito, D; Perrelli, E; Baracco, A

    2012-01-01

    Risk assessment in apiculture poses methodological problems due to discontinuities and variability of exposure. This study analyzes a comprehensive set of potential determinants influencing the biomechanical risks in apiarists, using recognized technical standards to ensure technical-scientific accuracy; it offers a simplified methodological toolkit to be used in the risk assessment process and provides a user-friendly computer application. The toolkit asks the beekeeper to specify, for each month, the total number of hours worked and their distribution among different tasks. As a result, the application calculates the average risk index and the peak risk index. The evidence of the study indicates that biomechanical risks in this occupational area persist for some tasks even when exposure time is reduced.
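    The average/peak calculation described above can be sketched as follows. Task names and risk scores are hypothetical; the actual scoring follows recognized technical standards not reproduced in the abstract.

    ```python
    # Illustrative sketch of the toolkit's calculation: given monthly
    # hours per task and a per-task risk score, compute a time-weighted
    # average risk index and the peak (worst-month) index.
    # Task names and scores below are hypothetical placeholders.

    TASK_RISK = {"hive inspection": 2.0, "honey harvest": 4.5, "hive transport": 6.0}

    def month_index(hours_by_task):
        """Time-weighted risk index for one month."""
        total = sum(hours_by_task.values())
        if total == 0:
            return 0.0
        return sum(TASK_RISK[t] * h for t, h in hours_by_task.items()) / total

    def year_indices(months):
        """Return (average index, peak index) over the worked months."""
        pairs = [(month_index(m), m) for m in months]
        worked = [i for i, m in pairs if sum(m.values()) > 0]
        return sum(worked) / len(worked), max(worked)

    year = [
        {"hive inspection": 10},                      # spring
        {"hive inspection": 5, "honey harvest": 20},  # summer
        {"hive transport": 8},                        # autumn
    ]
    avg, peak = year_indices(year)
    print(avg, peak)  # the peak comes from the transport-heavy month
    ```

    Separating the average from the peak captures exactly the point the abstract makes: low average exposure can coexist with tasks whose biomechanical risk remains high.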

  5. Application of Molecular Typing Results in Source Attribution Models: The Case of Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) of Salmonella Isolates Obtained from Integrated Surveillance in Denmark.

    PubMed

    de Knegt, Leonardo V; Pires, Sara M; Löfström, Charlotta; Sørensen, Gitte; Pedersen, Karl; Torpdahl, Mia; Nielsen, Eva M; Hald, Tine

    2016-03-01

    Salmonella is an important cause of bacterial foodborne infections in Denmark. To identify the main animal-food sources of human salmonellosis, risk managers have relied on a routine application of a microbial subtyping-based source attribution model since 1995. In 2013, multiple locus variable number tandem repeat analysis (MLVA) substituted phage typing as the subtyping method for surveillance of S. Enteritidis and S. Typhimurium isolated from animals, food, and humans in Denmark. The purpose of this study was to develop a modeling approach applying a combination of serovars, MLVA types, and antibiotic resistance profiles for Salmonella source attribution, and to assess the utility of the results for food safety decision makers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude that the loci schemes STTR5/STTR10/STTR3 for S. Typhimurium and SE9/SE5/SE2/SE1/SE3 for S. Enteritidis can be used in microbial subtyping-based source attribution models. Based on the results, we discuss that an adjustment of the discriminatory level of the subtyping method applied will often be required to fit the purpose of the study and the available data. The issues discussed are also considered highly relevant when applying, e.g., extended multi-locus sequence typing or next-generation sequencing techniques. © 2015 Society for Risk Analysis.

  6. Single frequency GPS measurements in real-time artificial satellite orbit determination

    NASA Astrophysics Data System (ADS)

    Chiaradia, A. P. M.; Kuga, H. K.; Prado, A. F. B. A.

    2003-07-01

    A simplified and compact algorithm with low computational cost, providing an accuracy of around tens of meters for artificial satellite orbit determination in real time and on board, is developed in this work. The state estimation method is the extended Kalman filter. Cowell's method is used to propagate the state vector, through a simple fourth-order Runge-Kutta numerical integrator with fixed step size. The modeled forces are due to the geopotential up to order and degree 50 of the JGM-2 model. To time-update the state error covariance matrix, a simplified force model is considered. In other words, in computing the state transition matrix, the effect of J2 (Earth flattening) is considered analytically, which dramatically reduces the processing time. In the measurement model, the single-frequency GPS pseudorange is used, considering the effects of the ionospheric delay, clock offsets of the GPS and user satellites, and relativistic effects. To validate this model, real data from the Topex/Poseidon satellite are used, and the results are compared with the Topex/Poseidon Precision Orbit Ephemeris (POE) generated by NASA/JPL for several test cases. It is concluded that this compact algorithm enables accuracies of tens of meters with such a simplified force model, an analytical approach for computing the transition matrix, and a cheap GPS receiver providing single-frequency pseudorange measurements.
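    The fixed-step fourth-order Runge-Kutta propagation mentioned above can be illustrated with plain two-body point-mass dynamics; the abstract's geopotential model up to degree and order 50 is far beyond this sketch, and all values here are illustrative.

    ```python
    import math

    # Minimal sketch of fixed-step RK4 propagation of a satellite state,
    # here for planar two-body dynamics only. Units: km, s.
    # State = (x, y, vx, vy).

    MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

    def deriv(state):
        x, y, vx, vy = state
        r3 = (x * x + y * y) ** 1.5
        return (vx, vy, -MU * x / r3, -MU * y / r3)

    def rk4_step(state, h):
        k1 = deriv(state)
        k2 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = deriv(tuple(s + h * k for s, k in zip(state, k3)))
        return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                     for s, a, b, c, d in zip(state, k1, k2, k3, k4))

    # Circular orbit at 7000 km: speed sqrt(MU/r). Propagate ~one orbit
    # and check that the radius is conserved to high accuracy.
    r0 = 7000.0
    state = (r0, 0.0, 0.0, math.sqrt(MU / r0))
    for _ in range(600):           # 600 steps of 10 s
        state = rk4_step(state, 10.0)
    radius = math.hypot(state[0], state[1])
    print(abs(radius - r0))        # small drift for a fixed-step RK4
    ```

    In the actual filter this propagation supplies the predicted state, while the analytical J2-only transition matrix time-updates the covariance cheaply.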

  7. A Simplified Land Model (SLM) for use in cloud-resolving models: Formulation and evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Jungmin M.; Khairoutdinov, Marat

    2015-09-01

    A Simplified Land Model (SLM) that uses a minimalist set of parameters with a single-layer vegetation and multilevel soil structure has been developed, distinguishing canopy and undercanopy energy budgets. The primary motivation has been to design a land model for use in the System for Atmospheric Modeling (SAM) cloud-resolving model to study land-atmosphere interactions with a sufficient level of realism. SLM uses simplified expressions for the transport of heat, moisture, momentum, and radiation in the soil-vegetation system. The SLM performance has been evaluated over several land surface types using summertime tower observations of micrometeorological and biophysical data from three AmeriFlux sites, which include grassland, cropland, and deciduous-broadleaf forest. In general, the SLM captures the observed diurnal cycle of the surface energy budget and soil temperature reasonably well, although reproducing the evolution of soil moisture, especially after rain events, has been challenging. The SLM coupled to SAM has been applied to a case of summertime shallow cumulus convection over land based on the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) observations. The simulated surface latent and sensible heat fluxes as well as the evolution of thermodynamic profiles in the convective boundary layer agree well with estimates based on the observations. Sensitivity of atmospheric boundary layer development to soil moisture and different land cover types has also been examined.

  8. Limitations of the commonly used simplified laterally uniform optical fiber probe-tissue interface in Monte Carlo simulations of diffuse reflectance

    PubMed Central

    Naglič, Peter; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran

    2015-01-01

    Light propagation models often simplify the interface between the optical fiber probe tip and tissue to a laterally uniform boundary with mismatched refractive indices. Such simplification neglects the precise optical properties of the commonly used probe tip materials, e.g. stainless steel or black epoxy. In this paper, we investigate the limitations of the laterally uniform probe-tissue interface in Monte Carlo simulations of diffuse reflectance. In comparison to a realistic probe-tissue interface that accounts for the layout and properties of the probe tip materials, the simplified laterally uniform interface is shown to introduce significant errors into the simulated diffuse reflectance. PMID:26504647

  9. The new hospice compliance plan: defining and addressing risk areas. Part 3.

    PubMed

    Jones, D H; Woods, K

    2000-07-01

    The recently released OIG guidelines to ensure compliance with federal and state statutes, rules, and regulations, and private-payor health care program requirements provide a blueprint for developing such programs. This is the last of three installments that focus specifically on the 28 risk areas identified in the guidance and offer strategies for incorporating them in a hospice compliance program. The authors have organized the 28 risk areas under 9 topic domains to simplify the task of tackling the guidance. This article covers the areas of nursing home care, marketing, and Conditions of Participation.

  10. Incorporating Hydroepidemiology into the Epidemia Malaria Early Warning System

    NASA Astrophysics Data System (ADS)

    Wimberly, M. C.; Merkord, C. L.; Henebry, G. M.; Senay, G. B.

    2014-12-01

    Early warning of the timing and locations of malaria epidemics can facilitate the targeting of resources for prevention and emergency response. In response to this need, we are developing the Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) computer system. EPIDEMIA incorporates software for capturing, processing, and integrating environmental and epidemiological data from multiple sources; data assimilation techniques that continually update models and forecasts; and a web-based interface that makes the resulting information available to public health decision makers. The system will enable forecasts that incorporate lagged responses to environmental risk factors as well as information about recent trends in malaria cases. Because the egg, larval, and pupal stages of mosquito development occur in aquatic habitats, information about the spatial and temporal distributions of stagnant water bodies is critical for modeling malaria risk. Potential sources of hydrological data include satellite-derived rainfall estimates, evapotranspiration (ET) calculated using a simplified surface energy balance model, and estimates of soil moisture and fractional water cover from passive microwave radiometry. We used partial least squares regression to analyze and visualize seasonal patterns of these variables in relation to malaria cases using data from 49 districts in the Amhara region of Ethiopia. Seasonal patterns of rainfall were strongly associated with the incidence and seasonality of malaria across the region, and model fit was improved by the addition of remotely-sensed ET and soil moisture variables. The results highlight the importance of remotely-sensed hydrological data for modeling malaria risk in this region and emphasize the value of an ensemble approach that utilizes multiple sources of information about precipitation and land surface wetness. 
These variables will be incorporated into the forecasting models at the core of the EPIDEMIA system, and future model development will involve a cycle of continuous forecasting, accuracy assessment, and model refinement.
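    The partial least squares step used above can be sketched with a hand-rolled single-component PLS1: find the predictor direction most covariant with the response, then regress on the resulting score. The data below are synthetic placeholders (a "rainfall-driven incidence" toy), not the study's data.

    ```python
    # Toy sketch of single-component PLS1, the building block of the
    # partial least squares regression mentioned above.
    # All data values are synthetic, for illustration only.

    def pls1_one_component(X, y):
        """Return (weights w, coefficients b, column means, y mean)."""
        n, p = len(X), len(X[0])
        xm = [sum(row[j] for row in X) / n for j in range(p)]
        ym = sum(y) / n
        Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
        yc = [v - ym for v in y]
        # weight vector: covariance of each predictor with the response
        w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]
        # scores, then regression of y on the score
        t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
        q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
        b = [q * wj for wj in w]
        return w, b, xm, ym

    def predict(x, model):
        w, b, xm, ym = model
        return ym + sum(bj * (xj - mj) for bj, xj, mj in zip(b, x, xm))

    # Synthetic example: "incidence" driven mainly by the first predictor
    X = [[10, 1], [20, 2], [30, 1], [40, 2], [50, 1]]
    y = [11, 21, 31, 41, 51]
    model = pls1_one_component(X, y)
    print(round(predict([35, 1], model), 1))  # -> 36.0
    ```

    Full PLS extracts several such components by deflating X and y; the study used it mainly to analyze and visualize seasonal environmental patterns against malaria cases.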

  11. Test of a simplified modeling approach for nitrogen transfer in agricultural subsurface-drained catchments

    NASA Astrophysics Data System (ADS)

    Henine, Hocine; Julien, Tournebize; Jaan, Pärn; Ülo, Mander

    2017-04-01

    In agricultural areas, the nitrogen (N) pollution load to surface waters depends on land use, agricultural practices, and harvested N output, as well as the hydrology and climate of the catchment. Most N transfer models require large, complex data sets, which are generally difficult to collect at larger scales (>km2). The main objective of this study is to carry out hydrological and geochemical modelling using a simplified data set (land use/crop, fertilizer input, N losses from plots). The modelling approach was tested in the subsurface-drained Orgeval catchment (Paris Basin, France) based on the following assumptions: subsurface tile drains are considered as a giant lysimeter system, and the N concentration at drain outlets is representative of agricultural practices upstream. Analysis of the observed N load (90% of total N) shows that 62% is exported during the winter. We considered the prewinter nitrate (NO3) pool (PWNP) in soils at the beginning of the hydrological drainage season as a driving factor for N losses. PWNP results from the part of NO3 not used by crops or from the mineralization of organic matter during the preceding summer and autumn. Under these assumptions, we used PWNP as simplified input data for the modelling of N transport. Thus, NO3 losses are mainly influenced by the denitrification capacity of soils and stream water. The well-known HYPE model was used to perform water and N loss modelling. The hydrological simulation was calibrated with observation data at different sub-catchments. We performed a hydrograph separation validated on thermal and isotopic tracer studies and the general knowledge of the behaviour of the Orgeval catchment. Our results show a good correlation between the model and the observations (a Nash-Sutcliffe coefficient of 0.75 for water discharge and 0.7 for N flux). Likewise, comparison of calibrated PWNP values with the results from a field survey (annual PWNP campaign) showed a significant positive correlation.
One can conclude that the simplified modelling approach using PWNP as a driving factor for the evaluation of N losses from drained agricultural catchments gave satisfactory results, and we propose this approach for wider use.
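    The fit statistics quoted above are Nash-Sutcliffe coefficients, which compare model residuals against the variance of the observations (1 = perfect fit, 0 = no better than predicting the observed mean). The definition is simple to compute:

    ```python
    # Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.

    def nash_sutcliffe(observed, simulated):
        mean_obs = sum(observed) / len(observed)
        sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        var = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - sse / var

    obs = [1.0, 2.0, 3.0, 4.0, 5.0]
    print(nash_sutcliffe(obs, obs))        # 1.0 for a perfect model
    print(nash_sutcliffe(obs, [3.0] * 5))  # 0.0 for the mean predictor
    ```

    On this scale, the study's 0.75 for discharge and 0.7 for N flux indicate a substantially better fit than the observed mean.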

  12. A simplified computational memory model from information processing.

    PubMed

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper proposes a computational model for memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to express neurons or brain cortices based on biological and graph theories, and an intra-modular network is developed with the modelling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.

  13. A simplified model for dynamics of cell rolling and cell-surface adhesion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cimrák, Ivan, E-mail: ivan.cimrak@fri.uniza.sk

    2015-03-10

    We propose a three-dimensional model for the adhesion and rolling of biological cells on surfaces. We study cells moving in shear flow above a wall to which they can adhere via specific receptor-ligand bonds based on receptors from the selectin as well as the integrin family. The computational fluid dynamics are governed by the lattice-Boltzmann method. The movement and the deformation of the cells are described by the immersed boundary method. Both methods are fully coupled by implementing a two-way fluid-structure interaction. The adhesion mechanism is modelled by adhesive bonds including stochastic rules for their creation and rupture. We explore a simplified model with a dissociation rate independent of the length of the bonds. We demonstrate that this model is able to reproduce mesoscopic properties, such as the velocity of rolling cells.

  14. Pececillo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Neil; Jibben, Zechariah; Brady, Peter

    2017-06-28

    Pececillo is a proxy-app for the open source Truchas metal processing code (LA-CC-15-097). It implements many of the physics models used in Truchas: free-surface, incompressible Navier-Stokes fluid dynamics (e.g., water waves); heat transport, material phase change, and view factor thermal radiation; species advection-diffusion; quasi-static, elastic/plastic solid mechanics with contact; and electromagnetics (Maxwell's equations). The models are simplified versions that retain the fundamental computational complexity of the Truchas models while omitting many non-essential features and modeling capabilities. The purpose is to expose Truchas algorithms in a greatly simplified context where computer science problems related to parallel performance on advanced architectures can be more easily investigated. While Pececillo is capable of performing simulations representative of typical Truchas metal casting, welding, and additive manufacturing simulations, it lacks many of the modeling capabilities needed for real applications.

  15. Simplifications in modelling of dynamical response of coupled electro-mechanical system

    NASA Astrophysics Data System (ADS)

    Darula, Radoslav; Sorokin, Sergey

    2016-12-01

    The choice of the most suitable model of an electro-mechanical system depends on many variables, such as the scale of the system, the type and frequency range of its operation, or power requirements. The article focuses on a model of the electromagnetic element used in a passive regime (no feedback loops are assumed), and a general lumped-parameter model (a conventional mass-spring-damper system coupled to an electric circuit consisting of a resistance, an inductance and a capacitance) is compared with its simplified version, where the full RLC circuit is replaced with its RL simplification, i.e. the capacitance of the electric system is neglected and just its inductance and resistance are considered. From the comparison of the dynamical responses of these systems, the range of applicability of the simplified model is assessed for free as well as forced vibration.
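    The electrical side of the comparison above can be sketched by contrasting the impedance of a series RLC branch with its RL simplification. Component values are illustrative only, not taken from the article.

    ```python
    # Sketch: impedance of a full series RLC branch vs. its RL
    # simplification (capacitance neglected). Values are illustrative.

    R, L, C = 10.0, 1e-3, 1e-3  # ohm, henry, farad

    def z_rlc(omega):
        return R + 1j * omega * L + 1.0 / (1j * omega * C)

    def z_rl(omega):
        return R + 1j * omega * L

    def relative_error(omega):
        return abs(z_rlc(omega) - z_rl(omega)) / abs(z_rlc(omega))

    # The RL simplification improves as the capacitive reactance
    # 1/(omega*C) becomes negligible next to R and omega*L.
    print(relative_error(10.0), relative_error(1e4))
    ```

    This is the frequency-range dependence the abstract alludes to: whether the RL simplification is acceptable depends on where the operating band sits relative to the capacitive reactance.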

  16. Theoretical and experimental investigation of architected core materials incorporating negative stiffness elements

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Ming; Keefe, Andrew; Carter, William B.; Henry, Christopher P.; McKnight, Geoff P.

    2014-04-01

    Structural assemblies incorporating negative stiffness elements have been shown to provide both tunable damping properties and simultaneous high stiffness and damping over prescribed displacement regions. In this paper we explore the design space for negative stiffness based assemblies using analytical modeling combined with finite element analysis. A simplified spring model demonstrates the effects of element stiffness, geometry, and preloads on the damping and stiffness performance. Simplified analytical models were validated for realistic structural implementations through finite element analysis. A series of complementary experiments was conducted to compare with modeling and determine the effects of each element on the system response. The measured damping performance follows the theoretical predictions obtained by analytical modeling. We applied these concepts to a novel sandwich core structure that exhibited combined stiffness and damping properties 8 times greater than existing foam core technologies.

  17. Temperature and solute-transport simulation in streamflow using a Lagrangian reference frame

    USGS Publications Warehouse

    Jobson, Harvey E.

    1980-01-01

    A computer program for simulating one-dimensional, unsteady temperature and solute transport in a river has been developed and documented for general use. The solution approach to the convective-diffusion equation uses a moving reference frame (Lagrangian) which greatly simplifies the mathematics of the solution procedure and dramatically reduces errors caused by numerical dispersion. The model documentation is presented as a series of four programs of increasing complexity. The conservative transport model can be used to route a single conservative substance. The simplified temperature model is used to predict water temperature in rivers when only temperature and windspeed data are available. The complete temperature model is highly accurate but requires rather complete meteorological data. Finally, the 10-parameter model can be used to route as many as 10 interacting constituents through a river reach. (USGS)

  18. A MODELLING FRAMEWORK FOR MERCURY CYCLING IN LAKE MICHIGAN

    EPA Science Inventory

    A time-dependent mercury model was developed to describe mercury cycling in Lake Michigan. The model addresses dynamic relationships between net mercury loadings and the resulting concentrations of mercury species in the water and sediment. The simplified predictive modeling fram...

  19. Lie integrable cases of the simplified multistrain/two-stream model for tuberculosis and dengue fever

    NASA Astrophysics Data System (ADS)

    Nucci, M. C.; Leach, P. G. L.

    2007-09-01

    We apply the techniques of Lie's symmetry analysis to a caricature of the simplified multistrain model of Castillo-Chavez and Feng [C. Castillo-Chavez, Z. Feng, To treat or not to treat: The case of tuberculosis, J. Math. Biol. 35 (1997) 629-656] for the transmission of tuberculosis and the coupled two-stream vector-based model of Feng and Velasco-Hernandez [Z. Feng, J.X. Velasco-Hernandez, Competitive exclusion in a vector-host model for the dengue fever, J. Math. Biol. 35 (1997) 523-544] to identify the combinations of parameters which lead to the existence of nontrivial symmetries. In particular we identify those combinations which lead to the possibility of the linearization of the system and provide the corresponding solutions. Many instances of additional symmetry are analyzed.

  20. Testing a thermo-chemo-hydro-geomechanical model for gas hydrate-bearing sediments using triaxial compression laboratory experiments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Helmig, R.; Wohlmuth, B.

    2017-09-01

    Natural gas hydrates are considered a potential resource for gas production on industrial scales. Gas hydrates contribute to the strength and stiffness of the hydrate-bearing sediments. During gas production, the geomechanical stability of the sediment is compromised. Due to the potential geotechnical risks and process management issues, the mechanical behavior of the gas hydrate-bearing sediments needs to be carefully considered. In this study, we describe a coupling concept that simplifies the mathematical description of the complex interactions occurring during gas production by isolating the effects of sediment deformation and hydrate phase changes. Central to this coupling concept is the assumption that the soil grains form the load-bearing solid skeleton, while the gas hydrate enhances the mechanical properties of this skeleton. We focus on testing this coupling concept in capturing the overall impact of geomechanics on gas production behavior through numerical simulation of a high-pressure isotropic compression experiment combined with methane hydrate formation and dissociation. We consider a linear-elastic stress-strain relationship because it is uniquely defined and easy to calibrate. Since, in reality, the geomechanical response of the hydrate-bearing sediment is typically inelastic and is characterized by a significant shear-volumetric coupling, we control the experiment very carefully in order to keep the sample deformations small and well within the assumptions of poroelasticity. The closely coordinated experimental and numerical procedures enable us to validate the proposed simplified geomechanics-to-flow coupling, and set an important precursor toward enhancing our coupled hydro-geomechanical hydrate reservoir simulator with more suitable elastoplastic constitutive models.
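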

  1. Risk-based planning analysis for a single levee

    NASA Astrophysics Data System (ADS)

    Hui, Rui; Jachens, Elizabeth; Lund, Jay

    2016-04-01

    Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
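
    The optimization described above - minimize expected annual flood damage plus annualized construction cost over levee height and crown width - can be sketched with entirely hypothetical numbers. The damage value, water-level distribution, exponential fragility curve, and unit costs below are invented placeholders, not the study's data:

```python
import math
from itertools import product

# All numbers below are hypothetical placeholders, not the study's data.
DAMAGE = 5_000_000.0  # loss if the levee fails ($/event)
WATER_LEVELS = [(2.0, 0.50), (3.0, 0.30), (4.0, 0.15), (5.0, 0.05)]  # (annual max level m, prob.)

def expected_annual_cost(height, width):
    """Expected annual flood damage plus annualized construction cost."""
    damage = 0.0
    for level, prob in WATER_LEVELS:
        if level > height:
            p_fail = 1.0               # overtopping: depends on water level and height only
        else:
            p_fail = math.exp(-width)  # toy through-seepage fragility curve in crown width
        damage += prob * p_fail * DAMAGE
    construction = 40_000.0 * height + 60_000.0 * width  # annualized, $/yr
    return damage + construction

def optimal_design(heights, widths):
    """Grid search for the design minimizing expected annual total cost."""
    return min(product(heights, widths), key=lambda hw: expected_annual_cost(*hw))

best = optimal_design([2.0, 3.0, 4.0, 5.0, 6.0], [1.0, 2.0, 4.0, 8.0])
```

    Even in this toy setting the optimum stops short of eliminating failure risk: past some point, extra height or width costs more per year than the residual damage it avoids.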

  2. Simplified cost models for prefeasibility mineral evaluations

    USGS Publications Warehouse

    Camm, Thomas W.

    1991-01-01

    This report contains 2 open pit models, 6 underground mine models, 11 mill models, and cost equations for access roads, power lines, and tailings ponds. In addition, adjustment factors for variation in haulage distances are provided for open pit models and variation in mining depths for underground models.
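
    Prefeasibility cost models of this kind are typically capacity-based power laws with multiplicative adjustment factors. The sketch below shows only the general form; the coefficients and the 2%-per-kilometre haulage factor are invented placeholders, not values from the report:

```python
def capital_cost(capacity_tpd, a=400_000.0, b=0.6):
    """Power-law prefeasibility estimate: cost = a * capacity**b.

    b < 1 encodes economies of scale; a and b are placeholder values,
    not coefficients fitted in the report.
    """
    return a * capacity_tpd ** b

def adjusted_cost(base_cost, haul_km, factor_per_km=0.02):
    """Hypothetical haulage-distance adjustment: +2% of base cost per km."""
    return base_cost * (1.0 + factor_per_km * haul_km)

base = capital_cost(1000.0)              # tonnes-per-day mill, illustrative
total = adjusted_cost(base, haul_km=5.0)
```

    The sub-unity exponent is what makes doubling plant capacity cost less than twice as much, the usual economy-of-scale behavior these models encode.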

  3. "Bohr's Atomic Model."

    ERIC Educational Resources Information Center

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  4. A practical nonlocal model for heat transport in magnetized laser plasmas

    NASA Astrophysics Data System (ADS)

    Nicolaï, Ph. D.; Feugeas, J.-L. A.; Schurtz, G. P.

    2006-03-01

    A model of nonlocal transport for multidimensional radiation magnetohydrodynamics codes is presented. In laser produced plasmas, it is now believed that the heat transport can be strongly modified by the nonlocal nature of the electron conduction. Other mechanisms, such as self-generated magnetic fields, may also affect the heat transport. The model described in this work, based on simplified Fokker-Planck equations aims at extending the model of G. Schurtz, Ph. Nicolaï, and M. Busquet [Phys. Plasmas 7, 4238 (2000)] to magnetized plasmas. A complete system of nonlocal equations is derived from kinetic equations with self-consistent electric and magnetic fields. These equations are analyzed and simplified in order to be implemented into large laser fusion codes and coupled to other relevant physics. The model is applied to two laser configurations that demonstrate the main features of the model and point out the nonlocal Righi-Leduc effect in a multidimensional case.

  5. Model and experiments to optimize co-adaptation in a simplified myoelectric control system.

    PubMed

    Couraud, M; Cattaert, D; Paclet, F; Oudeyer, P Y; de Rugy, A

    2018-04-01

    To compensate for a limb lost in an amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements, and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that are developed in the field of brain-machine interfaces, and that are beginning to be used in myoelectric controls. We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotate the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, where perturbations and machine co-adaptation are both applied on muscle pulling vectors. These simulations established that a relatively low gain of machine co-adaptation that minimizes final errors generates slow and incomplete adaptation, while higher gains increase adaptation rate but also errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that cumulates the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control, and to absorb more challenging perturbations. The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and raised important considerations such as the need for a variable gain encoded locally.
The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
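
    The trade-off the simulations identified - a low machine gain adapts slowly, a high gain adapts fast but amplifies noise - can be reproduced with a toy scalar error model. The scalar formulation is an assumption of this sketch; the paper works with muscle pulling vectors and directionally tuned neurons:

```python
import random

def simulate(gain, alpha=0.1, perturbation=30.0, noise_sd=0.0, steps=200, seed=0):
    """Toy co-adaptation loop (a sketch, not the paper's model).

    The human compensates a fraction `alpha` of each directional error,
    while the machine shifts its decoder by `gain` times the same error.
    """
    rng = random.Random(seed)
    human, machine = 0.0, 0.0
    errors = []
    for _ in range(steps):
        error = perturbation - human - machine + rng.gauss(0.0, noise_sd)
        human += alpha * error     # human adaptation to directional error
        machine += gain * error    # machine co-adaptation of the decoder
        errors.append(error)
    return errors

slow = simulate(gain=0.0)   # human adapts alone
fast = simulate(gain=0.3)   # machine co-adaptation: faster error reduction
```

    With a nonzero `noise_sd`, raising `gain` further starts to amplify the noise term, which is the regime that motivates the paper's variable gain.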

  6. A Simplified Approach for Simultaneous Measurements of Wavefront Velocity and Curvature in the Heart Using Activation Times.

    PubMed

    Mazeh, Nachaat; Haines, David E; Kay, Matthew W; Roth, Bradley J

    2013-12-01

    The velocity and curvature of a wave front are important factors governing the propagation of electrical activity through cardiac tissue, particularly during heart arrhythmias of clinical importance such as fibrillation. Presently, no simple computational model exists to determine these values simultaneously. The proposed model uses the arrival times at four or five sites to determine the wave front speed (v), direction (θ), and radius of curvature (ROC) (r0). If the arrival times are measured, then v, θ, and r0 can be found from differences in arrival times and the distance between these sites. During isotropic conduction, we found good correlation between measured values of the ROC r0 and the distance from the unipolar stimulus (r = 0.9043, p < 0.0001). The conduction velocity (m/s) was correlated (r = 0.998, p < 0.0001) using our method (mean = 0.2403, SD = 0.0533) and an empirical method (mean = 0.2352, SD = 0.0560). The model was applied to a condition of anisotropy and a complex case of reentry with a high voltage extra stimulus. Again, results show good correlation between our simplified approach and established methods for multiple wavefront morphologies. In conclusion, insignificant measurement errors were observed between this simplified approach and an approach that was more computationally demanding. Accuracy was maintained when ε (ε = b/r0, the ratio of recording-site spacing to the wavefront's ROC) was between 0.001 and 0.5. The present simplified model can be applied to a variety of clinical conditions to predict behavior of planar, elliptical, and reentrant wave fronts. It may be used to study the genesis and propagation of rotors in human arrhythmias and could lead to rotor mapping using low density endocardial recording electrodes.
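
    The arrival-time idea can be illustrated in the limiting planar case: with three non-collinear sites, the differences in activation times determine a slowness vector, whose magnitude and angle give speed and direction. This is a simplification of the four/five-site method above, which additionally recovers curvature:

```python
import math

def plane_wave_fit(sites, times):
    """Estimate wavefront speed and direction from activation times at
    three sites, assuming a locally planar wavefront (a simplified
    variant of the four/five-site method, which also yields the ROC)."""
    (x0, y0), (x1, y1), (x2, y2) = sites
    t0, t1, t2 = times
    # Solve [[dx1, dy1], [dx2, dy2]] @ slowness = [dt1, dt2]
    a, b, c, d = x1 - x0, y1 - y0, x2 - x0, y2 - y0
    det = a * d - b * c
    sx = ((t1 - t0) * d - b * (t2 - t0)) / det
    sy = (a * (t2 - t0) - (t1 - t0) * c) / det
    speed = 1.0 / math.hypot(sx, sy)  # speed is the inverse slowness
    theta = math.atan2(sy, sx)        # propagation direction (radians)
    return speed, theta

# Wave moving along +x at 0.25 m/s over a 4 mm electrode spacing:
sites = [(0.0, 0.0), (0.004, 0.0), (0.0, 0.004)]
times = [0.0, 0.016, 0.0]
speed, theta = plane_wave_fit(sites, times)
```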

  7. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Mattick, Stephen; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2008-01-01

    A) Advanced Gas/Liquid Framework with Real Fluids Property Routines: I. A multi-fluid formulation has been developed in the preconditioned CRUNCH CFD® code in which a mixture of liquid and gases can be specified: a) Various options for equation-of-state specification are available (from simplified ideal fluid mixtures to real-fluid EOS such as the SRK or BWR models). b) Vaporization of liquids driven by pressure relative to vapor pressure, and combustion of the vapors, is allowed. c) Extensive validation has been undertaken. II. Currently working on developing primary break-up models and surface tension effects for more rigorous phase-change modeling and interfacial dynamics. B) Framework Applied to Run-time Tanks at Ground Test Facilities. C) Framework Used for J-2 Upper Stage Tank Modeling: 1) NASA MSFC tank pressurization: a) Hydrogen and oxygen tank pre-press, repress, and draining being modeled at NASA MSFC. 2) NASA AMES tank safety effort: a) Liquid hydrogen and oxygen are separated by a baffle in the J-2 tank. We are modeling the pressure rise and possible combustion if a hole develops in the baffle and liquid hydrogen leaks into the oxygen tank. Tank pressure rise rates are simulated and the risk of combustion evaluated.

  8. Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods

    NASA Astrophysics Data System (ADS)

    Lai, Bo-Lun; Sheu, Rong-Jiun

    2017-09-01

    Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
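
    A point-source line-of-sight model of the kind used for the simplified estimates combines inverse-square geometric spreading with exponential attenuation along the sight line. The sketch below neglects buildup and uses an invented attenuation coefficient, so it is indicative only:

```python
import math

def dose_rate(source_strength, distance_m, mu_per_cm=0.0, shield_cm=0.0):
    """Point-source line-of-sight estimate: inverse-square spreading
    times exponential attenuation in the shield. Buildup is neglected,
    so the transmitted dose is underestimated."""
    geometric = source_strength / (4.0 * math.pi * distance_m ** 2)
    return geometric * math.exp(-mu_per_cm * shield_cm)

# Hypothetical numbers: unit source, 5 m standoff, 50 cm shield, mu = 0.08 /cm
unshielded = dose_rate(1.0, 5.0)
shielded = dose_rate(1.0, 5.0, mu_per_cm=0.08, shield_cm=50.0)
```

    Models of this form are cheap enough to evaluate at every wall, which is why they are useful for scoping before committing to full Monte Carlo runs.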

  9. Evaluation of a Novel Laser-assisted Coronary Anastomotic Connector - the Trinity Clip - in a Porcine Off-pump Bypass Model

    PubMed Central

    Stecher, David; Bronkers, Glenn; Noest, Jappe O.T.; Tulleken, Cornelis A.F.; Hoefer, Imo E.; van Herwerden, Lex A.; Pasterkamp, Gerard; Buijsrogge, Marc P.

    2014-01-01

    To simplify and facilitate beating heart (i.e., off-pump), minimally invasive coronary artery bypass surgery, a new coronary anastomotic connector, the Trinity Clip, is developed based on the excimer laser-assisted nonocclusive anastomosis technique. The Trinity Clip connector enables simplified, sutureless, and nonocclusive connection of the graft to the coronary artery, and an excimer laser catheter laser-punches the opening of the anastomosis. Consequently, owing to the complete nonocclusive anastomosis construction, coronary conditioning (i.e., occluding or shunting) is not necessary, in contrast to the conventional anastomotic technique, hence simplifying the off-pump bypass procedure. Prior to clinical application in coronary artery bypass grafting, the safety and quality of this novel connector will be evaluated in a long-term experimental porcine off-pump coronary artery bypass (OPCAB) study. In this paper, we describe how to evaluate the coronary anastomosis in the porcine OPCAB model using various techniques to assess its quality. Representative results are summarized and visually demonstrated. PMID:25490000

  10. A simplified method for assessing particle deposition rate in aircraft cabins

    NASA Astrophysics Data System (ADS)

    You, Ruoyu; Zhao, Bin

    2013-03-01

    Particle deposition in aircraft cabins is important for the exposure of passengers to particulate matter, as well as to airborne infectious diseases. In this study, a simplified method is proposed for initial and quick assessment of the particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity based on the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully-occupied, half-occupied, 1/4-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that the occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, the simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
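
    The final step of the procedure - combining per-surface deposition velocities into a single cabin deposition rate - takes the general form used in indoor-aerosol models: the area-weighted sum of deposition velocities divided by the cabin volume. The velocities and areas below are illustrative, not the paper's MD-82 values:

```python
def deposition_rate(surfaces, cabin_volume_m3):
    """Particle deposition rate (per hour) as the area-weighted sum of
    per-surface deposition velocities divided by the enclosure volume.
    In the paper, the velocities come from an empirical
    friction-velocity correlation; here they are illustrative."""
    return sum(v_d * area for v_d, area in surfaces) / cabin_volume_m3

surfaces = [            # (deposition velocity m/h, area m2) -- illustrative
    (0.10, 20.0),       # floor (upward-facing: gravity aids deposition)
    (0.02, 20.0),       # ceiling
    (0.01, 30.0),       # walls and seats
]
rate = deposition_rate(surfaces, cabin_volume_m3=60.0)
```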

  11. Long-term safety assessment of trench-type surface repository at Chernobyl, Ukraine - computer model and comparison with results from simplified models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-07-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low- and intermediate-level waste (LILW) is still being operated, but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in them. Comparison of the results shows that - depending on the boundary conditions - simplifications like modeling the multi-trench repository as one generic trench might have very limited influence on the overall results compared to the general uncertainties associated with such long-term calculations. In addition to their value for verification of more complex models, which is important to increase confidence in the overall results, such simplified models also offer the possibility to carry out time-consuming calculations, like probabilistic calculations or detailed sensitivity analyses, in an economic manner. (authors)

  12. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
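
    The reduction step can be illustrated with the smallest possible case: a two-step chain A → B → C in which the intermediate B is consumed much faster than it forms, so a quasi-steady-state assumption collapses the mechanism to a single step at the rate of the slow reaction. This toy example (explicit Euler, invented rate constants) stands in for the much larger fuel-oxidation mechanisms discussed above:

```python
def full_mechanism(a0, k1, k2, dt, steps):
    """Explicit-Euler integration of A -> B -> C (detailed mechanism)."""
    a, b, c = a0, 0.0, 0.0
    for _ in range(steps):
        r1, r2 = k1 * a, k2 * b
        a, b, c = a - r1 * dt, b + (r1 - r2) * dt, c + r2 * dt
    return a, b, c

def reduced_mechanism(a0, k1, dt, steps):
    """QSSA-reduced model: when k2 >> k1 the intermediate B is consumed
    as fast as it forms, so A -> C proceeds at the rate of step 1."""
    a, c = a0, 0.0
    for _ in range(steps):
        r = k1 * a
        a, c = a - r * dt, c + r * dt
    return a, c

a_full, b_full, c_full = full_mechanism(1.0, k1=1.0, k2=100.0, dt=0.001, steps=2000)
a_red, c_red = reduced_mechanism(1.0, k1=1.0, dt=0.001, steps=2000)
```

    The reduced model integrates one species fewer per step and tolerates a larger time step (no stiff k2 term), which is the computational saving a CFD code needs at scale.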

  13. Computer vision-based method for classification of wheat grains using artificial neural network.

    PubMed

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. Seven input parameters that are most effective on the classifying results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN model are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
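
    As a stand-in for the MLP pipeline, the sketch below trains the simplest possible linear classifier on two invented visual features (area, elongation). The real model used 21 features, hidden layers, and 200 grain images; everything here, including the toy data, is illustrative:

```python
def train_perceptron(samples, labels, lr=0.1, epochs=50):
    """Minimal linear classifier standing in for the paper's MLP.
    Updates weights only on misclassified samples (perceptron rule)."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

def predict(w, bias, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

# Toy "durum vs bread" data: (grain area, elongation), label 1 = durum
samples = [(2.0, 3.0), (2.5, 3.2), (1.0, 1.0), (1.2, 0.8)]
labels = [1, 1, 0, 0]
w, bias = train_perceptron(samples, labels)
acc = sum(predict(w, bias, x) == y for x, y in zip(samples, labels)) / len(samples)
```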

  15. Uncertainty in surface water flood risk modelling

    NASA Astrophysics Data System (ADS)

    Butler, J. B.; Martin, D. N.; Roberts, E.; Domuah, R.

    2009-04-01

    Two thirds of the flooding that occurred in the UK during summer 2007 was as a result of surface water (otherwise known as ‘pluvial') rather than river or coastal flooding. In response, the Environment Agency and Interim Pitt Reviews have highlighted the need for surface water risk mapping and warning tools to identify, and prepare for, flooding induced by heavy rainfall events. This need is compounded by the likely increase in rainfall intensities due to climate change. The Association of British Insurers has called for the Environment Agency to commission nationwide flood risk maps showing the relative risk of flooding from all sources. At the wider European scale, the recently-published EC Directive on the assessment and management of flood risks will require Member States to evaluate, map and model flood risk from a variety of sources. As such, there is now a clear and immediate requirement for the development of techniques for assessing and managing surface water flood risk across large areas. This paper describes an approach for integrating rainfall, drainage network and high-resolution topographic data using Flowroute™, a high-resolution flood mapping and modelling platform, to produce deterministic surface water flood risk maps. Information is provided from UK case studies to enable assessment and validation of modelled results using historical flood information and insurance claims data. Flowroute was co-developed with flood scientists at Cambridge University specifically to simulate river dynamics and floodplain inundation in complex, congested urban areas in a highly computationally efficient manner. It utilises high-resolution topographic information to route flows around individual buildings so as to enable the prediction of flood depths, extents, durations and velocities. As such, the model forms an ideal platform for the development of surface water flood risk modelling and mapping capabilities. 
The 2-dimensional component of Flowroute employs uniform flow formulae (Manning's Equation) to direct flow over the model domain, sourcing water from the channel or sea so as to provide a detailed representation of river and coastal flood risk. The initial development step was to include spatially-distributed rainfall as a new source term within the model domain. This required optimisation to improve computational efficiency, given the ubiquity of ‘wet' cells early on in the simulation. Collaboration with UK water companies has provided detailed drainage information, and from this a simplified representation of the drainage system has been included in the model via the inclusion of sinks and sources of water from the drainage network. This approach has clear advantages relative to a fully coupled method both in terms of reduced input data requirements and computational overhead. Further, given the difficulties associated with obtaining drainage information over large areas, tests were conducted to evaluate uncertainties associated with excluding drainage information and the impact that this has upon flood model predictions. This information can be used, for example, to inform insurance underwriting strategies and loss estimation as well as for emergency response and planning purposes. The Flowroute surface-water flood risk platform enables efficient mapping of areas sensitive to flooding from high-intensity rainfall events due to topography and drainage infrastructure. As such, the technology has widespread potential for use as a risk mapping tool by the UK Environment Agency, European Member States, water authorities, local governments and the insurance industry. Keywords: Surface water flooding, Model Uncertainty, Insurance Underwriting, Flood inundation modelling, Risk mapping.
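
    The 2-D component's uniform-flow routing rests on Manning's equation, which in SI units reads v = (1/n) R^(2/3) S^(1/2). A direct transcription follows; the roughness, hydraulic radius, and slope in the example are arbitrary illustrative values, not ones from the case studies:

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Manning's equation (SI form): v = (1/n) * R**(2/3) * S**(1/2),
    giving mean flow velocity in m/s for roughness n, hydraulic radius
    R (m), and energy slope S (dimensionless)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative values: moderately rough channel, 1 m hydraulic radius, mild slope
v = manning_velocity(n=0.03, hydraulic_radius_m=1.0, slope=0.0001)
```

    Evaluated cell by cell over a high-resolution terrain grid, a relation this cheap is what keeps large-area surface water simulations computationally tractable.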

  16. Percentage body fat ranges associated with metabolic syndrome risk: results based on the third National Health and Nutrition Examination Survey (1988-1994).

    PubMed

    Zhu, Shankuan; Wang, ZiMian; Shen, Wei; Heymsfield, Steven B; Heshka, Stanley

    2003-08-01

    Increasing attention has focused on the association between body fatness and related metabolic risk factors. The quantitative link between percentage body fat (%BF) and the risk of metabolic syndrome is unknown. The objectives were to determine the risk [odds ratios (ORs)] of metabolic syndrome based on %BF in black and white men and women in the United States and to provide corresponding ranges of %BF associated with a risk equivalent to body mass index (BMI; in kg/m²). The subjects were participants in the third National Health and Nutrition Examination Survey and were divided into those with and without the metabolic syndrome. OR equations were derived from logistic regression models for %BF and BMI, with the 25th percentile in the study population as the reference. Ranges were developed by associating %BF with the equivalent risk of metabolic syndrome based on established BMI cutoffs. Four sets (men, women, black, and white) of OR curves were generated for %BF and for BMI by using data from 8259 adults. The ORs for metabolic syndrome were lower in blacks than in whites at any given %BF or BMI. The developed cutoffs for %BF differed between men and women but showed only small race and age effects. A simplified set of sex-specific %BF ranges for the risk of metabolic syndrome was developed. The risk of metabolic syndrome can be established from measured %BF by using either the developed OR curves or %BF thresholds at traditional BMI cutoffs. This information should prove useful in both clinical and research settings.
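
    The OR curves described above come from logistic regression with the 25th percentile as the reference; for a single continuous predictor this reduces to OR = exp(β·(x − x_ref)). The coefficient and reference value below are made-up illustrations, not the fitted NHANES III estimates:

```python
import math

def odds_ratio(beta, x, x_ref):
    """Odds ratio for a continuous predictor relative to a reference
    value, from a fitted logistic model: OR = exp(beta * (x - x_ref)).
    The coefficient used below is illustrative, not the NHANES fit."""
    return math.exp(beta * (x - x_ref))

# Hypothetical beta = 0.08 per %BF unit, reference at the 25th percentile (say 20 %BF)
or_at_30 = odds_ratio(0.08, 30.0, 20.0)
```

    By construction the OR equals 1 at the reference percentile, so each curve expresses risk relative to a lean comparison group rather than in absolute terms.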

  17. Influence of backup bearings and support structure dynamics on the behavior of rotors with active supports

    NASA Technical Reports Server (NTRS)

    Flowers, George T.

    1994-01-01

    Progress over the past year includes the following: A simplified rotor model with a flexible shaft and backup bearings has been developed. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501. The magnetic bearing test rig is currently floating and dynamics/control tests are being conducted. A paper has been written that documents the work using the T-501 engine model. Work has continued with the simplified model. The finite element model is currently being modified to include the effects of foundation dynamics. A literature search for material on foil bearings has been conducted. A finite element model is being developed for a magnetic bearing in series with a foil backup bearing.

  18. Simplified process model discovery based on role-oriented genetic mining.

    PubMed

    Zhao, Weidong; Liu, Xi; Dai, Weihui

    2014-01-01

    Process mining is automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine the simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process, in comparison with related studies.
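
    A cohesion/coupling fitness of the kind described can be sketched from work handovers: the fraction of handovers that cross role boundaries measures coupling, and its complement measures cohesion. The exact formula and the toy log below are assumptions of this sketch, not the paper's metric:

```python
def role_complexity(handovers, role_of):
    """Toy role-complexity metric (an assumption, not the paper's exact
    formula): coupling = fraction of handovers that cross roles; lower
    coupling (higher within-role cohesion) means a simpler model."""
    cross = sum(1 for a, b in handovers if role_of[a] != role_of[b])
    coupling = cross / len(handovers)
    cohesion = 1.0 - coupling
    return cohesion, coupling

# Hypothetical event-log handovers and activity-to-role assignment
handovers = [("review", "approve"), ("approve", "ship"), ("ship", "bill")]
role_of = {"review": "clerk", "approve": "manager", "ship": "ops", "bill": "ops"}
cohesion, coupling = role_complexity(handovers, role_of)
```

    Used as a genetic-programming fitness term, a metric like this steers the search toward candidate models whose activities cluster within roles.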

  19. National Policy Agenda to Reduce the Burden of Student Debt

    ERIC Educational Resources Information Center

    Institute for College Access & Success, 2014

    2014-01-01

    Since 2005, "The Institute for College Access & Success" (TICAS) and its Project on Student Debt have worked to reduce the risks and burdens of student debt. TICAS helped create and improve income-based repayment plans to keep federal loan payments manageable; strengthen Pell Grants, which reduce the need to borrow; and simplify the…

  20. Constructing and Modifying Sequence Statistics for relevent Using informR in 𝖱

    PubMed Central

    Marcum, Christopher Steven; Butts, Carter T.

    2015-01-01

    The informR package greatly simplifies the analysis of complex event histories in 𝖱 by providing user-friendly tools to build sufficient statistics for the relevent package. Historically, building sufficient statistics to model event sequences (of the form a→b) using the egocentric generalization of Butts’ (2008) relational event framework for modeling social action has been cumbersome. The informR package simplifies the construction of the complex list of arrays needed by the rem() model fitting function for a variety of cases involving egocentric event data, multiple event types, and/or support constraints. This paper introduces these tools using examples from real data extracted from the American Time Use Survey. PMID:26185488

  1. Scale Interactions in the Tropics from a Simple Multi-Cloud Model

    NASA Astrophysics Data System (ADS)

    Niu, X.; Biello, J. A.

    2017-12-01

    Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment in the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics within a simplified framework for scale interactions, while using a simplified description of the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda [1] derived a multi-scale model of moist tropical dynamics (IMMD [1]), which separates three regimes: the planetary-scale climatology, the synoptic-scale waves, and the planetary-scale anomalies. The scales and strength of the observed MJO place it in the regime of planetary-scale anomalies, which are themselves forced by non-linear upscale fluxes from the synoptic-scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda [2] to describe the three basic cloud types (congestus, deep, and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary-scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis. Journal of the Atmospheric Sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.

  2. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and for analysis of error distributions.
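A minimal sketch of the kind of point-wise error statistics such a model-data comparison relies on. Bias, RMSE, and a Nash-Sutcliffe-style model efficiency are standard formulas, not necessarily the paper's exact metric set:

```python
# Standard model-versus-observation error statistics: bias, root-mean-square
# error, and model efficiency (1 = perfect, 0 = no better than the observed
# mean). Toy data, for illustration only.
import math

def error_stats(model, obs):
    n = len(obs)
    mean_obs = sum(obs) / n
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    ss_res = sum((m - o) ** 2 for m, o in zip(model, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    me = 1.0 - ss_res / ss_tot          # Nash-Sutcliffe-style efficiency
    return bias, rmse, me

obs = [1.0, 2.0, 3.0, 4.0]
model = [1.1, 1.9, 3.2, 3.8]
bias, rmse, me = error_stats(model, obs)
print(f"bias={bias:.3f} rmse={rmse:.3f} efficiency={me:.3f}")
```

Computed per SOM cluster rather than per grid cell, statistics like these make the comparison tractable by attaching an error profile to each characteristic region.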

  3. Multi-body dynamics modelling of seated human body under exposure to whole-body vibration.

    PubMed

    Yoshimura, Takuya; Nakai, Kazuma; Tamaoki, Gen

    2005-07-01

    In vehicle systems, occupational drivers may be exposed to vibration for long periods. This can cause illness of the spine such as chronic lumbago or low back pain. It is therefore necessary to evaluate the influence of vibration on the spinal column and to develop appropriate guidelines or countermeasures. ISO 2631-1 and ISO 2631-5 already present assessments of vibration effects on humans from the viewpoint of adverse health effects. However, further research is needed to understand the effect of vibration on the human body, to examine the validity of these standards and to prepare for their future revision. This paper presents detailed measurements of the human response to vibration and the modelling of the seated human body for the assessment of vibration risk. The vibration transmissibilities from the seat surface to the spinal column and to the head are measured during exposure to vertical excitation. The modal parameters of the seated subject are extracted in order to identify the dominant natural modes. For the evaluation of adverse health effects, a multi-body model of the spinal column is introduced. A simplified model having 10 DOFs is constructed so that the transmissibilities of the model fit those of the experiment. A transient response analysis is illustrated for a half-sine input. The relative displacements of the vertebrae are evaluated, which can form a basis for the assessment of vibration risk. It is suggested that the multi-body dynamic model be used to evaluate the effect of vibration on the spinal column for seated subjects.
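The transmissibility the model is fitted to can be illustrated with the textbook single-DOF base-excitation formula; this is the generic building block such multi-DOF seated-body models chain together, not the paper's 10-DOF model itself:

```python
# Displacement transmissibility |X/Y| of a mass on a spring-damper whose base
# is excited harmonically: T(r) = sqrt((1 + (2*zeta*r)^2) /
# ((1 - r^2)^2 + (2*zeta*r)^2)), with r = f/fn. Standard textbook result.
import math

def transmissibility(f, fn, zeta):
    """|X/Y| at excitation frequency f, natural frequency fn, damping zeta."""
    r = f / fn                                   # frequency ratio
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

# At resonance (r = 1) this reduces to sqrt(1 + 4*zeta**2) / (2*zeta).
print(transmissibility(5.0, 5.0, 0.3))
```

Fitting a multi-DOF model then amounts to choosing stiffness and damping values so that the model's seat-to-vertebra and seat-to-head transmissibility curves match the measured ones across frequency.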

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, Timothy; Dolan, Matthew J.; El Hedri, Sonia

    Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the $n$-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. In conclusion, this work additionally serves to establish the utility of $n$-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy.

  5. Delta-9-Tetrahydrocannabinol-Induced Dopamine Release as a Function of Psychosis Risk: 18F-Fallypride Positron Emission Tomography Study

    PubMed Central

    Kuepper, Rebecca; Ceccarini, Jenny; Lataster, Johan; van Os, Jim; van Kroonenburgh, Marinus; van Gerven, Joop M. A.; Marcelis, Machteld; Van Laere, Koen; Henquet, Cécile

    2013-01-01

    Cannabis use is associated with psychosis, particularly in those with expression of, or vulnerability for, psychotic illness. The biological underpinnings of these differential associations, however, remain largely unknown. We used positron emission tomography and 18F-fallypride to test the hypothesis that genetic risk for psychosis is expressed by differential induction of dopamine release by Δ9-THC (delta-9-tetrahydrocannabinol, the main psychoactive ingredient of cannabis). In a single dynamic PET scanning session, striatal dopamine release after pulmonary administration of Δ9-THC was measured in 9 healthy cannabis users (average risk of psychotic disorder), 8 patients with psychotic disorder (high risk) and 7 unrelated first-degree relatives (intermediate risk). PET data were analyzed applying the linear extension of the simplified reference region model (LSRRM), which accounts for time-dependent changes in 18F-fallypride displacement. Voxel-based statistical maps, representing specific D2/3 binding changes, were computed to localize areas with increased ligand displacement after Δ9-THC administration, reflecting dopamine release. While Δ9-THC was not associated with dopamine release in the control group, significant ligand displacement induced by Δ9-THC in striatal subregions, indicative of dopamine release, was detected in both patients and relatives. This was most pronounced in the caudate nucleus. This is the first study to demonstrate differential sensitivity to Δ9-THC in terms of increased endogenous dopamine release in individuals at risk for psychosis. PMID:23936196

  6. Bioaerosol Dispersion in Relation with Wastewater Reuse for Crop Irrigation. (Experiments to understand emission processes with enteric virus and risks modeling).

    NASA Astrophysics Data System (ADS)

    Courault, D.; Girardin, G.; Capowiez, L.; Albert, I.; Krawczyk, C.; Ball, C.; Salemkour, A.; Bon, F.; Perelle, S.; Fraisse, A.; Renault, P.; Amato, P.

    2014-12-01

    Bio-aerosols consist of microorganisms or biological particles that become airborne depending on various environmental factors. Recycling of wastewater (WW) for irrigation can cope with issues of water availability, but it can also threaten human health if pathogens present in the WW are aerosolized during sprinkler irrigation or wind events. Among the variety of micro-organisms found in WW, enteric viruses can reach significant amounts, because most WW treatments are not completely efficient. These viruses are particularly resistant in the environment and responsible for numerous digestive diseases (gastroenteritis, hepatitis…). Small quantities are enough to make people sick (on the order of 10² pfu). Several knowledge gaps remain in estimating the risks of human exposure and the transfer of viruses from irrigation to the respiratory tract. A research program funded by the French government (INSU), gathering multidisciplinary teams, aims at better understanding virus fate in air and the health risks from WW reuse. Experiments were conducted under controlled conditions in order to prioritize the main factors affecting virus aerosolization. Irrigation with water loaded with a safe surrogate of hepatitis A virus (Murine Mengovirus) was applied on small plots covered by channels in which the wind speed varied. Various situations were investigated (wet/dry surfaces, strong/mild winds, clean/waste water). Air samples were collected above the plots using impingers and filters for several days after irrigation. Viruses were quantified by RT-qPCR. The results showed that impingers were more efficient than filters in recovering airborne viruses. Among the environmental factors, wind speed was the main factor explaining virus concentration in the air after irrigation. A quantitative microbial risk assessment approach was chosen to assess the health effects on the population. 
The main modeling steps will be presented, including a simplified dispersion model coupled with a dose-response assessment to characterize the risk.

  7. Assessment selection in human-automation interaction studies: The Failure-GAM2E and review of assessment methods for highly automated driving.

    PubMed

    Grane, Camilla

    2018-01-01

    Highly automated driving will change drivers' behavioural patterns. Traditional methods used for assessing manual driving will only be applicable to the parts of human-automation interaction where the driver intervenes, such as hand-over and take-over situations. Therefore, driver behaviour assessment will need to adapt to the new driving scenarios. This paper aims at simplifying the process of selecting appropriate assessment methods. Thirty-five papers were reviewed to examine potential and relevant methods. The review showed that many studies still rely on traditional driving assessment methods. A new method, the Failure-GAM2E model, intended to aid assessment selection when planning a study, is proposed and exemplified in the paper. Failure-GAM2E includes a systematic step-by-step procedure defining the situation, failures (Failure), goals (G), actions (A), subjective methods (M), objective methods (M) and equipment (E). The use of Failure-GAM2E in a study example resulted in a well-reasoned assessment plan, a new way of measuring trust through feet movements and a proposed Optimal Risk Management Model. Failure-GAM2E and the Optimal Risk Management Model are believed to support the planning process for research studies in the field of human-automation interaction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Numerical simulation of fluid flow through simplified blade cascade with prescribed harmonic motion using discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Vimmr, Jan; Bublík, Ondřej; Prausová, Helena; Hála, Jindřich; Pešek, Luděk

    2018-06-01

    This paper deals with a numerical simulation of compressible viscous fluid flow around three flat plates with prescribed harmonic motion. This arrangement represents a simplified blade cascade with forward wave motion. The aim of this simulation is to determine the aerodynamic forces acting on the flat plates. The mathematical model describing this problem is formed by the Favre-averaged system of Navier-Stokes equations in arbitrary Lagrangian-Eulerian (ALE) formulation, completed by the one-equation Spalart-Allmaras turbulence model. The simulation was performed using in-house CFD software based on the discontinuous Galerkin method, which offers a high order of accuracy.

  9. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    PubMed Central

    Kim, Sung-Min

    2018-01-01

    Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
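A lumped-parameter rebound calculation of the kind SIMPL performs can be sketched for a single flooded mine pond; the geometry, recharge rate, and Euler time-stepping below are invented for illustration and are not the program's actual equations:

```python
# Hypothetical single-pond rebound sketch: a constant recharge fills the mine
# void, the head rises through the effective void area, and overflow at the
# decant elevation caps the level. All numbers are invented.

def rebound_levels(recharge, void_area, z0, z_decant, dt, steps):
    """Euler integration of head rise (m) in one mine pond."""
    z = z0
    levels = [z]
    for _ in range(steps):
        z = min(z + recharge / void_area * dt, z_decant)  # decant caps head
        levels.append(z)
    return levels

# 100 m3/day into a pond with 5000 m2 effective void area: 0.02 m/day rise
# from 10 m elevation toward a decant point at 12 m.
levels = rebound_levels(recharge=100.0, void_area=5000.0,
                        z0=10.0, z_decant=12.0, dt=1.0, steps=150)
print(levels[0], levels[50], levels[-1])
```

A multi-pond model like SIMPL's adds inter-pond connections, so that once one pond reaches its decant level its overflow becomes inflow to the next pond downstream.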

  10. A statistical approach to evaluate flood risk at the regional level: an application to Italy

    NASA Astrophysics Data System (ADS)

    Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea

    2016-04-01

    Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basins Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the related impact. Model performance is evaluated by comparing the predicted flood prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess flood economic impacts. 
Furthermore, under the assumption of an appropriate statistical characterization of flood risk, the proposed procedure could be applied straightforwardly outside the national borders, particularly in areas with similar geo-environmental settings.

  11. Simplified 4-Step Transportation Planning Process For Any Sized Area

    DOT National Transportation Integrated Search

    1999-01-01

    This paper presents a streamlined version of the Washington, D.C. region's 4-step travel demand forecasting model. The purpose for streamlining the model was to have a model that could: replicate the regional model, and be run in a new s...

  12. APPLICATION OF EPANET TO UNDERSTAND LEAD ...

    EPA Pesticide Factsheets

    This presentation describes the factors affecting lead concentration in tap water using an EPANET hydraulic model for a simplified home model, a realistic home model, and EPA's experimental home plumbing system.

  13. Performance of the Finnish Diabetes Risk Score and a Simplified Finnish Diabetes Risk Score in a Community-Based, Cross-Sectional Programme for Screening of Undiagnosed Type 2 Diabetes Mellitus and Dysglycaemia in Madrid, Spain: The SPREDIA-2 Study.

    PubMed

    Salinero-Fort, M A; Burgos-Lunar, C; Lahoz, C; Mostaza, J M; Abánades-Herranz, J C; Laguna-Cuesta, F; Estirado-de Cabo, E; García-Iglesias, F; González-Alegre, T; Fernández-Puntero, B; Montesano-Sánchez, L; Vicent-López, D; Cornejo-Del Río, V; Fernández-García, P J; Sánchez-Arroyo, V; Sabín-Rodríguez, C; López-López, S; Patrón-Barandio, P; Gómez-Campelo, P

    2016-01-01

    To evaluate the performance of the Finnish Diabetes Risk Score (FINDRISC) and a simplified FINDRISC score (MADRISC) in screening for undiagnosed type 2 diabetes mellitus (UT2DM) and dysglycaemia. A population-based, cross-sectional, descriptive study was carried out to screen for UT2DM among participants aged 45-74 years living in two districts in the north of metropolitan Madrid (Spain). The FINDRISC and MADRISC scores were evaluated using the area under the receiver operating characteristic curve method (ROC-AUC). Four different gold standards were used for UT2DM and any dysglycaemia, as follows: fasting plasma glucose (FPG), oral glucose tolerance test (OGTT), HbA1c, and OGTT or HbA1c. Dysglycaemia and UT2DM were defined according to American Diabetes Association criteria. The study population comprised 1,426 participants (832 females and 594 males) with a mean age of 62 years (SD = 6.1). When HbA1c or OGTT criteria were used, the prevalence of UT2DM was 7.4% (10.4% in men and 5.2% in women; p<0.01) and the FINDRISC ROC-AUC for UT2DM was 0.72 (95% CI, 0.69-0.74). The optimal cut-off point was ≥13 (sensitivity = 63.8%, specificity = 65.1%). The ROC-AUC of MADRISC was 0.76 (95% CI, 0.72-0.81) with ≥13 as the optimal cut-off point (sensitivity = 84.8%, specificity = 54.6%). A FINDRISC score ≥12 for detecting any dysglycaemia offered the best cut-off point when HbA1c alone or OGTT and HbA1c were the criteria used. FINDRISC proved to be a useful instrument in screening for dysglycaemia and UT2DM. In the screening of UT2DM, the simplified MADRISC performed as well as FINDRISC.
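The ROC-AUC and optimal cut-off analysis described above can be sketched on toy data. The rank-based AUC and Youden's J (sensitivity + specificity − 1) are standard constructions; the scores and labels below are invented, not the study's:

```python
# ROC-AUC via the rank (Mann-Whitney) formulation and the cut-off maximising
# Youden's J for a "score >= c is positive" rule. Toy data only.

def roc_auc(scores, labels):
    """Probability a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_cutoff(scores, labels):
    """Cut-off c (score >= c positive) maximising sensitivity + specificity - 1."""
    best = (None, -1.0)
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y and s >= c)
        fn = sum(1 for s, y in zip(scores, labels) if y and s < c)
        tn = sum(1 for s, y in zip(scores, labels) if not y and s < c)
        fp = sum(1 for s, y in zip(scores, labels) if not y and s >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best[1]:
            best = (c, j)
    return best

scores = [9, 11, 13, 15, 8, 10, 13, 17]   # invented risk scores
labels = [0, 0, 0, 1, 0, 1, 1, 1]         # invented diagnosis labels
print(roc_auc(scores, labels), best_cutoff(scores, labels))
```

With real screening data the same procedure yields figures directly comparable to the abstract's reported AUCs and ≥13 cut-offs.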

  14. Interactions between lower urinary tract symptoms and cardiovascular risk factors determine distinct patterns of erectile dysfunction: a latent class analysis.

    PubMed

    Barbosa, João A B A; Muracca, Eduardo; Nakano, Élcio; Assalin, Adriana R; Cordeiro, Paulo; Paranhos, Mario; Cury, José; Srougi, Miguel; Antunes, Alberto A

    2013-12-01

    An epidemiological association between lower urinary tract symptoms and erectile dysfunction is well established. However, interactions among multiple risk factors and the role of each in pathological mechanisms are not fully elucidated. We enrolled 898 men undergoing prostate cancer screening for evaluation with the International Prostate Symptom Score (I-PSS) and the simplified International Index of Erectile Function-5 (IIEF-5) questionnaires. Age, race, hypertension, diabetes, dyslipidemia, metabolic syndrome, cardiovascular disease, serum hormones and anthropometric parameters were also evaluated. Risk factors for erectile dysfunction were identified by logistic regression. The 333 men with at least mild to moderate erectile dysfunction (IIEF-5 score 16 or less) were included in a latent class model to identify relationships across erectile dysfunction risk factors. Age, hypertension, diabetes, lower urinary tract symptoms and cardiovascular events were independent predictors of erectile dysfunction (p<0.05). We identified 3 latent classes of patients with erectile dysfunction (R2 entropy=0.82). Latent class 1 comprised younger men at low cardiovascular risk with a moderate/high prevalence of lower urinary tract symptoms. Latent class 2 comprised the oldest patients, at moderate cardiovascular risk with an increased prevalence of lower urinary tract symptoms. Latent class 3 comprised men of intermediate age with the highest prevalence of cardiovascular risk factors and lower urinary tract symptoms. Erectile dysfunction severity and lower urinary tract symptoms increased from latent class 1 to 3. Risk factor interactions determined different severities of lower urinary tract symptoms and erectile dysfunction. The effect of lower urinary tract symptoms and cardiovascular risk outweighed that of age. 
While in the youngest patients lower urinary tract symptoms acted as a single risk factor for erectile dysfunction, the contribution of vascular disease resulted in significantly more severe dysfunction. Applying a risk factor interaction model to prospective trials could reveal distinct classes of drug responses and help define optimal treatment strategies for specific groups. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  15. Origami-based mechanical metamaterials with tunable frequency band structures (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yasuda, Hiromi; Pratt, Riley; Yang, Jinkyu

    2017-04-01

    We investigate wave dynamics in origami-based mechanical metamaterials composed of bellows-like origami structures, specifically the Tachi-Miura Polyhedron (TMP). One of the unique features of the TMP is that its structural deformations take place only along the crease lines, therefore the structure can be made of rigid plates and hinges. By utilizing this feature, we introduce linear torsional springs to model the crease lines and derive the force and displacement relationship of the TMP structure along the longitudinal direction. Our analysis shows strain softening/hardening behaviors in compression/tensile regions respectively, and the force-displacement curve can be manipulated by altering the initial configuration of the TMP (e.g., the initial folding angle). We also fabricate physical prototypes and measure the force-displacement behavior to verify our analytical model. Based on this static analysis on the TMP, we simplify the TMP structure into a linkage model, preserving the tunable strain softening/hardening behaviors. Dynamic analysis is also conducted numerically to analyze the frequency response of the simplified TMP unit cell under harmonic excitations. The simplified TMP exhibits a transition between linear and nonlinear behaviors, which depends on the amplitude of the excitation and the initial configuration. In addition, we design a 1D system composed of simplified TMP unit cells and analyze the relationship between frequency and wave number. If two different configurations of the unit cell (e.g., different initial folding angles) are connected in an alternating arrangement, the system develops frequency bandgaps. These unique static/dynamic behaviors can be exploited to design engineering devices which can handle vibrations and impact in an efficient manner.

  16. Oscillating water column structural model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copeland, Guild; Bull, Diana L; Jepsen, Richard Alan

    2014-09-01

    An oscillating water column (OWC) wave energy converter is a structure with an opening to the ocean below the free surface, i.e. a structure with a moonpool. Two structural models for a non-axisymmetric terminator design OWC, the Backward Bent Duct Buoy (BBDB), are discussed in this report. The results of this structural model design study are intended to inform experiments and modeling underway in support of the U.S. Department of Energy (DOE) initiated Reference Model Project (RMP). A detailed design developed by Re Vision Consulting used stiffeners and girders to stabilize the structure against the hydrostatic loads experienced by a BBDB device. Additional support plates were added to this structure to account for loads arising from the mooring line attachment points. A simplified structure was designed in a modular fashion. This simplified design allows easy alterations to the buoyancy chambers and uncomplicated analysis of resulting changes in buoyancy.

  17. A simplified computational memory model from information processing

    PubMed Central

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biological and graph theories, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. We simulate the memory phenomena and the functions of memorization and strengthening with information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from the information processing view. PMID:27876847

  18. Measuring Appetite with the Simplified Nutritional Appetite Questionnaire Identifies Hospitalised Older People at Risk of Worse Health Outcomes.

    PubMed

    Pilgrim, A L; Baylis, D; Jameson, K A; Cooper, C; Sayer, A A; Robinson, S M; Roberts, H C

    2016-01-01

    Poor appetite is commonly reported by older people but is rarely measured. The Simplified Nutritional Appetite Questionnaire (SNAQ) was validated to predict weight loss in community dwelling older adults but has been little used in hospitals. We evaluated it in older women on admission to hospital and examined associations with healthcare outcomes. Longitudinal observational with follow-up at six months. Female acute Medicine for Older People wards at a University hospital in England. 179 female inpatients. Age, weight, Body Mass Index (BMI), grip strength, SNAQ, Barthel Index Score, Mini Mental State Examination (MMSE), Geriatric Depression Scale: Short Form (GDS-SF), Malnutrition Universal Screening Tool (MUST), category of domicile and receipt of care were measured soon after admission and repeated at six month follow-up. The length of hospital stay (LOS), hospital acquired infection, readmissions and deaths by follow-up were recorded. 179 female participants mean age 87 (SD 4.7) years were recruited. 42% of participants had a low SNAQ score (<14, indicating poor appetite). A low SNAQ score was associated with an increased risk of hospital acquired infection (OR 3.53; 95% CI: 1.48, 8.41; p=0.004) and with risk of death (HR 2.29; 95% CI: 1.12, 4.68; p = 0.023) by follow-up. Poor appetite was common among the older hospitalised women studied, and was associated with higher risk of poor healthcare outcomes.
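The association reported above (OR 3.53, 95% CI 1.48-8.41) is the kind of estimate a 2x2 table yields. A sketch with invented counts and a standard Wald-type confidence interval on the log odds ratio:

```python
# Odds ratio with a Wald 95% CI from a 2x2 table: a,b = outcome yes/no among
# the exposed (e.g. low SNAQ score), c,d = outcome yes/no among the unexposed.
# The counts in the example are invented, not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(20, 55, 10, 94))
```

Note the interval is asymmetric around the point estimate because it is symmetric on the log scale, which is why reported flood-of-risk intervals such as 1.48-8.41 straddle the OR unevenly.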

  19. Measuring Appetite with the Simplified Nutritional Appetite Questionnaire Identifies Hospitalised Older People at Risk of Worse Health Outcomes

    PubMed Central

    PILGRIM, A.L.; BAYLIS, D.; JAMESON, K.A.; COOPER, C.; SAYER, A.A.; ROBINSON, S.M.; ROBERTS, H.C.

    2016-01-01

    Objectives Poor appetite is commonly reported by older people but is rarely measured. The Simplified Nutritional Appetite Questionnaire (SNAQ) was validated to predict weight loss in community dwelling older adults but has been little used in hospitals. We evaluated it in older women on admission to hospital and examined associations with healthcare outcomes. Design Longitudinal observational with follow-up at six months. Setting Female acute Medicine for Older People wards at a University hospital in England. Participants 179 female inpatients. Measurements Age, weight, Body Mass Index (BMI), grip strength, SNAQ, Barthel Index Score, Mini Mental State Examination (MMSE), Geriatric Depression Scale: Short Form (GDS-SF), Malnutrition Universal Screening Tool (MUST), category of domicile and receipt of care were measured soon after admission and repeated at six month follow-up. The length of hospital stay (LOS), hospital acquired infection, readmissions and deaths by follow-up were recorded. Results 179 female participants mean age 87 (SD 4.7) years were recruited. 42% of participants had a low SNAQ score (<14, indicating poor appetite). A low SNAQ score was associated with an increased risk of hospital acquired infection (OR 3.53; 95% CI: 1.48, 8.41; p=0.004) and with risk of death (HR 2.29; 95% CI: 1.12, 4.68; p = 0.023) by follow-up. Conclusion Poor appetite was common among the older hospitalised women studied, and was associated with higher risk of poor healthcare outcomes. PMID:26728926
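
The scoring logic behind the cut-off used in this record can be sketched as follows. This is a hypothetical illustration, not the validated SNAQ wording: it assumes four items each rated 1-5 (so totals range 4-20) and applies the <14 poor-appetite threshold quoted in the abstract.

```python
# Hypothetical sketch of SNAQ scoring: four items rated 1-5 are summed
# and a total below 14 flags poor appetite, as in the abstract above.
# Item content and response anchors are illustrative assumptions.

def snaq_total(item_scores):
    """Sum four SNAQ item scores (each 1-5); total ranges 4-20."""
    if len(item_scores) != 4 or not all(1 <= s <= 5 for s in item_scores):
        raise ValueError("SNAQ expects four item scores between 1 and 5")
    return sum(item_scores)

def poor_appetite(total, cutoff=14):
    """A total below the cut-off flags poor appetite."""
    return total < cutoff

total = snaq_total([3, 2, 4, 3])    # -> 12
print(total, poor_appetite(total))  # 12 is below 14, so flagged
```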

  20. Predictors of survival in patients with recurrent ovarian cancer undergoing secondary cytoreductive surgery based on the pooled analysis of an international collaborative cohort

    PubMed Central

    Zang, R Y; Harter, P; Chi, D S; Sehouli, J; Jiang, R; Tropé, C G; Ayhan, A; Cormio, G; Xing, Y; Wollschlaeger, K M; Braicu, E I; Rabbitt, C A; Oksefjell, H; Tian, W J; Fotopoulou, C; Pfisterer, J; du Bois, A; Berek, J S

    2011-01-01

    Background: This study aims to identify prognostic factors and to develop a risk model predicting survival in patients undergoing secondary cytoreductive surgery (SCR) for recurrent epithelial ovarian cancer. Methods: Individual data of 1100 patients with recurrent ovarian cancer and a progression-free interval of at least 6 months who underwent SCR were pooled and analysed. A simplified scoring system for each independent prognostic factor was developed according to its coefficient. Internal validation was performed to assess the discrimination of the model. Results: Complete SCR was strongly associated with improved survival, with a median survival of 57.7 months, compared with 27.0 months in those with residual disease of 0.1–1 cm and 15.6 months in those with residual disease of >1 cm (P<0.0001). Progression-free interval (⩽23.1 months vs >23.1 months, hazard ratio (HR): 1.72; score: 2), ascites at recurrence (present vs absent, HR: 1.27; score: 1), extent of recurrence (multiple vs localised disease, HR: 1.38; score: 1), and residual disease after SCR (R1 vs R0, HR: 1.90, score: 2; R2 vs R0, HR: 3.0, score: 4) entered into the risk model. Conclusion: This prognostic model may provide evidence to predict the survival benefit of secondary cytoreduction in patients with recurrent ovarian cancer. PMID:21878937
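
The additive scoring scheme reported in the abstract can be sketched directly from the published points. The point values come from the record above; the grouping of totals into risk bands is not given there, so the sketch stops at the raw score.

```python
# Sketch of the additive risk score from the abstract: points per
# independent prognostic factor are summed per patient. Point values
# are taken from the record; risk-band cut-offs are not reported there.

POINTS = {
    "pfi_le_23_1_months": 2,   # progression-free interval <=23.1 months
    "ascites_present": 1,      # ascites at recurrence
    "multiple_recurrence": 1,  # multiple vs localised disease
    "residual_R1": 2,          # residual disease R1 vs R0
    "residual_R2": 4,          # residual disease R2 vs R0
}

def risk_score(factors):
    """Sum the points of the prognostic factors present in a patient."""
    return sum(POINTS[f] for f in factors)

# Patient with a short PFI, ascites, and residual disease R1:
print(risk_score(["pfi_le_23_1_months", "ascites_present", "residual_R1"]))  # 5
```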

  1. A new market risk model for cogeneration project financing---combined heat and power development without a power purchase agreement

    NASA Astrophysics Data System (ADS)

    Lockwood, Timothy A.

    Federal legislative changes in 2006 no longer entitle cogeneration project financings by law to receive the benefit of a power purchase agreement underwritten by an investment-grade investor-owned utility. Consequently, this research explored the need for a new market-risk model for future cogeneration and combined heat and power (CHP) project financing. CHP investment offers a potentially enormous energy efficiency benefit, reducing fossil fuel use by up to 55% compared with traditional energy generation while cutting constituent air emissions, including global warming gases, by up to 50%. As a supplement to a comprehensive technical analysis, quantitative multivariate modeling was used to test the statistical validity and reliability of host facility energy demand and CHP supply ratios in predicting the economic performance of CHP project financing. The resulting analytical models, although not statistically reliable at this time, suggest a radically simplified CHP design method for future profitable CHP investments using four easily attainable energy ratios. This design method shows that financially successful CHP adoption occurs when the average system heat-to-power-ratio supply is less than or equal to the average host-convertible-energy-ratio, and when the average nominally-rated capacity is less than average host facility-load-factor demands. New CHP investments can play a role in solving the world-wide problem of accommodating growing energy demand while preserving air quality for future generations.

  2. Crash Padding Research : Volume II. Constitutive Equation Models.

    DOT National Transportation Integrated Search

    1986-08-01

    Several simplified one-dimensional constitutive equations for viscoelastic materials are reviewed and found to be inadequate for representing the impact-response performance of strongly nonlinear materials. Two multi-parameter empirical models are de...

  3. Simplified equation for Young's modulus of CNT reinforced concrete

    NASA Astrophysics Data System (ADS)

    Chandran, RameshBabu; Gifty Honeyta A, Maria

    2017-12-01

    This research investigation focuses on finite element modeling of a carbon nanotube (CNT) reinforced concrete matrix for three grades of concrete, namely M40, M60 and M120. A representative volume element (RVE) was adopted, and a one-eighth model depicting the CNT reinforced concrete matrix was simulated using the FEA software ANSYS 17.2. Adopting random orientation of CNTs with nine fibre volume fractions from 0.1% to 0.9%, the finite element simulations replicated the CNT reinforced concrete matrix. From evaluations of the model, the longitudinal and transverse Young's moduli of elasticity of the CNT reinforced concrete were obtained. Graphical plots across the fibre volume fractions and concrete grades yielded a simplified equation for estimating Young's modulus. They also showed that the concrete grade does not have a significant impact in the CNT reinforced concrete matrix.
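
For orientation, first-order micromechanics bounds show how a composite modulus scales with fibre volume fraction. These are the generic Voigt and Reuss bounds, not the fitted equation derived in the paper, and the moduli used below are illustrative placeholders.

```python
# Generic first-order estimates of composite Young's modulus vs fibre
# volume fraction: Voigt (parallel, upper bound) and Reuss (series,
# lower bound). Not the paper's fitted equation; moduli are placeholders.

def voigt(E_fibre, E_matrix, vf):
    """Rule-of-mixtures (upper-bound) longitudinal modulus."""
    return vf * E_fibre + (1.0 - vf) * E_matrix

def reuss(E_fibre, E_matrix, vf):
    """Inverse rule-of-mixtures (lower-bound) transverse modulus."""
    return 1.0 / (vf / E_fibre + (1.0 - vf) / E_matrix)

E_cnt, E_concrete = 1000.0, 35.0   # GPa, illustrative values
for vf in (0.001, 0.005, 0.009):   # CNT fractions in the 0.1-0.9% range
    print(f"vf={vf:.3f}  E_long={voigt(E_cnt, E_concrete, vf):6.2f} GPa"
          f"  E_trans={reuss(E_cnt, E_concrete, vf):6.2f} GPa")
```

At such small volume fractions both bounds stay close to the matrix modulus, which is consistent with the abstract's observation that concrete grade, not CNT content, dominates.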

  4. Modeling of Radiative Heat Transfer in an Electric Arc Furnace

    NASA Astrophysics Data System (ADS)

    Opitz, Florian; Treffinger, Peter; Wöllenstein, Jürgen

    2017-12-01

    Radiation is an important means of heat transfer inside an electric arc furnace (EAF). To gain insight into the complex processes of heat transfer inside the EAF vessel, not only radiation from the surfaces but also emission and absorption of the gas phase and the dust cloud need to be considered. Furthermore, the radiative heat exchange depends on the geometrical configuration which is continuously changing throughout the process. The present paper introduces a system model of the EAF which takes into account the radiative heat transfer between the surfaces and the participating medium. This is attained by the development of a simplified geometrical model, the use of a weighted-sum-of-gray-gases model, and a simplified consideration of dust radiation. The simulation results were compared with the data of real EAF plants available in literature.
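
The weighted-sum-of-gray-gases model mentioned above evaluates total gas emissivity as a weighted sum over a few gray gases plus a transparent gas. The functional form below is the standard one; the weights and absorption coefficients are illustrative placeholders, not a fitted coefficient set.

```python
# Minimal weighted-sum-of-gray-gases (WSGG) emissivity evaluation:
# eps = sum_i a_i * (1 - exp(-k_i * pL)), where a_i are gray-gas weights,
# k_i absorption coefficients, and pL the pressure path length.
# Coefficients below are illustrative, not a published fit.

import math

def wsgg_emissivity(pL, gases):
    """gases: list of (weight a_i, absorption coefficient k_i [1/(atm m)]);
    pL is the partial-pressure path length [atm m]."""
    return sum(a * (1.0 - math.exp(-k * pL)) for a, k in gases)

gases = [(0.4, 0.1), (0.3, 1.0), (0.2, 10.0)]  # the clear gas carries the rest
print(f"eps = {wsgg_emissivity(2.0, gases):.3f}")
```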

  5. [A simplified occupational health and safety management system designed for small enterprises. Initial validation results].

    PubMed

    Bacchi, Romana; Veneri, L; Ghini, P; Caso, Maria Alessandra; Baldassarri, Giovanna; Renzetti, F; Santarelli, R

    2009-01-01

    Occupational Health and Safety Management Systems (OHSMS) are known to be effective in improving safety at work. Unfortunately, they are often too resource-heavy for small businesses. The aim of this project was to develop and test a simplified model of OHSMS suitable for small enterprises. The model consists of 7 procedures and various operating forms and checklists that guide the enterprise in managing safety at work. The model was tested in 15 volunteer enterprises. In most of the enterprises, two audits showed increased awareness and participation of workers; better definition and formalisation of responsibilities in 8 firms; election of Union Safety Representatives in over one quarter of the enterprises; and improvement of safety equipment. The study also helped identify areas where the model could be improved by simplifying unnecessarily complex and redundant procedures.

  6. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  7. A THREE-DIMENSIONAL MODEL ASSESSMENT OF THE GLOBAL DISTRIBUTION OF HEXACHLOROBENZENE

    EPA Science Inventory

    The distributions of persistent organic pollutants (POPs) in the global environment have been studied typically with box/fugacity models with simplified treatments of atmospheric transport processes1. Such models are incapable of simulating the complex three-dimensional mechanis...

  8. Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Wollschläger, U.

    2014-01-01

    When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil in an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
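
A common way to implement the bias correction described above is state augmentation: each ensemble member carries a bias term alongside the model state, and both are updated from the same innovation. The scalar sketch below is a strongly simplified illustration of that idea (one state, one observation, perturbed observations), not the paper's lysimeter setup.

```python
# Minimal bias-aware EnKF analysis step with state augmentation: each
# ensemble member is a (state, bias) pair and the observation is modeled
# as state + bias. A scalar toy illustration, not the paper's model.

import random

def enkf_update(ens, obs, obs_var):
    """One perturbed-observation EnKF analysis step on an ensemble of
    (state, bias) pairs; the observation operator is h(s, b) = s + b."""
    n = len(ens)
    hx = [s + b for s, b in ens]                    # predicted observations
    mx = sum(hx) / n
    ms = sum(s for s, _ in ens) / n
    mb = sum(b for _, b in ens) / n
    # ensemble covariances of state/bias with the predicted observation
    cov_s = sum((s - ms) * (h - mx) for (s, _), h in zip(ens, hx)) / (n - 1)
    cov_b = sum((b - mb) * (h - mx) for (_, b), h in zip(ens, hx)) / (n - 1)
    var_h = sum((h - mx) ** 2 for h in hx) / (n - 1)
    ks = cov_s / (var_h + obs_var)                  # Kalman gain, state part
    kb = cov_b / (var_h + obs_var)                  # Kalman gain, bias part
    return [(s + ks * (obs + random.gauss(0, obs_var ** 0.5) - h),
             b + kb * (obs + random.gauss(0, obs_var ** 0.5) - h))
            for (s, b), h in zip(ens, hx)]
```

After the update, the ensemble mean of state + bias is pulled toward the observation, while the partition between state and bias is set by their respective covariances with the innovation.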

  9. Development of flood probability charts for urban drainage network in coastal areas through a simplified joint assessment approach

    NASA Astrophysics Data System (ADS)

    Archetti, R.; Bolognesi, A.; Casadio, A.; Maglionico, M.

    2011-04-01

    The operating conditions of urban drainage networks during storm events certainly depend on the hydraulic conveying capacity of conduits, but also on downstream boundary conditions. This is particularly true in coastal areas, where the level of the receiving water body is directly or indirectly affected by tidal or wave effects. In such cases, not just different rainfall conditions (varying intensity and duration), but also different sea levels and their effects on the network operation should be considered. This paper aims to study the behaviour of a seaside town's storm sewer network, estimating the threshold condition for flooding and proposing a simplified method to assess the severity of urban flooding as a function of both climate variables. The case study is a portion of the drainage system of Rimini (Italy), implemented and numerically modelled by means of the InfoWorks CS code. The hydraulic simulation of the sewerage system made it possible to identify the percentage of nodes of the drainage system where flooding is expected to occur. Combining these percentages with the values of both climate variables led to the definition of charts representing the combined degree of risk "sea-rainfall" for the drainage system under investigation. A final comparison between such charts and the results obtained from a one-year sea-rainfall time series confirmed the reliability of the analysis.

  10. Development of flood probability charts for urban drainage network in coastal areas through a simplified joint assessment approach

    NASA Astrophysics Data System (ADS)

    Archetti, R.; Bolognesi, A.; Casadio, A.; Maglionico, M.

    2011-10-01

    The operating conditions of urban drainage networks during storm events depend on the hydraulic conveying capacity of conduits and also on downstream boundary conditions. This is particularly true in coastal areas where the level of the receiving water body is directly or indirectly affected by tidal or wave effects. In such cases, not just different rainfall conditions (varying intensity and duration), but also different sea-levels and their effects on the network operation should be considered. This paper aims to study the behaviour of a seaside town storm sewer network, estimating the threshold condition for flooding and proposing a simplified method to assess the urban flooding severity as a function of climate variables. The case study is a portion of the drainage system of Rimini (Italy), implemented and numerically modelled by means of InfoWorks CS code. The hydraulic simulation of the sewerage system identified the percentage of nodes of the drainage system where flooding is expected to occur. Combining these percentages with the values of both climate variables led to the definition of charts representing the combined degree of risk "rainfall-sea level" for the drainage system under investigation. A final comparison between such charts and the results obtained from a one-year rainfall-sea level time series demonstrated the reliability of the analysis.

  11. Impact and Cost-effectiveness of 3 Doses of 9-Valent Human Papillomavirus (HPV) Vaccine Among US Females Previously Vaccinated With 4-Valent HPV Vaccine.

    PubMed

    Chesson, Harrell W; Laprise, Jean-François; Brisson, Marc; Markowitz, Lauri E

    2016-06-01

    We estimated the potential impact and cost-effectiveness of providing 3 doses of nonavalent human papillomavirus (HPV) vaccine (9vHPV) to females aged 13-18 years who had previously completed a series of quadrivalent HPV vaccine (4vHPV), a strategy we refer to as "additional 9vHPV vaccination." We used 2 distinct models: (1) the simplified model, which is among the most basic of the published dynamic HPV models, and (2) the US HPV-ADVISE model, a complex, stochastic, individual-based transmission-dynamic model. When assuming no 4vHPV cross-protection, the incremental cost per quality-adjusted life-year (QALY) gained by additional 9vHPV vaccination was $146 200 in the simplified model and $108 200 in the US HPV-ADVISE model ($191 800 when assuming 4vHPV cross-protection). In 1-way sensitivity analyses in the scenario of no 4vHPV cross-protection, the simplified model results ranged from $70 300 to $182 000, and the US HPV-ADVISE model results ranged from $97 600 to $118 900. The average cost per QALY gained by additional 9vHPV vaccination exceeded $100 000 in both models. However, the results varied considerably in sensitivity and uncertainty analyses. Additional 9vHPV vaccination is likely not as efficient as many other potential HPV vaccination strategies, such as increasing primary 9vHPV vaccine coverage. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
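
The cost-per-QALY figures quoted above are incremental cost-effectiveness ratios: incremental cost of the added strategy divided by incremental QALYs gained. A minimal sketch, with illustrative numbers not taken from either model:

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) underlying
# the cost-per-QALY figures: (cost_new - cost_base) / (qaly_new - qaly_base).
# Inputs below are illustrative, not results from either HPV model.

def icer(cost_new, cost_base, qaly_new, qaly_base):
    """Incremental cost per QALY gained by the new strategy."""
    d_qaly = qaly_new - qaly_base
    if d_qaly <= 0:
        raise ValueError("new strategy gains no QALYs; ICER undefined")
    return (cost_new - cost_base) / d_qaly

# e.g. $150M of extra cost for 1,000 extra QALYs -> $150,000 per QALY
print(icer(150e6, 0.0, 1000.0, 0.0))
```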

  12. How much expert knowledge is it worth to put in conceptual hydrological models?

    NASA Astrophysics Data System (ADS)

    Antonetti, Manuel; Zappa, Massimiliano

    2017-04-01

    Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations of ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and invest most of their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed, qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since the modelling goal is most often exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results, due to the equifinality of hydrological models, overfitting problems and the numerous sources of uncertainty affecting runoff simulations. Therefore, to test the extent to which expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.

  13. Cloud computing can simplify HIT infrastructure management.

    PubMed

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  14. Sexing California gulls using morphometrics and discriminant function analysis

    USGS Publications Warehouse

    Herring, Garth; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Takekawa, John Y.

    2010-01-01

    A discriminant function analysis (DFA) model was developed with DNA sex verification so that external morphology could be used to sex 203 adult California Gulls (Larus californicus) in San Francisco Bay (SFB). The best model was 97% accurate and included head-to-bill length, culmen depth at the gonys, and wing length. Using an iterative process, the model was simplified to a single measurement (head-to-bill length) that still assigned sex correctly 94% of the time. A previous California Gull sex determination model developed for a population in Wyoming was then assessed by fitting SFB California Gull measurement data to the Wyoming model; this new model failed to converge on the same measurements as those originally used by the Wyoming model. Results from the SFB discriminant function model were compared to the Wyoming model results (by using SFB data with the Wyoming model); the SFB model was 7% more accurate for SFB California gulls. The simplified DFA model (head-to-bill length only) provided highly accurate results (94%) and minimized the measurements and time required to accurately sex California Gulls.
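
With a single measurement, equal priors and a pooled variance, linear discriminant analysis reduces to a cut-point halfway between the two group means. The sketch below illustrates that simplification; the head-to-bill measurements are fabricated for illustration, not data from the study.

```python
# Sketch of a one-variable discriminant rule like the simplified
# head-to-bill model: under equal priors and equal group variances,
# LDA reduces to a cut-point midway between the group means.
# Measurements below are illustrative, not the study's data.

def cutpoint(group_a, group_b):
    """LDA cut-point for one variable, equal priors and variances."""
    mean = lambda xs: sum(xs) / len(xs)
    return 0.5 * (mean(group_a) + mean(group_b))

def classify(x, cut, above_label="male", below_label="female"):
    """Assign the label of the group whose side of the cut x falls on."""
    return above_label if x >= cut else below_label

males   = [122.1, 124.0, 121.5, 123.2]   # head-to-bill length, mm (fake)
females = [112.8, 114.5, 113.0, 111.9]
cut = cutpoint(males, females)
print(cut, classify(118.5, cut), classify(117.0, cut))
```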

  15. Size and complexity in model financial systems.

    PubMed

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M

    2012-11-06

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases.
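
Of the three contagion channels the authors combine, the counterparty-credit one is the easiest to sketch: when a bank fails, its creditors write off their exposures, and any creditor whose capital buffer is exhausted fails in turn. The toy cascade below illustrates only that channel with fabricated balance sheets; liquidity hoarding, asset price contagion, and the confidence mechanism are omitted.

```python
# Toy default cascade in the spirit of the counterparty-credit channel:
# each bank has an equity buffer and interbank exposures; a failure
# imposes write-offs on creditors, which may fail in turn. Balance
# sheets are illustrative; the paper's other channels are omitted.

def cascade(capital, exposure, seed):
    """capital[i]: equity buffer of bank i; exposure[i][j]: amount bank i
    lent to bank j; seed: index of the initially failing bank.
    Returns the set of defaulted banks once losses stop propagating."""
    defaulted, frontier = {seed}, [seed]
    while frontier:
        failed = frontier.pop()
        for i in range(len(capital)):
            if i in defaulted:
                continue
            capital[i] -= exposure[i][failed]   # creditor writes off the loan
            if capital[i] <= 0:
                defaulted.add(i)
                frontier.append(i)
    return defaulted

capital = [4.0, 2.0, 3.0]
exposure = [[0.0, 0.0, 0.0],    # bank 0 lent nothing
            [3.0, 0.0, 0.0],    # bank 1 lent 3 to bank 0
            [1.0, 2.5, 0.0]]    # bank 2 lent 1 to bank 0, 2.5 to bank 1
print(sorted(cascade(capital, exposure, seed=0)))  # [0, 1, 2]: full cascade
```

Here bank 0's failure wipes out bank 1's thin buffer, and bank 1's failure then drags down bank 2, a small-scale analogue of the size and connectivity effects the abstract discusses.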

  16. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm suitable for implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.

  17. Simplified and refined structural modeling for economical flutter analysis and design

    NASA Technical Reports Server (NTRS)

    Ricketts, R. H.; Sobieszczanski, J.

    1977-01-01

    A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a relatively much smaller number of elements and degrees-of-freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of RM. The structural data are automatically transferred between the two models. The bulk of analysis is performed on the SM with periodical verifications carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.

  18. Multi-phase CFD modeling of solid sorbent carbon capture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, E. M.; DeCroix, D.; Breault, R.

    2013-07-01

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian–Eulerian and Eulerian–Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian–Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian–Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian–Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  19. Multi-Phase CFD Modeling of Solid Sorbent Carbon Capture System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; DeCroix, David; Breault, Ronald W.

    2013-07-30

    Computational fluid dynamics (CFD) simulations are used to investigate a low temperature post-combustion carbon capture reactor. The CFD models are based on a small scale solid sorbent carbon capture reactor design from ADA-ES and Southern Company. The reactor is a fluidized bed design based on a silica-supported amine sorbent. CFD models using both Eulerian-Eulerian and Eulerian-Lagrangian multi-phase modeling methods are developed to investigate the hydrodynamics and adsorption of carbon dioxide in the reactor. Models developed in both FLUENT® and BARRACUDA are presented to explore the strengths and weaknesses of state of the art CFD codes for modeling multi-phase carbon capture reactors. The results of the simulations show that the FLUENT® Eulerian-Lagrangian simulations (DDPM) are unstable for the given reactor design; while the BARRACUDA Eulerian-Lagrangian model is able to simulate the system given appropriate simplifying assumptions. FLUENT® Eulerian-Eulerian simulations also provide a stable solution for the carbon capture reactor given the appropriate simplifying assumptions.

  20. Characterization and calibration of a viscoelastic simplified potential energy clock model for inorganic glasses

    DOE PAGES

    Chambers, Robert S.; Tandon, Rajan; Stavig, Mark E.

    2015-07-07

    In this study, to analyze the stresses and strains generated during the solidification of glass-forming materials, stress and volume relaxation must be predicted accurately. Although the modeling attributes required to depict physical aging in organic glassy thermosets strongly resemble the structural relaxation in inorganic glasses, the historical modeling approaches have been distinctly different. To determine whether a common constitutive framework can be applied to both classes of materials, the nonlinear viscoelastic simplified potential energy clock (SPEC) model, developed originally for glassy thermosets, was calibrated for the Schott 8061 inorganic glass and used to analyze a number of tests. A practical methodology for material characterization and model calibration is discussed, and the structural relaxation mechanism is interpreted in the context of SPEC model constitutive equations. SPEC predictions compared to inorganic glass data collected from thermal strain measurements and creep tests demonstrate the ability to achieve engineering accuracy and make the SPEC model feasible for engineering applications involving a much broader class of glassy materials.
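
Calibration of viscoelastic models of this kind typically starts from relaxation data fitted to a Prony series. The sketch below shows that standard form, G(t) = G_inf + sum_i G_i exp(-t/tau_i); it is a generic building block, not the SPEC model itself, and the coefficients are illustrative.

```python
# Generic Prony-series relaxation modulus, a standard form used when
# calibrating viscoelastic models from relaxation/creep data:
# G(t) = G_inf + sum_i G_i * exp(-t / tau_i).
# Not the SPEC model; coefficients below are illustrative placeholders.

import math

def relaxation_modulus(t, g_inf, terms):
    """terms: list of (G_i, tau_i) Prony pairs; g_inf is the
    long-time equilibrium modulus."""
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in terms)

terms = [(10.0, 1.0), (5.0, 100.0)]   # moduli in GPa, times in s (fake)
for t in (0.0, 1.0, 100.0):
    print(f"G({t:g} s) = {relaxation_modulus(t, 2.0, terms):.3f} GPa")
```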

Top