Sample records for improved standard model

  1. Surgical Process Improvement: Impact of a Standardized Care Model With Electronic Decision Support to Improve Compliance With SCIP Inf-9.

    PubMed

    Cook, David J; Thompson, Jeffrey E; Suri, Rakesh; Prinsen, Sharon K

    2014-01-01

    The absence of standardization in surgical care processes, exemplified in a "solution shop" model, can lead to unwarranted variation, increased cost, and reduced quality. A comprehensive effort was undertaken to improve quality of care around indwelling bladder catheter use following surgery by creating a "focused factory" model within the cardiac surgical practice. Baseline compliance with Surgical Care Improvement Inf-9, removal of the urinary catheter by the end of surgical postoperative day 2, was determined. Comparison of baseline data to postintervention results showed clinically important reductions in the duration of indwelling bladder catheters as well as a marked reduction in practice variation. Following the intervention, Surgical Care Improvement Inf-9 guidelines were met in 97% of patients. Although the clinical quality improvement was notable, the process used to accomplish it (identification of patients suitable for standardized pathways, protocol application, and electronic systems to support the standardized practice model) has potentially greater relevance than the specific clinical results. © 2013 by the American College of Medical Quality.

  2. Big bang nucleosynthesis - The standard model and alternatives

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    The standard homogeneous-isotropic calculation of the big bang cosmological model is reviewed, and alternate models are discussed. The standard model is shown to agree with the light element abundances for He-4, H-2, He-3, and Li-7 that are available. Improved observational data from recent LEP collider and SLC results are discussed. The data agree with the standard model in terms of the number of neutrinos, and provide improved information regarding neutron lifetimes. Alternate models are reviewed which describe different scenarios for decaying matter or quark-hadron induced inhomogeneities. The baryonic density relative to the critical density in the alternate models is similar to that of the standard model when they are made to fit the abundances. This reinforces the conclusion that the baryonic density relative to critical density is about 0.06, and also reinforces the need for both nonbaryonic dark matter and dark baryonic matter.

  3. Naturalness of Electroweak Symmetry Breaking while Waiting for the LHC

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-06-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine-tuning problem of electroweak symmetry breaking in several scenarios beyond the Standard Model: SUSY, Little Higgs and "improved naturalness" models. The main conclusions are that: New Physics should appear within the reach of the LHC; some SUSY models can solve the hierarchy problem with acceptable residual tuning; Little Higgs models generically suffer from large tunings, often hidden; and, finally, that "improved naturalness" models do not generically improve the naturalness of the SM.

  4. Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model

    NASA Astrophysics Data System (ADS)

    Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.

    2017-12-01

    This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during the period of June through September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian landmass, it was found that 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the south-eastern peninsula, as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data, and it was found that at 0.25° resolution there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.

  5. Implementing Model-Check for Employee and Management Satisfaction

    NASA Technical Reports Server (NTRS)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality, but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required from the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.

  6. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    PubMed

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and that the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution, with the most improvement occurring in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
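
    The adjustment the abstract describes (removing standardized-patient rater effects from observed ratings by regression) can be illustrated with a minimal simulation. This is a generic sketch of OLS rater adjustment, not the authors' exact model; the sample sizes, effect magnitudes, and assignment scheme below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_raters, n_per = 200, 10, 3

true_score = rng.normal(70, 8, n_examinees)   # latent communication skill
severity = rng.normal(0, 4, n_raters)         # standardized-patient severity effects

# each examinee is rated by 3 randomly assigned standardized patients
ex_idx, ra_idx, ratings = [], [], []
for i in range(n_examinees):
    for j in rng.choice(n_raters, n_per, replace=False):
        ex_idx.append(i)
        ra_idx.append(j)
        ratings.append(true_score[i] + severity[j] + rng.normal(0, 3))
ex_idx, ra_idx = np.array(ex_idx), np.array(ra_idx)
ratings = np.array(ratings)

# OLS with dummy codes for examinees and raters; the minimum-norm
# least-squares solution handles the rank deficiency of the design
X = np.hstack([np.eye(n_examinees)[ex_idx], np.eye(n_raters)[ra_idx]])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
est_severity = coef[n_examinees:]             # recovered up to an additive constant

# adjusted rating: observed rating minus the estimated severity of its rater
adjusted = ratings - est_severity[ra_idx]

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

obs_mean = np.bincount(ex_idx, weights=ratings, minlength=n_examinees) / n_per
adj_mean = np.bincount(ex_idx, weights=adjusted, minlength=n_examinees) / n_per
```

    With these settings the adjusted examinee means track the latent scores more closely than the raw means, mirroring the reliability gain the study reports; the conditional (score-level) error analysis would additionally require a generalizability-theory variance decomposition on top of this.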

  7. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  8. Effect of Time Varying Gravity on DORIS processing for ITRF2013

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Lemoine, F. G.; Chinn, D. S.; Beall, J. W.; Melachroinos, S. A.; Beckley, B. D.; Pavlis, D.; Wimert, J.

    2013-12-01

    Computations are under way to develop a new time series of DORIS SINEX solutions to contribute to the development of the new realization of the terrestrial reference frame (cf. ITRF2013). One of the improvements envisaged is the application of improved models of time-variable gravity in the background orbit modeling. At GSFC we have developed a time series of spherical harmonics to degree and order 5 (using the GOCO02S model as a base), based on the processing of SLR and DORIS data to 14 satellites from 1993 to 2013. This is compared with the standard approach used in ITRF2008, based on the static model EIGEN-GL04S1, which included secular variations in only a few select coefficients. Previous work on altimeter satellite POD (cf. TOPEX/Poseidon, Jason-1, Jason-2) has shown that the standard model is not adequate and that orbit improvements are observed with the application of more detailed models of time-variable gravity. In this study, we quantify the impact of TVG modeling on DORIS satellite POD, and ascertain the impact on DORIS station positions estimated weekly from 1993 to 2013. The numerous recent improvements to SLR and DORIS processing at GSFC include more complete compliance with IERS2010 standards, improvements to SLR/DORIS measurement modeling, and improved non-conservative force modeling for DORIS satellites. These improvements will affect gravity coefficient estimates, POD, and the station solutions. Tests evaluate the impact of time-varying gravity on tracking data residuals, station consistency, and the geocenter and scale reference frame parameters.

  9. Air Quality Modeling | Air Quality Planning & Standards | US ...

    EPA Pesticide Factsheets

    2016-06-08

    The basic mission of the Office of Air Quality Planning and Standards is to preserve and improve the quality of our nation's air. One facet of accomplishing this goal requires that new and existing air pollution sources be modeled for compliance with the National Ambient Air Quality Standards (NAAQS).

  10. Creating Better Child Care Jobs: Model Work Standards for Teaching Staff in Center-Based Child Care.

    ERIC Educational Resources Information Center

    Center for the Child Care Workforce, Washington, DC.

    This document presents model work standards articulating components of the child care center-based work environment that enable teachers to do their jobs well. These standards establish criteria to assess child care work environments and identify areas to improve in order to assure good jobs for adults and good care for children. The standards are…

  11. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    NASA Astrophysics Data System (ADS)

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended to design optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
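
    The flavor of this kind of search can be sketched with a toy standard GA over binary stimulation patterns. Everything here is illustrative: the fitness function is an invented surrogate (the study scored candidate patterns with biophysical neuron models), and truncation selection with elitism stands in for the paper's operators.

```python
import random

random.seed(1)
PATTERN_LEN = 50   # 50 time bins; a 1 means a stimulus pulse in that bin

def fitness(pattern):
    # invented surrogate objective: reward ~20 pulses, penalize adjacent pulses
    pulses = sum(pattern)
    adjacency = sum(a and b for a, b in zip(pattern, pattern[1:]))
    return -abs(pulses - 20) - 2 * adjacency

def mutate(p, rate=0.02):
    # independent bit flips
    return [bit ^ (random.random() < rate) for bit in p]

def crossover(a, b):
    # single-point crossover
    cut = random.randrange(1, PATTERN_LEN)
    return a[:cut] + b[cut:]

def run_ga(pop_size=60, generations=150, n_elite=10):
    pop = [[random.randint(0, 1) for _ in range(PATTERN_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:n_elite]                       # elitism
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - n_elite)]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
```

    The best achievable fitness here is 0 (exactly 20 pulses, none adjacent); the GA converges close to it in a few dozen generations, and this is the baseline behavior the paper's modifications aim to accelerate.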

  12. Research on Generating Method of Embedded Software Test Document Based on Dynamic Model

    NASA Astrophysics Data System (ADS)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper presents a dynamic-model-based test document generation method for embedded software that provides automatic generation of two documents: the test requirements specification and the configuration item test document. The method enables dynamic test requirements to be captured in dynamic models, so that dynamic test-demand tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistency and incompleteness in document-related content, and improves efficiency.

  13. Status of the AIAA Modeling and Simulation Format Standard

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2008-01-01

    The current draft AIAA Standard for flight simulation models represents an on-going effort to improve the productivity of practitioners of the art of digital flight simulation (one of the original digital computer applications). This initial release provides the capability for the efficient representation and exchange of an aerodynamic model in full fidelity; the DAVE-ML format can be easily imported (with development of site-specific import tools) in an unambiguous way with automatic verification. An attractive feature of the standard is the ability to coexist with existing legacy software or tools. The draft Standard is currently limited in scope to static elements of dynamic flight simulations; however, these static elements represent the bulk of typical flight simulation mathematical models. It is already seeing application within U.S. and Australian government agencies in an effort to improve productivity and reduce model rehosting overhead. An existing tool allows import of DAVE-ML models into a popular simulation modeling and analysis tool, and other community-contributed tools and libraries can simplify the use of DAVE-ML compliant models at compile- or run-time of high-fidelity flight simulation.

  14. 77 FR 27814 - Model Safety Evaluation for Plant-Specific Adoption of Technical Specifications Task Force...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-11

    ... NUCLEAR REGULATORY COMMISSION [Project No. 753; NRC-2012-0019] Model Safety Evaluation for Plant... Regulatory Commission (NRC) is announcing the availability of the model safety evaluation (SE) for plant... the Improved Standard Technical Specification (ISTS), NUREG-1431, ``Standard Technical Specifications...

  15. Microgravity Electron Electric Dipole Moment Experiment with a Cold Atom Beam

    NASA Technical Reports Server (NTRS)

    Gould, Harvey

    2003-01-01

    New physics beyond the Standard Model: The small CP violation contained in the Standard Model is insufficient to account for the baryon/antibaryon asymmetry in the universe. New sources of CP violation are provided by extensions to the Standard Model. They contain CP-violating phases that couple directly to leptons and from which a large electron electric dipole moment (EDM) may be generated. Observation of an electron EDM would be proof of a Standard Model extension, because the Standard Model only allows an electron EDM of less than 10^-57 C-m (SI units; 1 e-cm = 1.6 x 10^-21 C-m). A null result, however, constrains models, and improving the limit tightens constraints, further restricting the models.
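
    The unit conversion behind the quoted limit is quick to verify: one elementary charge times one centimetre is about 1.6 × 10^-21 C·m, so a Standard Model bound of 10^-57 C·m corresponds to roughly 6 × 10^-37 e·cm. A one-line check:

```python
e = 1.602176634e-19            # elementary charge in coulombs (exact in SI since 2019)
e_cm_in_Cm = e * 1e-2          # one e*cm expressed in C*m
limit_e_cm = 1e-57 / e_cm_in_Cm  # the 1e-57 C*m Standard Model bound, in e*cm
```
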

  16. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation keep adjusting their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation to aid the study of microcirculatory physiology. Until now, however, the model has lacked appropriate methods for setting its parameter values, which has limited further applications. This study proposed an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values in this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to the standard particle swarm optimization, the standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to a better agreement between mathematical simulation and animal experiment, rendering the model more reliable in future physiological studies.
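
    For readers unfamiliar with QPSO, below is a compact sketch of the standard algorithm (the baseline the authors improve on), run on a generic sphere objective rather than the microcirculatory adaptation model. The contraction-expansion coefficient beta, swarm size, and bounds are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    # stand-in objective; the paper instead scored parameter sets by the
    # mismatch between simulated and measured microvascular networks
    return np.sum(x**2, axis=-1)

def qpso(f, dim=5, n_particles=30, iters=300, beta=0.75, bounds=(-10.0, 10.0)):
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()                    # personal best positions
    pbest_val = f(x)
    for _ in range(iters):
        gbest = pbest[np.argmin(pbest_val)]      # global best
        mbest = pbest.mean(axis=0)               # mean of personal bests
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1 - phi) * gbest   # local attractors
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        # quantum-behaved position update around the attractor
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        val = f(x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
    i = np.argmin(pbest_val)
    return pbest[i], pbest_val[i]

best_x, best_val = qpso(sphere)
```

    The paper's improvements modify this baseline (and were compared against it and standard PSO); the sketch only shows the mechanics of the quantum-behaved update.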

  17. Improving automation standards via semantic modelling: Application to ISA88.

    PubMed

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and to support improvements in that consistency. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency-checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. The effect of the Mihalas, Hummer, and Daeppen equation of state and the molecular opacity on the standard solar model

    NASA Technical Reports Server (NTRS)

    Kim, Y.-C.; Demarque, P.; Guenther, D. B.

    1991-01-01

    Improvements have been made to the Yale Rotating Stellar Evolution Code (YREC) by incorporating the Mihalas-Hummer-Daeppen equation of state, an improved opacity interpolation routine, and the effects of molecular opacities calculated at Los Alamos. The effect of each of the improvements on the standard solar model has been tested independently by computing the corresponding solar nonradial oscillation frequencies. According to these tests, the Mihalas-Hummer-Daeppen equation of state has very little effect on the model's low l p-mode oscillation spectrum compared to the model using the existing analytical equation of state implemented in YREC. On the other hand, the molecular opacity does improve the model's oscillation spectrum. The effect of molecular opacity on the computed solar oscillation frequencies is much larger than that of the Mihalas-Hummer-Daeppen equation of state. Together, the two improvements to the physics reduce the discrepancy with observations by 10 microHz for the low l modes.

  19. Using the Modification Index and Standardized Expected Parameter Change for Model Modification

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.

    2012-01-01

    Model modification is oftentimes conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve the fit. The purpose of this…

  20. The Effect of ISO 9001 and the EFQM Model on Improving Hospital Performance: A Systematic Review.

    PubMed

    Yousefinezhadi, Taraneh; Mohamadi, Efat; Safari Palangi, Hossein; Akbari Sari, Ali

    2015-12-01

    This study aimed to explore the effect of the International Organization for Standardization (ISO) ISO 9001 standard and the European foundation for quality management (EFQM) model on improving hospital performance. PubMed, Embase and the Cochrane Library databases were searched. In addition, Elsevier and Springer were searched as main publishers in the field of health sciences. We included empirical studies with any design that had used ISO 9001 or the EFQM model to improve the quality of healthcare. Data were collected and tabulated into a data extraction sheet that was specifically designed for this study. The collected data included authors' names, country, year of publication, intervention, improvement aims, setting, length of program, study design, and outcomes. Seven out of the 121 studies that were retrieved met the inclusion criteria. Three studies assessed the EFQM model and four studies assessed the ISO 9001 standard. Use of the EFQM model increased the degree of patient satisfaction and the number of hospital admissions and reduced the average length of stay, the delay on the surgical waiting list, and the number of emergency re-admissions. ISO 9001 also increased the degree of patient satisfaction and patient safety, increased cost-effectiveness, improved the hospital admissions process, and reduced the percentage of unscheduled returns to the hospital. Generally, there is a lack of robust and high quality empirical evidence regarding the effects of ISO 9001 and the EFQM model on the quality care provided by and the performance of hospitals. However, the limited evidence shows that ISO 9001 and the EFQM model might improve hospital performance.

  1. Boosting drug named entity recognition using an aggregate classifier.

    PubMed

    Korkontzelos, Ioannis; Piliouras, Dimitrios; Dowsey, Andrew W; Ananiadou, Sophia

    2015-10-01

    Drug named entity recognition (NER) is a critical step for complex biomedical NLP tasks such as the extraction of pharmacogenomic, pharmacodynamic and pharmacokinetic parameters. Large quantities of high quality training data are almost always a prerequisite for employing supervised machine-learning techniques to achieve high classification performance. However, the human labour needed to produce and maintain such resources is a significant limitation. In this study, we improve the performance of drug NER without relying exclusively on manual annotations. We perform drug NER using either a small gold-standard corpus (120 abstracts) or no corpus at all. In our approach, we develop a voting system to combine a number of heterogeneous models, based on dictionary knowledge, gold-standard corpora and silver annotations, to enhance performance. To improve recall, we employed genetic programming to evolve 11 regular-expression patterns that capture common drug suffixes and used them as an extra means for recognition. Our approach uses a dictionary of drug names, i.e. DrugBank, a small manually annotated corpus, i.e. the pharmacokinetic corpus, and a part of the UKPMC database, as raw biomedical text. Gold-standard and silver annotated data are used to train maximum entropy and multinomial logistic regression classifiers. Aggregating drug NER methods, based on gold-standard annotations, dictionary knowledge and patterns, improved the performance on models trained on gold-standard annotations, only, achieving a maximum F-score of 95%. In addition, combining models trained on silver annotations, dictionary knowledge and patterns are shown to achieve comparable performance to models trained exclusively on gold-standard data. The main reason appears to be the morphological similarities shared among drug names. We conclude that gold-standard data are not a hard requirement for drug NER. 
Combining heterogeneous models built on dictionary knowledge can achieve classification performance similar or comparable to that of the best-performing model trained on gold-standard annotations. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
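
    The voting idea (aggregating dictionary, pattern-based, and statistically trained recognizers) can be sketched at token level. Everything below is a toy: the dictionary excerpt, the suffix patterns, and the stand-in "trained" tagger are invented for illustration, and a real system such as the paper's votes over character spans with learned weights rather than single whole tokens.

```python
import re

DRUG_DICT = {"aspirin", "ibuprofen", "warfarin"}        # toy stand-in for DrugBank
SUFFIX_RE = re.compile(r"(?:mycin|prazole|statin|cillin)$", re.IGNORECASE)

def dict_tagger(tokens):
    # dictionary-knowledge recognizer
    return [t.lower() in DRUG_DICT for t in tokens]

def pattern_tagger(tokens):
    # suffix patterns (the paper evolved such regexes with genetic programming)
    return [bool(SUFFIX_RE.search(t)) for t in tokens]

def context_tagger(tokens):
    # crude stand-in for a statistically trained classifier
    return [i + 1 < len(tokens) and tokens[i + 1] == "therapy" and t[0].isupper()
            for i, t in enumerate(tokens)]

def vote(tokens, taggers, threshold=2):
    # a token is a drug mention if at least `threshold` recognizers agree
    tag_lists = [tagger(tokens) for tagger in taggers]
    return [t for t, votes in zip(tokens, zip(*tag_lists))
            if sum(votes) >= threshold]

tokens = "The patient began Atorvastatin therapy and took aspirin daily".split()
recognized = vote(tokens, [dict_tagger, pattern_tagger, context_tagger])
```

    With threshold 2, "Atorvastatin" is recognized from the suffix and context votes even though it is absent from the toy dictionary; lowering the threshold to 1 also recovers "aspirin" from the dictionary alone, at the cost of precision. Tuning that trade-off with gold- and silver-annotated data is where the paper's contribution lies.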

  2. Economic analysis of tree improvement: A status report

    Treesearch

    George F. Dutrow

    1974-01-01

    Review of current literature establishes that most authors believe that tree improvement expands production, although some point out drawbacks and alternatives. Both softwood and hardwood improvement programs have been analyzed. The authors used various models, economic assumptions, and standards of measurement, but available data were limited. Future models should...

  3. Finite element based model predictive control for active vibration suppression of a one-link flexible manipulator.

    PubMed

    Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan

    2014-09-01

    This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators. In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with using a standard predictive control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Robotic virtual reality simulation plus standard robotic orientation versus standard robotic orientation alone: a randomized controlled trial.

    PubMed

    Vaccaro, Christine M; Crisp, Catrina C; Fellner, Angela N; Jackson, Christopher; Kleeman, Steven D; Pavelka, James

    2013-01-01

    The objective of this study was to compare the effect of virtual reality simulation training plus robotic orientation versus robotic orientation alone on performance of surgical tasks using an inanimate model. Surgical resident physicians were enrolled in this assessor-blinded randomized controlled trial. Residents were randomized to receive either (1) robotic virtual reality simulation training plus standard robotic orientation or (2) standard robotic orientation alone. Performance of surgical tasks was assessed at baseline and after the intervention. Nine of 33 modules from the da Vinci Skills Simulator were chosen. Experts in robotic surgery evaluated each resident's videotaped performance of the inanimate model using the Global Rating Scale (GRS) and Objective Structured Assessment of Technical Skills-modified for robotic-assisted surgery (rOSATS). Nine resident physicians were enrolled in the simulation group and 9 in the control group. As a whole, participants improved their total time, time to incision, and suture time from baseline to repeat testing on the inanimate model (P = 0.001, 0.003, <0.001, respectively). Both groups improved their GRS and rOSATS scores significantly (both P < 0.001); however, the GRS overall pass rate was higher in the simulation group compared with the control group (89% vs 44%, P = 0.066). Standard robotic orientation and/or robotic virtual reality simulation improve surgical skills on an inanimate model, although this may be a function of the initial "practice" on the inanimate model and repeat testing of a known task. However, robotic virtual reality simulation training increases GRS pass rates consistent with improved robotic technical skills learned in a virtual reality environment.

  5. Connecting dark matter annihilation to the vertex functions of Standard Model fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jason; Light, Christopher, E-mail: jkumar@hawaii.edu, E-mail: lightc@hawaii.edu

    We consider scenarios in which dark matter is a Majorana fermion which couples to Standard Model fermions through the exchange of charged mediating particles. The matrix elements for various dark matter annihilation processes are then related to one-loop corrections to the fermion-photon vertex, where dark matter and the charged mediators run in the loop. In particular, in the limit where Standard Model fermion helicity mixing is suppressed, the cross section for dark matter annihilation to various final states is related to corrections to the Standard Model fermion charge form factor. These corrections can be extracted in a gauge-invariant manner from collider cross sections. Although current measurements from colliders are not precise enough to provide useful constraints on dark matter annihilation, improved measurements at future experiments, such as the International Linear Collider, could improve these constraints by several orders of magnitude, allowing them to surpass the limits obtainable by direct observation.

  6. Teacher Leader Model Standards: Implications for Preparation, Policy, and Practice

    ERIC Educational Resources Information Center

    Berg, Jill Harrison; Carver, Cynthia L.; Mangin, Melinda M.

    2014-01-01

    Teacher leadership is increasingly recognized as a resource for instructional improvement. Consequently, teacher leader initiatives have expanded rapidly despite limited knowledge about how to prepare and support teacher leaders. In this context, the "Teacher Leader Model Standards" represent an important development in the field. In…

  7. The Effect of ISO 9001 and the EFQM Model on Improving Hospital Performance: A Systematic Review

    PubMed Central

    Yousefinezhadi, Taraneh; Mohamadi, Efat; Safari Palangi, Hossein; Akbari Sari, Ali

    2015-01-01

    Context: This study aimed to explore the effect of the International Organization for Standardization (ISO) ISO 9001 standard and the European foundation for quality management (EFQM) model on improving hospital performance. Evidence Acquisition: PubMed, Embase and the Cochrane Library databases were searched. In addition, Elsevier and Springer were searched as main publishers in the field of health sciences. We included empirical studies with any design that had used ISO 9001 or the EFQM model to improve the quality of healthcare. Data were collected and tabulated into a data extraction sheet that was specifically designed for this study. The collected data included authors’ names, country, year of publication, intervention, improvement aims, setting, length of program, study design, and outcomes. Results: Seven out of the 121 studies that were retrieved met the inclusion criteria. Three studies assessed the EFQM model and four studies assessed the ISO 9001 standard. Use of the EFQM model increased the degree of patient satisfaction and the number of hospital admissions and reduced the average length of stay, the delay on the surgical waiting list, and the number of emergency re-admissions. ISO 9001 also increased the degree of patient satisfaction and patient safety, increased cost-effectiveness, improved the hospital admissions process, and reduced the percentage of unscheduled returns to the hospital. Conclusions: Generally, there is a lack of robust and high quality empirical evidence regarding the effects of ISO 9001 and the EFQM model on the quality care provided by and the performance of hospitals. However, the limited evidence shows that ISO 9001 and the EFQM model might improve hospital performance. PMID:26756012

  8. Testing the standard model by precision measurement of the weak charges of quarks.

    PubMed

    Young, R D; Carlini, R D; Thomas, A W; Roche, J

    2007-09-21

In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, places tight constraints on the size of possible contributions from physics beyond the standard model. Consequently, this result improves the lower bound on the scale of relevant new physics to approximately 1 TeV.

  9. Sodium Nitroprusside Enhanced Cardiopulmonary Resuscitation Improves Short Term Survival in a Porcine Model of Ischemic Refractory Ventricular Fibrillation

    PubMed Central

    Yannopoulos, Demetris; Bartos, Jason A.; George, Stephen A.; Sideris, George; Voicu, Sebastian; Oestreich, Brett; Matsuura, Timothy; Shekar, Kadambari; Rees, Jennifer; Aufderheide, Tom P.

    2017-01-01

Introduction: Sodium nitroprusside (SNP) enhanced CPR (SNPeCPR) demonstrates increased vital organ blood flow and survival in multiple porcine models. We developed a new, coronary occlusion/ischemia model of prolonged resuscitation, mimicking the majority of out-of-hospital cardiac arrests presenting with shockable rhythms. Hypothesis: SNPeCPR will increase short-term (4-hour) survival compared to standard 2015 Advanced Cardiac Life Support (ACLS) guidelines in an ischemic refractory ventricular fibrillation (VF), prolonged CPR model. Methods: Sixteen anesthetized pigs had the ostial left anterior descending artery occluded, leading to ischemic VF arrest. VF was untreated for 5 minutes. Basic life support was performed for 10 minutes. At minute 10 (EMS arrival), animals received either SNPeCPR (n=8) or standard ACLS (n=8). Defibrillation (200 J) occurred every 3 minutes. CPR continued for a total of 45 minutes, then the balloon was deflated, simulating revascularization. CPR continued until return of spontaneous circulation (ROSC) or a total of 60 minutes, if unsuccessful. SNPeCPR animals received 2 mg of SNP at minute 10 followed by 1 mg every 5 minutes until ROSC. Standard ACLS animals received 0.5 mg epinephrine every 5 minutes until ROSC. Primary endpoints were ROSC and 4-hour survival. Results: All SNPeCPR animals (8/8) achieved sustained ROSC versus 2/8 standard ACLS animals within one hour of resuscitation (p=0.04). The 4-hour survival was significantly improved with SNPeCPR compared to standard ACLS, 7/8 versus 1/8 respectively, p=0.0019. Conclusion: SNPeCPR significantly improved ROSC and 4-hour survival compared with standard ACLS CPR in a porcine model of prolonged ischemic, refractory VF cardiac arrest. PMID:27771299

  11. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. 
This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
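    The preprocessing stage described above (high-pass filtering of the spectrogram with frequency-dependent time constants, followed by half-wave rectification) can be sketched as follows. This is an illustrative sketch only: the exponential-moving-average filter, sampling step, and time-constant values are our assumptions, not the paper's fitted parameters.

```python
def ic_adaptation(spectrogram, taus, dt=0.005):
    """Hypothetical sketch of the IC Adaptation front end: each frequency
    channel is high-pass filtered by subtracting a running (exponentially
    weighted) estimate of its mean level, then half-wave rectified.
    `spectrogram` is a list of channels, each a list of levels over time;
    `taus` gives an assumed adaptation time constant per channel."""
    out = []
    for channel, tau in zip(spectrogram, taus):
        alpha = dt / tau                # EMA weight for this channel
        mean = channel[0]               # initialise the mean estimate
        filtered = []
        for x in channel:
            mean += alpha * (x - mean)  # adapt estimate of mean sound level
            filtered.append(max(0.0, x - mean))  # high-pass + rectification
        out.append(filtered)
    return out  # this output would feed a standard linear-nonlinear stage
```

A constant-level input is adapted away (output decays to zero), while a step increase in level produces a transient response, matching the qualitative behavior the model attributes to the inferior colliculus.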

  12. Modeling Major Adverse Outcomes of Pediatric and Adult Patients With Congenital Heart Disease Undergoing Cardiac Catheterization: Observations From the NCDR IMPACT Registry (National Cardiovascular Data Registry Improving Pediatric and Adult Congenital Treatment).

    PubMed

    Jayaram, Natalie; Spertus, John A; Kennedy, Kevin F; Vincent, Robert; Martin, Gerard R; Curtis, Jeptha P; Nykanen, David; Moore, Phillip M; Bergersen, Lisa

    2017-11-21

    Risk standardization for adverse events after congenital cardiac catheterization is needed to equitably compare patient outcomes among different hospitals as a foundation for quality improvement. The goal of this project was to develop a risk-standardization methodology to adjust for patient characteristics when comparing major adverse outcomes in the NCDR's (National Cardiovascular Data Registry) IMPACT Registry (Improving Pediatric and Adult Congenital Treatment). Between January 2011 and March 2014, 39 725 consecutive patients within IMPACT undergoing cardiac catheterization were identified. Given the heterogeneity of interventional procedures for congenital heart disease, new procedure-type risk categories were derived with empirical data and expert opinion, as were markers of hemodynamic vulnerability. A multivariable hierarchical logistic regression model to identify patient and procedural characteristics predictive of a major adverse event or death after cardiac catheterization was derived in 70% of the cohort and validated in the remaining 30%. The rate of major adverse event or death was 7.1% and 7.2% in the derivation and validation cohorts, respectively. Six procedure-type risk categories and 6 independent indicators of hemodynamic vulnerability were identified. The final risk adjustment model included procedure-type risk category, number of hemodynamic vulnerability indicators, renal insufficiency, single-ventricle physiology, and coagulation disorder. The model had good discrimination, with a C-statistic of 0.76 and 0.75 in the derivation and validation cohorts, respectively. Model calibration in the validation cohort was excellent, with a slope of 0.97 (standard error, 0.04; P value [for difference from 1] =0.53) and an intercept of 0.007 (standard error, 0.12; P value [for difference from 0] =0.95). 
The creation of a validated risk-standardization model for adverse outcomes after congenital cardiac catheterization can support reporting of risk-adjusted outcomes in the IMPACT Registry as a foundation for quality improvement. © 2017 American Heart Association, Inc.
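    As an illustration of the discrimination statistic quoted above, the C-statistic is the probability that a randomly chosen patient who had the adverse event received a higher predicted risk than a randomly chosen patient who did not (ties counting one half). The function name and toy inputs below are ours, not IMPACT Registry code or data.

```python
def c_statistic(risks, outcomes):
    """Concordance (C-statistic) for binary outcomes: fraction of
    event/non-event pairs in which the event case has the higher
    predicted risk; tied predictions contribute 0.5."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = concordant = 0.0
    for e in events:
        for n in nonevents:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination, so the 0.75-0.76 reported for the registry model sits in the "good" range.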

  13. Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Lawrence, Scott L.

    2014-01-01

    As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model's range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

  14. Increasing EHR system usability through standards: Conformance criteria in the HL7 EHR-system functional model.

    PubMed

    Meehan, Rebecca A; Mon, Donald T; Kelly, Kandace M; Rocca, Mitra; Dickinson, Gary; Ritter, John; Johnson, Constance M

    2016-10-01

Though substantial work has been done on the usability of health information technology, improvements in electronic health record (EHR) system usability have been slow, creating frustration, distrust of EHRs, and the use of potentially unsafe work-arounds. Usability standards could be part of the solution for improving EHR usability. EHR system functional requirements and standards have been used successfully in the past to specify system behavior, the criteria of which have been gradually implemented in EHR systems through certification programs and other national health IT strategies. Similarly, functional requirements and standards for usability can help address the multitude of sequelae associated with poor usability. This paper describes the evidence-based functional requirements for usability contained in the Health Level Seven (HL7) EHR System Functional Model, and the benefits of open and voluntary EHR system usability standards. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. MASTER: a model to improve and standardize clinical breakpoints for antimicrobial susceptibility testing using forecast probabilities.

    PubMed

    Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael

    2017-09-01

    The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. 
Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
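    The categorization-error idea above can be sketched numerically: if an observed zone diameter equals the true diameter plus independent methodological noise, then the probability that a resistant-subpopulation isolate is misread as susceptible at a given clinical breakpoint follows from a normal tail probability. All parameter values below are hypothetical, not the fitted E. coli distributions from the paper.

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def misclassification_rate(cbp_s, mu_r, sd_r, sigma_method):
    """Illustrative sketch: the observed diameter of a resistant-population
    isolate is normal with mean `mu_r` and variance sd_r**2 + sigma_method**2
    (population spread plus methodological variation).  The error rate is the
    probability that this observation lands at or above the susceptible
    clinical breakpoint diameter `cbp_s`."""
    sd_obs = math.sqrt(sd_r**2 + sigma_method**2)
    return 1.0 - norm_cdf(cbp_s, mu_r, sd_obs)
```

Raising the susceptible breakpoint diameter, or declaring diameters near the crossover a zone of methodological uncertainty, drives this tail probability down, which is the mechanism the model exploits to push error rates below 0.1%.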

  16. Improving atomic force microscopy imaging by a direct inverse asymmetric PI hysteresis model.

    PubMed

    Wang, Dong; Yu, Peng; Wang, Feifei; Chan, Ho-Yin; Zhou, Lei; Dong, Zaili; Liu, Lianqing; Li, Wen Jung

    2015-02-03

A modified Prandtl-Ishlinskii (PI) model, referred to as a direct inverse asymmetric PI (DIAPI) model in this paper, was implemented to reduce the displacement error between the predicted and actual trajectories of a piezoelectric actuator, a component commonly found in AFM systems. Due to the nonlinearity of the piezoelectric actuator, the standard symmetric PI model cannot precisely describe the asymmetric motion of the actuator. In order to improve the accuracy of AFM scans, two series of slope parameters were introduced in the PI model to describe both the voltage-increase loop (trace) and the voltage-decrease loop (retrace). A feedforward controller based on the DIAPI model was implemented to compensate for hysteresis. Performance of the DIAPI model and the feedforward controller were validated by scanning micro-lenses and a standard silicon grating using a custom-built AFM.
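    A minimal sketch of an asymmetric Prandtl-Ishlinskii structure in the spirit of the DIAPI model: a weighted sum of play (backlash) operators whose slope weights switch between a trace (voltage-increase) set and a retrace (voltage-decrease) set. The thresholds and weights below are illustrative assumptions; the paper identifies them from measured actuator data.

```python
def asymmetric_pi(voltages, thresholds, w_trace, w_retrace):
    """Asymmetric PI hysteresis sketch: output is a weighted sum of play
    operators with half-width `thresholds[i]`, where the slope weights
    depend on whether the input voltage is increasing (trace) or
    decreasing (retrace)."""
    states = [0.0] * len(thresholds)     # play-operator memory
    prev = voltages[0]
    out = []
    for v in voltages:
        weights = w_trace if v >= prev else w_retrace  # the asymmetry
        y = 0.0
        for i, r in enumerate(thresholds):
            # play (backlash) operator update
            states[i] = max(v - r, min(v + r, states[i]))
            y += weights[i] * states[i]
        out.append(y)
        prev = v
    return out
```

With a zero threshold the model degenerates to a linear gain, while nonzero thresholds produce the path-dependent (hysteretic) output that the inverse model then cancels in feedforward.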

  17. Charge to Road Map Development Sessions

    NASA Technical Reports Server (NTRS)

    Barth, Janet

    2004-01-01

Develop a road map for new standard radiation belt models. Model applications: spacecraft and instruments. Goals: reduce risk, reduce cost, improve performance, increase system lifetime, and reduce risk to astronauts.

  18. DSSTOX WEBSITE LAUNCH: IMPROVING PUBLIC ACCESS TO DATABASES FOR BUILDING STRUCTURE-TOXICITY PREDICTION MODELS

    EPA Science Inventory

    DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
    Ann M. Richard
    US Environmental Protection Agency, Research Triangle Park, NC, USA

    Distributed: Decentralized set of standardized, field-delimited databases,...

  19. Business School's Performance Management System Standards Design

    ERIC Educational Resources Information Center

    Azis, Anton Mulyono; Simatupang, Togar M.; Wibisono, Dermawan; Basri, Mursyid Hasan

    2014-01-01

This paper aims to compare various Performance Management Systems (PMS) for business schools in order to find the strengths of each standard as inputs to design a new model of PMS. Many critical aspects and gaps were identified for the new model to improve performance, and it was even recognized that self-evaluation performance management is not well…

  20. Does attainment of Piaget's formal operational level of cognitive development predict student understanding of scientific models?

    NASA Astrophysics Data System (ADS)

    Lahti, Richard Dennis, II

    Knowledge of scientific models and their uses is a concept that has become a key benchmark in many of the science standards of the past 30 years, including the proposed Next Generation Science Standards. Knowledge of models is linked to other important nature of science concepts such as theory change which are also rising in prominence in newer standards. Effective methods of instruction will need to be developed to enable students to achieve these standards. The literature reveals an inconsistent history of success with modeling education. These same studies point to a possible cognitive development component which might explain why some students succeeded and others failed. An environmental science course, rich in modeling experiences, was used to test both the extent to which knowledge of models and modeling could be improved over the course of one semester, and more importantly, to identify if cognitive ability was related to this improvement. In addition, nature of science knowledge, particularly related to theories and theory change, was also examined. Pretest and posttest results on modeling (SUMS) and nature of science (SUSSI), as well as data from the modeling activities themselves, was collected. Cognitive ability was measured (CTSR) as a covariate. Students' gain in six of seven categories of modeling knowledge was at least medium (Cohen's d >.5) and moderately correlated to CTSR for two of seven categories. Nature of science gains were smaller, although more strongly correlated with CTSR. Student success at creating a model was related to CTSR, significantly in three of five sub-categories. These results suggest that explicit, reflective experience with models can increase student knowledge of models and modeling (although higher cognitive ability students may have more success), but successfully creating models may depend more heavily on cognitive ability. 
This finding in particular has implications in the grade placement of modeling standards and curriculum chosen to help these students, particularly those with low cognitive ability, to meet the standards.

  1. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

The Interacting Multiple Model (IMM) estimator has been proven effective in tracking agile targets. Smoothing, or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method is proposed that involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses a standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies demonstrate the improved performance with a maneuvering target scenario, and comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique for accounting for model switching during smoothing is key to improving the performance.
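    The per-mode building block of such a smoother, a forward Kalman filter followed by a backward Rauch-Tung-Griebel-style (RTS) smoothing recursion, can be sketched for a scalar random-walk model. The IMM mode-mixing logic itself is omitted here, and the random-walk dynamics are an assumption made for brevity, not the paper's target model.

```python
def kalman_rts(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman forward filter + Rauch-Tung-Striebel backward smoother
    for random-walk dynamics x[t+1] = x[t] + w (process variance q) observed
    as z[t] = x[t] + v (measurement variance r).  Returns the filtered and
    the smoothed state sequences; an IMM smoother would run one such
    mode-matched pair per motion model and mix the results."""
    xs, ps, xpreds, ppreds = [], [], [], []
    x, p = x0, p0
    for z in zs:                       # --- forward filtering pass ---
        xp, pp = x, p + q              # predict (state transition F = 1)
        k = pp / (pp + r)              # Kalman gain
        x = xp + k * (z - xp)          # measurement update
        p = (1 - k) * pp
        xs.append(x); ps.append(p); xpreds.append(xp); ppreds.append(pp)
    xs_s = xs[:]                       # --- backward smoothing pass ---
    for t in range(len(zs) - 2, -1, -1):
        g = ps[t] / ppreds[t + 1]      # smoother gain
        xs_s[t] = xs[t] + g * (xs_s[t + 1] - xpreds[t + 1])
    return xs, xs_s
```

The backward pass revises early estimates using later measurements, which is why smoothed estimates are better but delayed, exactly the trade-off the abstract describes.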

  2. Search for Muonic Dark Forces at BABAR

    NASA Astrophysics Data System (ADS)

    Godang, Romulus

    2017-04-01

Many models of physics beyond the Standard Model predict the existence of light Higgs states, dark photons, and new gauge bosons mediating interactions between dark sectors and the Standard Model. Using a full data sample collected with the BABAR detector at the PEP-II e+e- collider, we report searches for a light non-Standard Model Higgs boson, a dark photon, and a new muonic dark force mediated by a gauge boson (Z') coupling only to the second and third lepton families. Our results significantly improve upon the current bounds and further constrain the remaining region of the allowed parameter space.

  3. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    PubMed

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  4. Use of Statechart Assertions for Modeling Human-in-the-Loop Security Analysis and Decision-Making Processes

    DTIC Science & Technology

    2012-06-01

…checking leads to an improvement in the quality and success of enterprise software development. Business Process Modeling Notation (BPMN) is an emerging standard that allows business processes to be captured in a standardized format. BPMN lacks formal semantics, which leaves many of its features…

  5. Assessment of the Draft AIAA S-119 Flight Dynamic Model Exchange Standard

    NASA Technical Reports Server (NTRS)

Jackson, E. Bruce; Murri, Daniel G.; Hill, Melissa A.; Jessick, Matthew V.; Penn, John M.; Hasan, David A.; Crues, Edwin Z.; Falck, Robert D.; McCarthy, Thomas G.; Vuong, Nghia

    2011-01-01

    An assessment of a draft AIAA standard for flight dynamics model exchange, ANSI/AIAA S-119-2011, was conducted on behalf of NASA by a team from the NASA Engineering and Safety Center. The assessment included adding the capability of importing standard models into real-time simulation facilities at several NASA Centers as well as into analysis simulation tools. All participants were successful at importing two example models into their respective simulation frameworks by using existing software libraries or by writing new import tools. Deficiencies in the libraries and format documentation were identified and fixed; suggestions for improvements to the standard were provided to the AIAA. An innovative tool to generate C code directly from such a model was developed. Performance of the software libraries compared favorably with compiled code. As a result of this assessment, several NASA Centers can now import standard models directly into their simulations. NASA is considering adopting the now-published S-119 standard as an internal recommended practice.

  6. Setting, Evaluating, and Maintaining Certification Standards with the Rasch Model.

    ERIC Educational Resources Information Center

    Grosse, Martin E.; Wright, Benjamin D.

    1986-01-01

Based on the standard-setting procedures of the American Board of Preventive Medicine for their Core Test, this article describes how Rasch measurement can facilitate using test content judgments in setting a standard. Rasch measurement can then be used to evaluate and improve the precision of the standard and to hold it constant across time.…

  7. The Muon $g$-$2$ Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohn, Wesley

A new measurement of the anomalous magnetic moment of the muon, $a_\mu \equiv (g-2)/2$, will be performed at the Fermi National Accelerator Laboratory with data taking beginning in 2017. The most recent measurement, performed at Brookhaven National Laboratory (BNL) and completed in 2001, shows a 3.5 standard deviation discrepancy with the standard model value of $a_\mu$. The new measurement will accumulate 21 times the BNL statistics using upgraded magnet, detector, and storage ring systems, enabling a measurement of $a_\mu$ to 140 ppb, a factor of 4 improvement in the uncertainty of the previous measurement. This improvement in precision, combined with recent improvements in our understanding of the QCD contributions to the muon $g$-$2$, could provide a discrepancy from the standard model greater than $7\sigma$ if the central value is the same as that measured by the BNL experiment, which would be a clear indication of new physics.

  8. The Value of Harmonizing Multiple Improvement Technologies: A Process Improvement Professional’s View

    DTIC Science & Technology

    2008-03-01

maturity models and ISO standards, specifically CMMI, CMMI-ACQ and ISO 12207. Also, the improvement group supplemented their selection of these… compliant with the technologies and standards that are important to the business. Lockheed Martin IS&GS has integrated CMMI, EIA 632, ISO 12207, and Six… geographically dispersed organization. [Siviy 07-1] Northrop Grumman Mission Systems has integrated CMMI, ISO 9001, AS9100, and Six Sigma, as well as a

  9. Incorporating single-side sparing in models for predicting parotid dose sparing in head and neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Lulin, E-mail: lulin.yuan@duke.edu; Wu, Q. Jackie; Yin, Fang-Fang

    2014-02-15

Purpose: Sparing of single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of dose sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician prescribed single-side parotid sparing preferences. The single-side sparing model was trained with cases which had single-side sparing preferences, while the standard model was trained with the remainder of cases. A receiver operating characteristic (ROC) analysis was performed to determine the best criterion that separates the two case groups using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes into account the single-side sparing by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement of prediction accuracy by the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19, respectively. For the bilateral sparing cases, the combined and the standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy by both models (p = 0.81). 
For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions by the combined model differ from actual values by only 2.2 Gy (p = 0.005). Similarly, the sum of residues between the modeled and the actual plan DVHs is the same for the bilateral sparing cases by both models (p = 0.67), while the standard model predicts significantly higher DVHs than the combined model for the single-side sparing cases (p = 0.01). Conclusions: The combined model for predicting parotid sparing that takes into account single-side sparing improves the prediction accuracy over the previous model.
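    The ROC-derived switching rule reported above reduces to a small predicate; the function name and return labels below are ours, not the authors':

```python
def choose_model(pred_median_doses, high=24.0, low=7.0):
    """Sketch of the model-switching criterion: if the standard model's
    predicted median dose exceeds 24 Gy for one parotid and 7 Gy for the
    other, treat the case as single-side sparing and use the single-side
    model; otherwise use the standard model."""
    d1, d2 = sorted(pred_median_doses, reverse=True)
    if d1 > high and d2 > low:
        return "single-side"
    return "standard"
```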

  11. Improving naturalness in warped models with a heavy bulk Higgs boson

    NASA Astrophysics Data System (ADS)

    Cabrer, Joan A.; von Gersdorff, Gero; Quirós, Mariano

    2011-08-01

    A standard-model-like Higgs boson should be light in order to comply with electroweak precision measurements from LEP. We consider five-dimensional warped models—with a deformation of the metric in the IR region—as UV completions of the standard model with a heavy Higgs boson. Provided the Higgs boson propagates in the five-dimensional bulk, the Kaluza-Klein (KK) modes of the gauge bosons can compensate for the Higgs boson contribution to oblique parameters while their masses lie within the range of the LHC. The little hierarchy between the KK scale and the Higgs mass essentially disappears, and the naturalness of the model greatly improves with respect to the Anti-de Sitter (Randall-Sundrum) model. In fact the fine-tuning is better than 10% for all values of the Higgs boson mass.

  12. Improving Atomic Force Microscopy Imaging by a Direct Inverse Asymmetric PI Hysteresis Model

    PubMed Central

    Wang, Dong; Yu, Peng; Wang, Feifei; Chan, Ho-Yin; Zhou, Lei; Dong, Zaili; Liu, Lianqing; Li, Wen Jung

    2015-01-01

    A modified Prandtl–Ishlinskii (PI) model, referred to as a direct inverse asymmetric PI (DIAPI) model in this paper, was implemented to reduce the displacement error between the predicted and actual trajectories of a piezoelectric actuator of the kind commonly found in AFM systems. Due to the nonlinearity of the piezoelectric actuator, the standard symmetric PI model cannot precisely describe the asymmetric motion of the actuator. In order to improve the accuracy of AFM scans, two series of slope parameters were introduced into the PI model to describe both the voltage-increase loop (trace) and the voltage-decrease loop (retrace). A feedforward controller based on the DIAPI model was implemented to compensate for hysteresis. Performance of the DIAPI model and the feedforward controller was validated by scanning micro-lenses and a standard silicon grating using a custom-built AFM. PMID:25654719
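A classical PI model is a weighted sum of backlash ("play") operators; the asymmetric variant described above can be sketched by switching between two weight sets on the trace and retrace branches. The thresholds and weights below are illustrative values, not the paper's fitted parameters.

```python
# Minimal sketch of an asymmetric Prandtl-Ishlinskii operator.
# Thresholds/weights are illustrative, not fitted actuator values.

def play(x_seq, r):
    """Backlash (play) operator with threshold r, zero initial state."""
    z, out = 0.0, []
    for x in x_seq:
        z = max(x - r, min(x + r, z))
        out.append(z)
    return out

def asymmetric_pi(x_seq, thresholds, w_up, w_down):
    """Weighted sum of play operators; the weight set switches with the
    sign of the input increment (trace vs retrace)."""
    plays = [play(x_seq, r) for r in thresholds]
    y, prev = [], x_seq[0]
    for k, x in enumerate(x_seq):
        w = w_up if x >= prev else w_down
        y.append(sum(wi * p[k] for wi, p in zip(w, plays)))
        prev = x
    return y

# A triangle-wave voltage input opens a hysteresis loop: the output at
# x = 0.5 differs between the ascending and descending branches.
xs = [i / 10 for i in range(11)] + [1 - i / 10 for i in range(1, 11)]
ys = asymmetric_pi(xs, thresholds=[0.0, 0.1, 0.3],
                   w_up=[0.5, 0.3, 0.2], w_down=[0.6, 0.25, 0.15])
print(len(ys))  # 21
```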

  13. Towards Improved Finite Element Modelling of the Interaction of Elastic Waves with Complex Defect Geometries

    NASA Astrophysics Data System (ADS)

    Rajagopal, P.; Drozdz, M.; Lowe, M. J. S.

    2009-03-01

    A solution to the problem of improving the finite element (FE) modeling of elastic wave-defect interaction is sought by reconsidering the conventional opinion on meshing strategy. The standard approach using uniform square elements imposes severe limitations in representing complex defect outlines, although these are thought to lessen as the mesh is made finer. Free meshing algorithms, now widely available in commercial packages, can cope well with difficult features, but they are thought to cause scattering from the irregular mesh itself. This paper examines whether the benefits that free meshing offers in representing defects outweigh the inaccuracies due to mesh scattering. For the standard mesh, it also considers whether refinement leads to improved results and whether a practical meshing strategy can be constructed.

  14. Improving Project Management Using Formal Models and Architectures

    NASA Technical Reports Server (NTRS)

    Kahn, Theodore; Sturken, Ian

    2011-01-01

    This talk discusses the advantages formal modeling and architecture brings to project management. These emerging technologies have both great potential and challenges for improving information available for decision-making. The presentation covers standards, tools and cultural issues needing consideration, and includes lessons learned from projects the presenters have worked on.

  15. Teaching Scientific Practices: Meeting the Challenge of Change

    ERIC Educational Resources Information Center

    Osborne, Jonathan

    2014-01-01

    This paper provides a rationale for the changes advocated by the Framework for K-12 Science Education and the Next Generation Science Standards. It provides an argument for why the model embedded in the Next Generation Science Standards is seen as an improvement. The case made here is that the underlying model that the new Framework presents of…

  16. Decreasing patient identification band errors by standardizing processes.

    PubMed

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component in patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012 with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and disseminating institutional and nursing unit data. A total of 4556 ID bands were audited with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5; P < .001) and was maintained for 8 months. Standardization of ID bands and labels in conjunction with other interventions resulted in a statistical decrease in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.
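The before/after comparison of error proportions can be sketched with a rate and a normal-approximation 95% confidence interval. The counts below are illustrative only, chosen to mirror the reported 9.2% to 5.2% improvement; the study's own interval was computed from its audit data.

```python
# Hedged sketch: error rate and a normal-approximation 95% CI for a
# proportion. Counts are illustrative, not the study's audit numbers.
import math

def rate_ci(errors, total, z=1.96):
    p = errors / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, (p - half, p + half)

before, _ = rate_ci(92, 1000)
after, (lo, hi) = rate_ci(52, 1000)
print(f"{before:.1%} -> {after:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```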

  17. A Generalized Form of Context-Dependent Psychophysiological Interactions (gPPI): A Comparison to Standard Approaches

    PubMed Central

    McLaren, Donald G.; Ries, Michele L.; Xu, Guofan; Johnson, Sterling C.

    2012-01-01

    Functional MRI (fMRI) allows one to study task-related regional responses and task-dependent connectivity analysis using psychophysiological interaction (PPI) methods. The latter affords the additional opportunity to understand how brain regions interact in a task-dependent manner. The current implementation of PPI in Statistical Parametric Mapping (SPM8) is configured primarily to assess connectivity differences between two task conditions, when in practice fMRI tasks frequently employ more than two conditions. Here we evaluate how a generalized form of context-dependent PPI (gPPI; http://www.nitrc.org/projects/gppi), which is configured to automatically accommodate more than two task conditions in the same PPI model by spanning the entire experimental space, compares to the standard implementation in SPM8. These comparisons are made using both simulations and an empirical dataset. In the simulated dataset, we compare the interaction beta estimates to their expected values and model fit using the Akaike Information Criterion (AIC). We found that interaction beta estimates in gPPI were robust to different simulated data models, were not different from the expected beta value, and had better model fits than when using standard PPI (sPPI) methods. In the empirical dataset, we compare the model fit of the gPPI approach to sPPI. We found that the gPPI approach improved model fit compared to sPPI. There were several regions that became non-significant with gPPI. These regions all showed significantly better model fits with gPPI. Also, there were several regions where task-dependent connectivity was only detected using gPPI methods, also with improved model fit. Regions that were detected with all methods had more similar model fits. These results suggest that gPPI may have greater sensitivity and specificity than standard implementation in SPM. 
This notion is tempered slightly as there is no gold standard; however, data simulations with a known outcome support our conclusions about gPPI. In sum, the generalized form of context-dependent PPI approach has increased flexibility of statistical modeling, and potentially improves model fit, specificity to true negative findings, and sensitivity to true positive findings. PMID:22484411
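The AIC-based model comparison used above to judge sPPI vs gPPI fits can be illustrated on toy data. AIC here is the least-squares form n·ln(RSS/n) + 2k; the data and the two nested models are stand-ins, not fMRI time series or PPI regressors.

```python
# Hedged sketch: comparing two regression models by AIC on toy data.
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def aic(rss, n, k):
    """Least-squares AIC with k estimated parameters."""
    return n * math.log(rss / n) + 2 * k

xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]   # nearly y = 2x

# Model 1: intercept only (k=1). Model 2: straight line (k=2).
rss1 = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
a, b = fit_line(xs, ys)
rss2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
print(aic(rss2, len(ys), 2) < aic(rss1, len(ys), 1))  # True: line fits better
```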

  18. Form Factor Measurements at BESIII for an Improved Standard Model Prediction of the Muon g-2

    NASA Astrophysics Data System (ADS)

    Destefanis, Marco

    The anomalous part of the magnetic moment of the muon, (g-2)μ, allows for one of the most precise tests of the Standard Model of particle physics. We report on recent results by the BESIII Collaboration on exclusive hadronic cross section channels, such as the 2π, 3π, and 4π final states. These measurements are of utmost importance for an improved calculation of the hadronic vacuum polarization contribution to (g-2)μ, which currently limits the overall Standard Model prediction of this quantity. BESIII has furthermore initiated a programme of spacelike transition form factor measurements, which can be used for a determination of the hadronic light-by-light contribution to (g-2)μ in a data-driven approach. These results are of relevance in view of the new and direct measurements of (g-2)μ foreseen at Fermilab/USA and J-PARC/Japan.

  19. Improving Low-Dose Blood-Brain Barrier Permeability Quantification Using Sparse High-Dose Induced Prior for Patlak Model

    PubMed Central

    Fang, Ruogu; Karlsson, Kolbeinn; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Blood-brain-barrier permeability (BBBP) measurements extracted from the perfusion computed tomography (PCT) using the Patlak model can be a valuable indicator to predict hemorrhagic transformation in patients with acute stroke. Unfortunately, the standard Patlak model based PCT requires excessive radiation exposure, which raised attention on radiation safety. Minimizing radiation dose is of high value in clinical practice but can degrade the image quality due to the introduced severe noise. The purpose of this work is to construct high quality BBBP maps from low-dose PCT data by using the brain structural similarity between different individuals and the relations between the high- and low-dose maps. The proposed sparse high-dose induced (shd-Patlak) model performs by building a high-dose induced prior for the Patlak model with a set of location adaptive dictionaries, followed by an optimized estimation of BBBP map with the prior regularized Patlak model. Evaluation with the simulated low-dose clinical brain PCT datasets clearly demonstrate that the shd-Patlak model can achieve more significant gains than the standard Patlak model with improved visual quality, higher fidelity to the gold standard and more accurate details for clinical analysis. PMID:24200529
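The standard Patlak step that the shd-Patlak method builds on is a linear fit: permeability is the slope of C_tissue(t)/C_a(t) against ∫C_a dt / C_a(t). The curves below are synthetic with illustrative units; with a flat arterial input and a linear tissue curve, the fit recovers the true slope and intercept exactly.

```python
# Hedged sketch of a standard Patlak fit on synthetic curves.

def cumtrapz(ts, cs):
    """Cumulative trapezoidal integral of cs over ts."""
    out, acc = [0.0], 0.0
    for i in range(1, len(ts)):
        acc += 0.5 * (cs[i] + cs[i - 1]) * (ts[i] - ts[i - 1])
        out.append(acc)
    return out

def patlak_fit(ts, c_art, c_tis):
    """Slope K (permeability surrogate) and intercept V0 of the Patlak plot."""
    xs = [ci / ca for ci, ca in zip(cumtrapz(ts, c_art), c_art)]
    ys = [ct / ca for ct, ca in zip(c_tis, c_art)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return k, my - k * mx

ts = [float(t) for t in range(0, 31, 2)]
c_art = [1.0] * len(ts)                  # flat arterial input (toy)
c_tis = [0.05 + 0.02 * t for t in ts]    # true K = 0.02, V0 = 0.05
k, v0 = patlak_fit(ts, c_art, c_tis)
print(round(k, 4), round(v0, 4))  # 0.02 0.05
```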

  20. ASRM standard embryo transfer protocol template: a committee opinion.

    PubMed

    Penzias, Alan; Bendikson, Kristin; Butts, Samantha; Coutifaris, Christos; Falcone, Tommaso; Fossum, Gregory; Gitlin, Susan; Gracia, Clarisa; Hansen, Karl; Mersereau, Jennifer; Odem, Randall; Rebar, Robert; Reindollar, Richard; Rosen, Mitchell; Sandlow, Jay; Vernon, Michael

    2017-04-01

    Standardization improves performance and safety. A template for standardizing the embryo transfer procedure is presented here with 12 basic steps supported by published scientific literature and a survey of common practice of SART programs; it can be used by ART practices to model their own standard protocol. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  1. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

    The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that, by utilizing design of experiments methodology in conjunction with current SPC practices, one can efficiently and more robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research is applicable to any wind tunnel check standard testing program.
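Tracking a fitted coefficient in a control chart can be sketched as follows. The abstract does not specify the chart type; an individuals (XmR) chart, with limits at ±3 sigma estimated from the mean moving range, is one common choice, and the coefficient values here are toy numbers.

```python
# Hedged sketch: XmR control limits for a tracked regression coefficient.
import statistics

def xmr_limits(values):
    """Center line and +/-3-sigma limits, sigma from the mean moving
    range divided by the d2 constant for subgroups of size 2 (1.128)."""
    center = statistics.mean(values)
    mr = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.mean(mr) / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

# Toy sequence of a calibration coefficient re-fit across check runs:
coeffs = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.01]
lcl, cl, ucl = xmr_limits(coeffs)
print(round(cl, 3))  # 1.001
```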

  2. Model for selecting quality standards for a salad bar through identifying elements of customer satisfaction.

    PubMed

    Ouellet, D; Norback, J P

    1993-11-01

    Continuous quality improvement is the new requirement of the Joint Commission on Accreditation of Healthcare Organizations. This means that meeting quality standards will not be enough. Dietitians will need to improve those standards and the way they are selected. Because quality is defined in terms of the customers, all quality improvement projects must start by defining what customers want. Using a salad bar as an example, this article presents and illustrates a technique developed in Japan to identify which elements in a product or service will satisfy or dissatisfy consumers. Using a model and a questionnaire format developed by Kano and coworkers, 273 students were surveyed to classify six quality elements of a salad bar. Four elements showed a dominant "must-be" characteristic: food freshness, labeling of the dressings, no spills in the food, and no spills on the salad bar. The two other elements (food easy to reach and food variety) showed a dominant one-dimensional characteristic. By better understanding consumer perceptions of quality elements, foodservice managers can select quality standards that focus on what really matters to their consumers.
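The classification step of the Kano technique pairs each respondent's answer to a functional question ("how do you feel if the element is present?") with a dysfunctional one ("...if it is absent?") and maps the pair to a quality category. The sketch below is a condensed, illustrative form of the standard Kano evaluation table, not the authors' instrument; some ambiguous answer pairs are simplified.

```python
# Condensed, illustrative Kano classification (simplified mapping).

LIKE, MUST, NEUTRAL, LIVE, DISLIKE = "like", "must-be", "neutral", "live-with", "dislike"

def kano_category(functional, dysfunctional):
    if functional == LIKE and dysfunctional == LIKE:
        return "questionable"
    if functional == LIKE and dysfunctional == DISLIKE:
        return "one-dimensional"
    if functional == LIKE:
        return "attractive"
    if dysfunctional == DISLIKE and functional != DISLIKE:
        return "must-be"
    if functional == DISLIKE or dysfunctional == LIKE:
        return "reverse"
    return "indifferent"

# "Food freshness" typically shows the must-be pattern: its presence is
# merely expected, its absence strongly disliked.
print(kano_category(NEUTRAL, DISLIKE))  # must-be
print(kano_category(LIKE, NEUTRAL))     # attractive
```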

  3. Improving the Power of GWAS and Avoiding Confounding from Population Stratification with PC-Select

    PubMed Central

    Tucker, George; Price, Alkes L.; Berger, Bonnie

    2014-01-01

    Using a reduced subset of SNPs in a linear mixed model can improve power for genome-wide association studies, yet this can result in insufficient correction for population stratification. We propose a hybrid approach using principal components that does not inflate statistics in the presence of population stratification and improves power over standard linear mixed models. PMID:24788602
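PC-Select itself combines principal components with a linear mixed model; the toy sketch below illustrates only the core intuition of using an ancestry axis as a fixed-effect covariate before testing a SNP. A known group label stands in for a computed PC, and the cohort is a contrived example in which the phenotype is driven entirely by ancestry.

```python
# Hedged sketch: absorbing a stratification axis before testing a SNP.

def residualize(y, covar):
    """Remove the least-squares projection of y onto covar (plus mean)."""
    n = len(y)
    my, mc = sum(y) / n, sum(covar) / n
    b = sum((c - mc) * (v - my) for c, v in zip(covar, y)) \
        / sum((c - mc) ** 2 for c in covar)
    return [v - my - b * (c - mc) for c, v in zip(covar, y)]

def corr(a, b):
    """Pearson correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

ancestry = [0, 0, 0, 0, 1, 1, 1, 1]        # stand-in for a computed PC
snp      = [0, 1, 0, 1, 1, 2, 1, 2]        # allele counts, tracking ancestry
pheno    = [1.05, 1.05, 0.95, 0.95, 2.05, 2.05, 1.95, 1.95]

print(round(corr(snp, pheno), 2))                # 0.7  (inflated by stratification)
adj = residualize(pheno, ancestry)
print(round(abs(corr(snp, adj)), 2))             # 0.0  (association vanishes)
```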

  4. A standard telemental health evaluation model: the time is now.

    PubMed

    Kramer, Greg M; Shore, Jay H; Mishkind, Matt C; Friedl, Karl E; Poropatich, Ronald K; Gahm, Gregory A

    2012-05-01

    The telehealth field has advanced historic promises to improve access, cost, and quality of care. However, the extent to which it is delivering on its promises is unclear as the scientific evidence needed to justify success is still emerging. Many have identified the need to advance the scientific knowledge base to better quantify success. One method for advancing that knowledge base is a standard telemental health evaluation model. Telemental health is defined here as the provision of mental health services using live, interactive video-teleconferencing technology. Evaluation in the telemental health field largely consists of descriptive and small pilot studies, is often defined by the individual goals of the specific programs, and is typically focused on only one outcome. The field should adopt new evaluation methods that consider the co-adaptive interaction between users (patients and providers), healthcare costs and savings, and the rapid evolution in communication technologies. Acceptance of a standard evaluation model will improve perceptions of telemental health as an established field, promote development of a sounder empirical base, promote interagency collaboration, and provide a framework for more multidisciplinary research that integrates measuring the impact of the technology and the overall healthcare aspect. We suggest that consideration of a standard model is timely given where telemental health is at in terms of its stage of scientific progress. We will broadly recommend some elements of what such a standard evaluation model might include for telemental health and suggest a way forward for adopting such a model.

  5. Template for success: using a resident-designed sign-out template in the handover of patient care.

    PubMed

    Clark, Clancy J; Sindell, Sarah L; Koehler, Richard P

    2011-01-01

    Report our implementation of a standardized handover process in a general surgery residency program. The standardized handover process, sign-out template, method of implementation, and continuous quality improvement process were designed by general surgery residents with support of faculty and senior hospital administration using standard work principles and business models of the Virginia Mason Production System and the Toyota Production System. Nonprofit, tertiary referral teaching hospital. General surgery residents, residency faculty, patient care providers, and hospital administration. After instruction in quality improvement initiatives, a team of general surgery residents designed a sign-out process using an electronic template and standard procedures. The initial implementation phase resulted in 73% compliance. Using resident-driven continuous quality improvement processes, real-time feedback enabled residents to modify and improve this process, eventually attaining 100% compliance and acceptance by residents. The creation of a standardized template and protocol for patient handovers might eliminate communication failures. Encouraging residents to participate in this process can establish the groundwork for successful implementation of a standardized handover process. Integrating a continuous quality-improvement process into such an initiative can promote active participation of busy general surgery residents and lead to successful implementation of standard procedures. Copyright © 2011 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  6. PV System 'Availability' as a Reliability Metric -- Improving Standards, Contract Language and Performance Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey T.; Hill, Roger; Walker, Andy

    The use of the term 'availability' to describe a photovoltaic (PV) system and power plant has been fraught with confusion for many years. A term that is meant to describe equipment operational status is often omitted, misapplied or inaccurately combined with PV performance metrics due to attempts to measure performance and reliability through the lens of traditional power plant language. This paper discusses three areas where current research in standards, contract language and performance modeling is improving the way availability is used with regards to photovoltaic systems and power plants.

  7. Improving Accuracy and Relevance of Race/Ethnicity Data: Results of a Statewide Collaboration in Hawaii.

    PubMed

    Pellegrin, Karen L; Miyamura, Jill B; Ma, Carolyn; Taniguchi, Ronald

    2016-01-01

    Current race/ethnicity categories established by the U.S. Office of Management and Budget are neither reliable nor valid for understanding health disparities or for tracking improvements in this area. In Hawaii, statewide hospitals have collaborated to collect race/ethnicity data using a standardized method consistent with recommended practices that overcome the problems with the federal categories. The purpose of this observational study was to determine the impact of this collaboration on key measures of race/ethnicity documentation. After this collaborative effort, the number of standardized categories available across hospitals increased from 6 to 34, and the percent of inpatients with documented race/ethnicity increased from 88 to 96%. This improved standardized methodology is now the foundation for tracking population health indicators statewide and focusing quality improvement efforts. The approach used in Hawaii can serve as a model for other states and regions. Ultimately, the ability to standardize data collection methodology across states and regions will be needed to track improvements nationally.

  8. Deploying initial attack resources for wildfire suppression: spatial coordination, budget constraints, and capacity constraints

    Treesearch

    Yohan Lee; Jeremy S. Fried; Heidi J. Albers; Robert G. Haight

    2013-01-01

    We combine a scenario-based, standard-response optimization model with stochastic simulation to improve the efficiency of resource deployment for initial attack on wildland fires in three planning units in California. The optimization model minimizes the expected number of fires that do not receive a standard response--defined as the number of resources by type that...

  9. Analysis of rocket engine injection combustion processes

    NASA Technical Reports Server (NTRS)

    Salmon, J. W.; Saltzman, D. H.

    1977-01-01

    Mixing methodology improvements for the JANNAF DER and CICM injection/combustion analysis computer programs were accomplished. The ZOM plane prediction model was further developed for installation into the new standardized DER computer program. An approach for developing an intra-element mixing model for gas/liquid coaxial injection elements was recommended for possible future incorporation into the CICM computer program.

  10. Study on convection improvement of standard vacuum tube

    NASA Astrophysics Data System (ADS)

    He, J. H.; Du, W. P.; Qi, R. R.; He, J. X.

    2017-11-01

    For the standard all-glass vacuum tube collector, enhancing axial natural convection in the vacuum tube can improve its thermal efficiency. Based on a study of the standard all-glass vacuum tube, three kinds of guide plates that inhibit radial convection and increase axial natural convection were designed, and a theoretical model was established. Experiments were carried out on vacuum tubes fitted with the three types of guide plates and on unmodified standard vacuum tubes. The results show that the T-type guide plate restrains radial convection and enhances axial convection better than the Y-type, the Y-type better than the flat-plate type, and all three better than the unmodified tube; the thermal efficiency of the improved tube was 2.6% higher than that of the unmodified standard vacuum tube, and the efficiency of the system in the experiment increased by 3.1%.

  11. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The standard error commonly used in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis: it correctly partitions error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope ({beta} = 1, Selby et al.), while the improved standard error 'fails to reject' H0.
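The two combination rules named above are easy to state concretely: Fisher's method sums -2·ln(p) over the single-phenomenology tests (chi-square with 2m degrees of freedom under the joint null), while Tippett's method rejects when the smallest p-value beats a corrected per-test level. The p-values below are illustrative only.

```python
# Hedged sketch of Fisher's and Tippett's p-value combination rules.
import math

def fisher_statistic(pvals):
    """Fisher: -2*sum(ln p_i); chi-square with 2*len(pvals) d.o.f.
    under the joint null hypothesis."""
    return -2.0 * sum(math.log(p) for p in pvals)

def tippett_reject(pvals, alpha=0.05):
    """Tippett: reject the joint null if min(p) beats the
    Sidak-corrected per-test level."""
    m = len(pvals)
    return min(pvals) < 1.0 - (1.0 - alpha) ** (1.0 / m)

# Two single-phenomenology screens, e.g. an Ms:mb screen and a depth
# screen (illustrative p-values only):
pvals = [0.04, 0.30]
print(round(fisher_statistic(pvals), 3))  # 8.846
print(tippett_reject(pvals))              # False
```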

  12. NASA Handbook for Models and Simulations: An Implementation Guide for NASA-STD-7009

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2013-01-01

    The purpose of this Handbook is to provide technical information, clarification, examples, processes, and techniques to help institute good modeling and simulation practices in the National Aeronautics and Space Administration (NASA). As a companion guide to NASA-STD-7009, Standard for Models and Simulations, this Handbook provides a broader scope of information than may be included in a Standard and promotes good practices in the production, use, and consumption of NASA modeling and simulation products. NASA-STD-7009 specifies what a modeling and simulation activity shall or should do (in the requirements) but does not prescribe how the requirements are to be met, which varies with the specific engineering discipline, or who is responsible for complying with the requirements, which depends on the size and type of project. A guidance document, which is not constrained by the requirements of a Standard, is better suited to address these additional aspects and provide necessary clarification. This Handbook stems from the Space Shuttle Columbia Accident Investigation (2003), which called for Agency-wide improvements in the "development, documentation, and operation of models and simulations" and subsequently elicited additional guidance from the NASA Office of the Chief Engineer to include "a standard method to assess the credibility of the models and simulations." General methods applicable across the broad spectrum of model and simulation (M&S) disciplines were sought to help guide the modeling and simulation processes within NASA and to provide for consistent reporting of M&S activities and analysis results. From this, the standardized process for the M&S activity was developed. The major contents of this Handbook are the implementation details of the general M&S requirements of NASA-STD-7009, including explanations, examples, and suggestions for improving the credibility assessment of an M&S-based analysis.

  13. The ODD protocol: A review and first update

    USGS Publications Warehouse

    Grimm, Volker; Berger, Uta; DeAngelis, Donald L.; Polhill, J. Gary; Giske, Jarl; Railsback, Steve F.

    2010-01-01

    The 'ODD' (Overview, Design concepts, and Details) protocol was published in 2006 to standardize the published descriptions of individual-based and agent-based models (ABMs). The primary objectives of ODD are to make model descriptions more understandable and complete, thereby making ABMs less subject to criticism for being irreproducible. We have systematically evaluated existing uses of the ODD protocol and identified, as expected, parts of ODD needing improvement and clarification. Accordingly, we revise the definition of ODD to clarify aspects of the original version and thereby facilitate future standardization of ABM descriptions. We discuss frequently raised critiques in ODD but also two emerging, and unanticipated, benefits: ODD improves the rigorous formulation of models and helps make the theoretical foundations of large models more visible. Although the protocol was designed for ABMs, it can help with documenting any large, complex model, alleviating some general objections against such models.

  14. Absolute Spectrophotometric Calibration to 1% from the FUV through the near-IR

    NASA Astrophysics Data System (ADS)

    Finley, David

    2005-07-01

    We propose a significant improvement to the existing HST calibration. The current calibration is based on three primary DA white dwarf standards, GD 71, GD 153, and G 191-B2B. The standard fluxes are calculated using NLTE models, with effective temperatures and gravities that were derived from Balmer line fits using LTE models. We propose to improve the accuracy and internal consistency of the calibration by deriving corrected effective temperatures and gravities based on fitting the observed line profiles with updated NLTE models, and including the fit results from multiple STIS spectra, rather than the (usually) 1 or 2 ground-based spectra used previously. We will also determine the fluxes for 5 new, fainter primary or secondary standards, extending the standard V magnitude lower limit from 13.4 to 16.5, and extending the wavelength coverage from 0.1 to 2.5 micron. The goal is to achieve an overall flux accuracy of 1%, which will be needed, for example, for the upcoming supernova survey missions to measure the equation of state of the dark energy that is accelerating the expansion of the universe.

  15. Influence of standardization on the precision (reproducibility) of dental cast analysis with virtual 3-dimensional models.

    PubMed

    Hayashi, Kazuo; Chung, Onejune; Park, Seojung; Lee, Seung-Pyo; Sachdeva, Rohit C L; Mizoguchi, Itaru

    2015-03-01

    Virtual 3-dimensional (3D) models obtained by scanning of physical casts have become an alternative to conventional dental cast analysis in orthodontic treatment. If the precision (reproducibility) of virtual 3D model analysis can be further improved, digital orthodontics could be even more widely accepted. The purpose of this study was to clarify the influence of "standardization" of the target points for dental cast analysis using virtual 3D models. Physical plaster models were also measured to obtain additional information. Five sets of dental casts were used. The dental casts were scanned with R700 (3Shape, Copenhagen, Denmark) and REXCAN DS2 3D (Solutionix, Seoul, Korea) scanners. In this study, 3 system and software packages were used: SureSmile (OraMetrix, Richardson, Tex), Rapidform (Inus, Seoul, Korea), and I-DEAS (SDRC, Milford, Conn). Without standardization, the maximum differences were observed between the SureSmile software and the Rapidform software (0.39 mm ± 0.07). With standardization, the maximum differences were observed between the SureSmile software and measurements with a digital caliper (0.099 mm ± 0.01), and this difference was significantly greater (P <0.05) than the 2 other mean difference values. Furthermore, the results of this study showed that the mean differences "WITH" standardization were significantly lower than those "WITHOUT" standardization for all systems, software packages, or methods. The results showed that elimination of the influence of usability or habituation is important for improving the reproducibility of dental cast analysis. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  16. Designing Excellence and Quality Model for Training Centers of Primary Health Care: A Delphi Method Study.

    PubMed

    Tabrizi, Jafar-Sadegh; Farahbakhsh, Mostafa; Shahgoli, Javad; Rahbar, Mohammad Reza; Naghavi-Behzad, Mohammad; Ahadi, Hamid-Reza; Azami-Aghdash, Saber

    2015-10-01

    Excellence and quality models are comprehensive methods for improving the quality of healthcare. The aim of this study was to design an excellence and quality model for training centers of primary health care using the Delphi method. First, comprehensive information was collected through a literature review. In the extracted references, 39 models were identified from 34 countries, and related sub-criteria and standards were extracted from 34 of the primary 39 models. Then a primary pattern including 8 criteria, 55 sub-criteria, and 236 standards was developed as a Delphi questionnaire and evaluated in four stages by 9 specialists of the health care system in Tabriz and 50 specialists from all around the country. The designed primary model (8 criteria, 55 sub-criteria, and 236 standards) was reduced to 8 criteria, 45 sub-criteria, and 192 standards after 4 stages of evaluation by specialists. Major criteria of the model are leadership, strategic and operational planning, resource management, information analysis, human resources management, process management, customer results, and functional results, with the top score set at 1000 by the specialists. Functional results had the maximum score of 195 whereas planning had the minimum score of 60. Furthermore, leadership had the most sub-criteria (10) and strategic planning the fewest (3). The model introduced in this research was designed following 34 reference models from around the world. This model could provide a proper frame for managers of the health system in improving quality.

  17. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy, or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.
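As background for the CNR comparisons above, contrast-to-noise ratio is conventionally computed from region-of-interest (ROI) statistics. A minimal sketch, with ROI sizes, means, and noise levels invented for illustration (this is not the authors' analysis code):

```python
import numpy as np

def cnr(target_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    target = np.asarray(target_roi, dtype=float)
    background = np.asarray(background_roi, dtype=float)
    return abs(target.mean() - background.mean()) / background.std(ddof=1)

# Invented ROIs: a 6 HU contrast target over a noisy uniform background
rng = np.random.default_rng(0)
background = rng.normal(0.0, 5.0, size=1000)  # mean 0 HU, ~5 HU noise
target = rng.normal(6.0, 5.0, size=1000)      # 6 HU mean contrast, same noise
print(cnr(target, background))
```

A noise-reducing reconstruction lowers the background standard deviation while (ideally) preserving the mean contrast, which is why CNR rises even though the 6 HU target itself is unchanged.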

  18. Dynamical Evolution of Planetary Embryos

    NASA Technical Reports Server (NTRS)

    Wetherill, George W.

    2002-01-01

    During the past decade, progress has been made by relating the 'standard model' for the formation of planetary systems to computational and observational advances. A significant contribution to this has been provided by this grant. The consequence of this is that the rigor of the physical modeling has improved considerably. This has identified discrepancies between the predictions of the standard model and recent observations of extrasolar planets. In some cases, the discrepancies can be resolved by recognition of the stochastic nature of the planetary formation process, leading to variations in the final state of a planetary system. In other cases, it seems more likely that there are major deficiencies in the standard model, requiring us to identify variations of the model that are not so strongly constrained by our own Solar System.

  19. A standardized data structure for describing and exchanging data from remeasured growth and yield plots

    Treesearch

    Michael D. Sweet; John C. Byrne

    1990-01-01

    Proposes standard data definitions and format to facilitate the sharing of growth and yield permanent plot data for the development, testing, and improvement of tree or stand growth models. The data structure presented provides standards for documenting sampling design, plot location and summary descriptors, measurement dates, treatments, site attributes, and...

  20. Seven propositions of the science of improvement: exploring foundations.

    PubMed

    Perla, Rocco J; Provost, Lloyd P; Parry, Gareth J

    2013-01-01

    The phrase "Science of Improvement" or "Improvement Science" is commonly used today by a range of people and professions to mean different things, creating confusion for those trying to learn about improvement. In this article, we briefly define the concepts of improvement and science, and review the history of the consideration of "improvement" as a science. We trace key concepts and ideas in improvement to their philosophical and theoretical foundation with a focus on Deming's System of Profound Knowledge. We suggest that Deming's system has a firm association with many contemporary and historic philosophic and scientific debates and concepts. With reference to these debates and concepts, we identify 7 propositions that provide the scientific and philosophical foundation for the science of improvement. A standard view of the science of improvement does not presently exist that is grounded in the philosophical and theoretical basis of the field. The 7 propositions outlined here demonstrate the value of examining the underpinnings of improvement. This is needed both to advance the field and to minimize confusion about what the phrase "science of improvement" represents. We argue that advanced scientists of improvement are those who, like Deming and Shewhart, can integrate ideas, concepts, and models between scientific disciplines for the purpose of developing more robust improvement models, tools, and techniques with a focus on application and problem solving in real-world contexts. The epistemological foundations and theoretical basis of the science of improvement and its reasoning methods need to be critically examined to ensure its continued development and relevance.
If improvement efforts and projects in health care are to be characterized under the canon of science, then health care professionals engaged in quality improvement work would benefit from a standard set of core principles, a standard lexicon, and an understanding of the evolution of the science of improvement.

  1. Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.

    PubMed

    Shao, Lijing

    2014-03-21

    The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. It constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. All of them improve on existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.

  2. Effective Report Preparation: Streamlining the Reporting Process. AIR 1999 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Dalrymple, Margaret; Wang, Mindy; Frost, Jacquelyn

    This paper describes the processes and techniques used to improve and streamline the standard student reports used at Purdue University (Indiana). Various models for analyzing reporting processes are described, especially the model used in the study, the Shewart or Deming Cycle, a method that aids in continuous analysis and improvement through a…

  3. Antidepressant activity of standardized extract of Bacopa monniera in experimental models of depression in rats.

    PubMed

    Sairam, K; Dorababu, M; Goel, R K; Bhattacharya, S K

    2002-04-01

    Bacopa monniera Wettst. (syn. Herpestis monniera L.; Scrophulariaceae) is a commonly used Ayurvedic drug for mental disorders. The standardized extract was reported earlier to have a significant anti-oxidant effect and anxiolytic activity, and to improve memory retention in Alzheimer's disease. Presently, the standardized methanolic extract of Bacopa monniera (bacoside A - 38.0+/-0.9) was investigated for potential antidepressant activity in rodent models of depression. The effect was compared with the standard antidepressant drug imipramine (15 mg/kg, ip). The extract, when given in doses of 20 and 40 mg/kg orally once daily for 5 days, was found to have significant antidepressant activity in the forced swim and learned helplessness models of depression, comparable to that of imipramine.

  4. A hemodynamic-directed approach to pediatric cardiopulmonary resuscitation (HD-CPR) improves survival.

    PubMed

    Morgan, Ryan W; Kilbaugh, Todd J; Shoap, Wesley; Bratinov, George; Lin, Yuxi; Hsieh, Ting-Chang; Nadkarni, Vinay M; Berg, Robert A; Sutton, Robert M

    2017-02-01

    Most pediatric in-hospital cardiac arrests (IHCAs) occur in ICUs where invasive hemodynamic monitoring is frequently available. Titrating cardiopulmonary resuscitation (CPR) to the hemodynamic response of the individual improves survival in preclinical models of adult cardiac arrest. The objective of this study was to determine if titrating CPR to systolic blood pressure (SBP) and coronary perfusion pressure (CoPP) in a pediatric porcine model of asphyxia-associated ventricular fibrillation (VF) IHCA would improve survival as compared to traditional CPR. After 7min of asphyxia followed by VF, 4-week-old piglets received either hemodynamic-directed CPR (HD-CPR; compression depth titrated to SBP of 90mmHg and vasopressor administration to maintain CoPP ≥20mmHg); or Standard Care (compression depth 1/3 of the anterior-posterior chest diameter and epinephrine every 4min). All animals received CPR for 10min prior to the first defibrillation attempt. CPR was continued for a maximum of 20min. Protocolized intensive care was provided to all surviving animals for 4h. The primary outcome was 4-h survival. Survival rate was greater with HD-CPR (12/12) than Standard Care (6/10; p=0.03). CoPP during HD-CPR was higher compared to Standard Care (point estimate +8.1mmHg, CI95: 0.5-15.8mmHg; p=0.04). Chest compression depth was lower with HD-CPR than Standard Care (point estimate -14.0mm, CI95: -9.6 to -18.4mm; p<0.01). Prior to the first defibrillation attempt, more vasopressor doses were administered with HD-CPR vs. Standard Care (median 5 vs. 2; p<0.01). Hemodynamic-directed CPR improves short-term survival compared to standard depth-targeted CPR in a porcine model of pediatric asphyxia-associated VF IHCA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. A Hemodynamic-Directed Approach to Pediatric Cardiopulmonary Resuscitation (HD-CPR) Improves Survival

    PubMed Central

    Morgan, Ryan W.; Kilbaugh, Todd J.; Shoap, Wesley; Bratinov, George; Lin, Yuxi; Hsieh, Ting-Chang; Nadkarni, Vinay M.; Berg, Robert A.; Sutton, Robert M.

    2016-01-01

    Aim Most pediatric in-hospital cardiac arrests (IHCAs) occur in ICUs where invasive hemodynamic monitoring is frequently available. Titrating cardiopulmonary resuscitation (CPR) to the hemodynamic response of the individual improves survival in preclinical models of adult cardiac arrest. The objective of this study was to determine if titrating CPR to systolic blood pressure (SBP) and coronary perfusion pressure (CoPP) in a pediatric porcine model of asphyxia-associated ventricular fibrillation (VF) IHCA would improve survival as compared to traditional CPR. Methods After 7 minutes of asphyxia followed by VF, 4-week-old piglets received either Hemodynamic-Directed CPR (HD-CPR; compression depth titrated to SBP of 90mmHg and vasopressor administration to maintain CoPP ≥20mmHg); or Standard Care (compression depth 1/3 of the anterior-posterior chest diameter and epinephrine every 4 minutes). All animals received CPR for 10 minutes prior to the first defibrillation attempt. CPR was continued for a maximum of 20 minutes. Protocolized intensive care was provided to all surviving animals for 4 hours. The primary outcome was 4-hour survival. Results Survival rate was greater with HD-CPR (12/12) than Standard Care (6/10; p=0.03). CoPP during HD-CPR was higher compared to Standard Care (point estimate +8.1mmHg, CI95: 0.5–15.8mmHg; p=0.04). Chest compression depth was lower with HD-CPR than Standard Care (point estimate -14.0mm, CI95: -9.6 to -18.4mm; p<0.01). Prior to the first defibrillation attempt, more vasopressor doses were administered with HD-CPR versus Standard Care (median 5 versus 2; p<0.01). Conclusions Hemodynamic-directed CPR improves short-term survival compared to standard depth-targeted CPR in a porcine model of pediatric asphyxia-associated VF IHCA. PMID:27923692

  6. Improving the Interoperability of Disaster Models: a Case Study of Proposing Fireml for Forest Fire Model

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Wang, F.; Meng, Q.; Li, Z.; Liu, B.; Zheng, X.

    2018-04-01

    This paper presents a new standardized data format named Fire Markup Language (FireML), which extends the Geography Markup Language (GML) of OGC, to describe the fire hazard model. The proposed FireML standardizes the input and output documents of a fire model so that it can communicate effectively with different disaster management systems and ensure good interoperability. To demonstrate the usage of FireML and verify its feasibility, an adapted forest fire spread model compatible with FireML is described, and a 3D GIS disaster management system was developed to simulate the dynamic procedure of forest fire spread with the defined FireML documents. The proposed approach should inform those working on the standardization of other disaster models.
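A sketch of how such a standardized model-input document might be assembled programmatically. The element and attribute names below are purely illustrative placeholders, not the vocabulary defined by the actual FireML/GML application schema:

```python
import xml.etree.ElementTree as ET

# All element and attribute names here are invented for illustration;
# the real FireML schema (a GML extension) defines its own vocabulary.
def build_fire_input(lon, lat, wind_speed_ms, wind_dir_deg):
    root = ET.Element("FireML")
    ignition = ET.SubElement(root, "IgnitionPoint")
    pos = ET.SubElement(ignition, "pos")
    pos.text = f"{lon} {lat}"
    wind = ET.SubElement(root, "Wind")
    wind.set("speed", str(wind_speed_ms))
    wind.set("direction", str(wind_dir_deg))
    return ET.tostring(root, encoding="unicode")

doc = build_fire_input(116.4, 39.9, 5.0, 270)
print(doc)
```

A receiving disaster management system would validate such a document against the agreed schema before passing it to the spread model, which is what gives the format its interoperability value.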

  7. An improved null model for assessing the net effects of multiple stressors on communities.

    PubMed

    Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D

    2018-01-01

    Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations of stressors (both positive, both negative, or opposite in sign) and intensities. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species up to the community level, as our compositional null model does. © 2017 John Wiley & Sons Ltd.
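The distinction between the two kinds of null model can be sketched numerically. The species abundances below are invented; the compositional null adds each stressor's per-species effect and floors abundances at zero before aggregating, while the community property null applies the additive expectation directly to total biomass:

```python
import numpy as np

def compositional_null(control, stressor_a, stressor_b):
    """Predict per-species abundances under both stressors by adding each
    stressor's individual effect at the species level, then aggregate."""
    control = np.asarray(control, dtype=float)
    effect_a = np.asarray(stressor_a, dtype=float) - control
    effect_b = np.asarray(stressor_b, dtype=float) - control
    predicted = np.clip(control + effect_a + effect_b, 0.0, None)
    return predicted.sum(), int((predicted > 0).sum())  # biomass, richness

def community_property_null(control, stressor_a, stressor_b):
    """Additive null applied directly to total community biomass."""
    c, a, b = (np.sum(x) for x in (control, stressor_a, stressor_b))
    return c + (a - c) + (b - c)

# Two species: both stressors extirpate species 1 but boost species 2
control, a_only, b_only = [5, 5], [0, 6], [0, 8]
print(compositional_null(control, a_only, b_only))     # (biomass, richness)
print(community_property_null(control, a_only, b_only))
```

Here the two nulls disagree (9.0 vs 4.0 biomass) even though every per-species effect is purely additive, which is the kind of artifact the paper attributes to community property null models.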

  8. California Diploma Project Technical Report II: Alignment Study--Alignment Study of the Health Sciences and Medical Technology Draft Standards and California's Exit Level Common Core State Standards

    ERIC Educational Resources Information Center

    McGaughy, Charis; de Gonzalez, Alicia

    2012-01-01

    The California Department of Education is in the process of revising the Career and Technical Education (CTE) Model Curriculum Standards. The Educational Policy Improvement Center (EPIC) conducted an investigation of the draft version of the Health Sciences and Medical Technology Standards (Health Science). The purpose of the study is to…

  9. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.

    2012-01-01

    The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models vs. the standard uncorrelated models. The X/Ka astrometric scatter improved by approx. 10% and the systematic Δδ vs. δ slope decreased by approx. 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.
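The effect of weighting a fit with a correlated rather than diagonal noise model can be illustrated with generalized least squares. The exponential covariance below is only a crude stand-in for the Kolmogorov troposphere spectrum discussed above, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def gls_fit(A, y, C):
    """Generalized least squares: minimize (y - A x)^T C^{-1} (y - A x)."""
    Ci = np.linalg.inv(C)
    cov = np.linalg.inv(A.T @ Ci @ A)
    return cov @ (A.T @ Ci @ y), cov

# Invented delay series: linear trend plus noise with exponentially
# decaying temporal correlations.
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])          # intercept + rate
C = 0.04 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
L = np.linalg.cholesky(C + 1e-12 * np.eye(50))
y = A @ np.array([1.0, 2.0]) + L @ rng.normal(size=50)

x_corr, cov_corr = gls_fit(A, y, C)                    # correlated weighting
x_diag, cov_diag = gls_fit(A, y, np.diag(np.diag(C)))  # diagonal weighting
print(x_corr, np.sqrt(np.diag(cov_corr)))
print(x_diag, np.sqrt(np.diag(cov_diag)))
```

With the full covariance, the quoted parameter uncertainties account for the fact that correlated samples carry fewer effectively independent measurements, which is the sense in which correlated weighting yields more honest (and often better) estimates.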

  10. ATMOSPHERIC MODEL DEVELOPMENT

    EPA Science Inventory

    This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...

  11. Testing the Standard Model by precision measurement of the weak charges of quarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross Young; Roger Carlini; Anthony Thomas

    In a global analysis of the latest parity-violating electron scattering measurements on nuclear targets, we demonstrate a significant improvement in the experimental knowledge of the weak neutral-current lepton-quark interactions at low energy. The precision of this new result, combined with earlier atomic parity-violation measurements, limits the magnitude of possible contributions from physics beyond the Standard Model - setting a model-independent lower bound on the scale of new physics at ~1 TeV.

  12. Synchronized Trajectories in a Climate "Supermodel"

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Schevenhoven, Francine; Selten, Frank

    2017-04-01

    Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.

  13. A quality improvement initiative to reduce necrotizing enterocolitis across hospital systems.

    PubMed

    Nathan, Amy T; Ward, Laura; Schibler, Kurt; Moyer, Laurel; South, Andrew; Kaplan, Heather C

    2018-04-20

    Necrotizing enterocolitis (NEC) is a devastating intestinal disease in premature infants. Local rates of NEC were unacceptably high. We hypothesized that utilizing quality improvement methodology to standardize care and apply evidence-based practices would reduce our rate of NEC. A multidisciplinary team used the model for improvement to prioritize interventions. Three neonatal intensive care units (NICUs) developed a standardized feeding protocol for very low birth weight (VLBW) infants, and employed strategies to increase the use of human milk, maximize intestinal perfusion, and promote a healthy microbiome. The primary outcome measure, NEC in VLBW infants, decreased from 0.17 cases/100 VLBW patient days to 0.029, an 83% reduction, while the compliance with a standardized feeding protocol improved. Through reliable implementation of evidence-based practices, this project reduced the regional rate of NEC by 83%. A key outcome and primary driver of success was standardization across multiple NICUs, resulting in consistent application of best practices and reduction in variation.
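As a rough arithmetic check of the rates reported above (cases per 100 VLBW patient-days), with the patient-day denominators invented for illustration:

```python
def nec_rate(cases, patient_days):
    """NEC rate expressed as cases per 100 VLBW patient-days."""
    return 100.0 * cases / patient_days

def percent_reduction(baseline, current):
    """Relative reduction between two rates, as a percentage."""
    return 100.0 * (baseline - current) / baseline

# Illustrative denominator: 17 cases over 10,000 patient-days -> 0.17
print(nec_rate(17, 10000))
# The reported baseline and post-intervention rates:
print(round(percent_reduction(0.17, 0.029)))  # 83 (% reduction)
```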

  14. Harmonization of standards for parabolic trough collector testing in solar thermal power plants

    NASA Astrophysics Data System (ADS)

    Sallaberry, Fabienne; Valenzuela, Loreto; Palacin, Luis G.; Leon, Javier; Fischer, Stephan; Bohren, Andreas

    2017-06-01

    The technology of parabolic trough collectors (PTC) is used widely in concentrating solar power (CSP) plants worldwide. However, this type of large-size collector cannot yet be officially tested by an accredited laboratory and certified by an accredited certification body, as there is no standard adapted to its particularities, and the currently published standards for solar thermal collectors are not fully applicable. Recently, some standardization committees have been working on this technology. This paper aims to summarize the standardized testing methodology for large-size PTCs for CSP plants, giving the physical model chosen for modeling the thermal performance of the collector in the new revision of standard ISO 9806 and the points still to be improved in the standard draft IEC 62862-3-2. A summary of the testing validation performed with this new model on one parabolic trough collector installed in one of the test facilities at the Plataforma Solar de Almería (PSA) is also presented.

  15. Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.

    PubMed

    Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M

    2017-04-01

    Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device to allow for visually directed TT placement compared to the standard of care in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). Position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging with the operator blinded to results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TT were placed, 15 using the TTVT technique and 15 using the standard of care open technique. All of the TT placed using TTVT were without complication and in optimal position. Conversely, 27% of TT placed using the standard of care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Improving particle filters in rainfall-runoff models: application of the resample-move step and development of the ensemble Gaussian particle filter

    NASA Astrophysics Data System (ADS)

    Plaza Guingla, D. A.; Pauwels, V. R.; De Lannoy, G. J.; Matgen, P.; Giustarini, L.; De Keyser, R.

    2012-12-01

    The objective of this work is to analyze the improvement in the performance of the particle filter by including a resample-move step or by using a modified Gaussian particle filter. Specifically, the standard particle filter structure is altered by the inclusion of the Markov chain Monte Carlo move step. The second choice adopted in this study uses the moments of an ensemble Kalman filter analysis to define the importance density function within the Gaussian particle filter structure. Both variants of the standard particle filter are used in the assimilation of densely sampled discharge records into a conceptual rainfall-runoff model. In order to quantify the obtained improvement, discharge root mean square errors are compared for different particle filters, as well as for the ensemble Kalman filter. First, a synthetic experiment is carried out. The results indicate that the performance of the standard particle filter can be improved by the inclusion of the resample-move step, but its effectiveness is limited to situations with limited particle impoverishment. The results also show that the modified Gaussian particle filter outperforms the rest of the filters. Second, a real experiment is carried out in order to validate the findings from the synthetic experiment. The addition of the resample-move step does not show a considerable improvement due to performance limitations in the standard particle filter with real data. On the other hand, when an optimal importance density function is used in the Gaussian particle filter, the results show a considerably improved performance of the particle filter.
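The resample-move idea can be sketched for a 1-D toy state-space model. This is not the authors' implementation: the Metropolis move below targets the likelihood times a Gaussian fitted to the propagated particles, a rough approximation of the true filtering target, and all model parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_resample_move(ys, n=500, phi=0.9, q=1.0, r=0.5):
    """Bootstrap particle filter with a Metropolis 'move' after resampling,
    to rejuvenate duplicated particles. Schematic only: the move targets
    the likelihood times a Gaussian fit to the propagated particles."""
    x = rng.normal(0.0, 1.0, n)
    means = []
    for y in ys:
        x = phi * x + rng.normal(0.0, q, n)        # propagate AR(1) dynamics
        logw = -0.5 * ((y - x) / r) ** 2           # Gaussian observation weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = rng.choice(x, size=n, p=w)             # multinomial resample
        mu, sd = x.mean(), x.std() + 1e-9
        prop = x + rng.normal(0.0, 0.5, n)         # random-walk proposal

        def logp(z):                               # approximate target density
            return -0.5 * ((y - z) / r) ** 2 - 0.5 * ((z - mu) / sd) ** 2

        accept = np.log(rng.uniform(size=n)) < logp(prop) - logp(x)
        x = np.where(accept, prop, x)              # move step
        means.append(x.mean())
    return np.array(means)

# Invented AR(1) truth and noisy observations
T, states, s = 30, [], 0.0
for _ in range(T):
    s = 0.9 * s + rng.normal()
    states.append(s)
truth = np.array(states)
obs = truth + rng.normal(0.0, 0.5, T)
est = pf_resample_move(obs)
print(np.sqrt(np.mean((est - truth) ** 2)))  # filter RMSE
```

The move step matters most when resampling has collapsed the particle set onto a few values (particle impoverishment), which matches the paper's finding that its benefit is limited to such situations.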

  17. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
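The standard-curve approach described above amounts to a linear fit of Ct against log10 concentration for a dilution series, inverted to quantify unknowns. A minimal sketch with invented dilution-series values (an amplification efficiency near 1.0 indicates near-perfect doubling per cycle):

```python
import numpy as np

def fit_standard_curve(concentrations, cts):
    """Fit Ct = slope*log10(conc) + intercept to a dilution series."""
    slope, intercept = np.polyfit(np.log10(concentrations), cts, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0  # 1.0 means 100% efficiency
    return slope, intercept, efficiency

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate concentration from a Ct."""
    return 10.0 ** ((ct - intercept) / slope)

# Invented ideal dilution series: slope -3.3219 (one doubling per cycle)
concs = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
cts = 40.0 - 3.3219 * np.log10(concs)
slope, intercept, eff = fit_standard_curve(concs, cts)
print(round(eff, 2))                                        # ~1.0 here
print(quantify(intercept + slope * 3.0, slope, intercept))  # ~1e3
```

The calibration-model question the entry raises is whether such a curve must be re-fit on every run or can be carried across runs; the arithmetic of the curve itself is the same either way.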

  18. The Impact of Statistical Adjustment on Conditional Standard Errors of Measurement in the Assessment of Physician Communication Skills

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Clauser, Brian E.; Furman, Gail E.

    2010-01-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary…

  19. Isotropy of low redshift type Ia supernovae: A Bayesian analysis

    NASA Astrophysics Data System (ADS)

    Andrade, U.; Bengaly, C. A. P.; Alcaniz, J. S.; Santos, B.

    2018-04-01

    The standard cosmology strongly relies upon the cosmological principle, which consists of the hypotheses of large-scale isotropy and homogeneity of the Universe. Testing these assumptions is, therefore, crucial to determining whether there are deviations from the standard cosmological paradigm. In this paper, we use the latest type Ia supernova compilations, namely JLA and Union2.1, to test the cosmological isotropy at low redshift ranges (z < 0.1). This is performed through a Bayesian selection analysis, in which we compare the standard, isotropic model with one including a dipole correction due to peculiar velocities. The full covariance matrix of SN distance uncertainties is taken into account. We find that the JLA sample favors the standard model, whilst the Union2.1 results are inconclusive, yet the constraints from both compilations are in agreement with previous analyses. We conclude that there is no evidence for a dipole anisotropy from nearby supernova compilations, although this test should be greatly improved with the much larger data sets from upcoming cosmological surveys.
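The model comparison can be sketched by fitting monopole-only and monopole-plus-dipole models to Hubble-diagram residuals. The paper computes full Bayesian evidences; the sketch below substitutes BIC as a crude proxy, on an invented isotropic mock sample:

```python
import numpy as np

rng = np.random.default_rng(2)

def bic(resid, n_params, n):
    """Bayesian information criterion from least-squares residuals."""
    return n * np.log(np.mean(resid ** 2)) + n_params * np.log(n)

def compare_dipole(positions, residuals):
    """BIC for an isotropic (monopole-only) model vs a monopole+dipole
    model for distance-modulus residuals; lower BIC is preferred."""
    n = len(residuals)
    X0 = np.ones((n, 1))                     # monopole only
    X1 = np.hstack([X0, positions])          # monopole + 3 dipole components
    r0 = residuals - X0 @ np.linalg.lstsq(X0, residuals, rcond=None)[0]
    r1 = residuals - X1 @ np.linalg.lstsq(X1, residuals, rcond=None)[0]
    return bic(r0, 1, n), bic(r1, 4, n)

# Isotropic mock: random unit sky positions, residuals with no dipole signal
pos = rng.normal(size=(200, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)
res = rng.normal(0.0, 0.15, 200)
b0, b1 = compare_dipole(pos, res)
print(b0 < b1)  # isotropy should win on a dipole-free mock
```

On a dipole-free sample the extra three parameters buy only a small fit improvement, so the complexity penalty favors isotropy, mirroring the paper's conclusion for JLA.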

  20. Testing stellar evolution models with detached eclipsing binaries

    NASA Astrophysics Data System (ADS)

    Higl, J.; Weiss, A.

    2017-12-01

    Stellar evolution codes, like all other numerical tools, need to be verified. Among the standard stellar objects that allow stringent tests of stellar evolution theory and models are detached eclipsing binaries. We have used 19 such objects to test our stellar evolution code, in order to see whether standard methods and assumptions suffice to reproduce the observed global properties. In this paper we concentrate on three effects that contain a specific uncertainty: atomic diffusion as used for standard solar model calculations, overshooting from convective regions, and a simple model for the effect of stellar spots on stellar radius, which is one of the possible solutions for the radius problem of M dwarfs. We find that in general old systems need diffusion to allow for, or at least improve, an acceptable fit, and that systems with convective cores indeed need overshooting. Only one system (AI Phe) requires its absence for a successful fit. To match stellar radii for very low-mass stars, the spot model proved to be an effective approach, but it requires, depending on model details, a high percentage of the surface to be covered by spots. We briefly discuss improvements needed to further reduce the freedom in modelling and to allow an even more restrictive test using these objects.

  1. Implementing the Mother-Baby Model of Nursing Care Using Models and Quality Improvement Tools.

    PubMed

    Brockman, Vicki

    As family-centered care has become the expected standard, many facilities follow the mother-baby model, in which care is provided to both a woman and her newborn in the same room by the same nurse. My facility employed a traditional model of nursing care, which was not evidence-based or financially sustainable. After implementing the mother-baby model, we experienced an increase in exclusive breastfeeding rates at hospital discharge, increased patient satisfaction, improved staff productivity and decreased salary costs, all while the number of births increased. Our change was successful because it was guided by the use of quality improvement tools, change theory and evidence-based practice models. © 2015 AWHONN.

  2. Standard solar model

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Kim, Y.-C.; Pinsonneault, M. H.

    1992-01-01

    A set of solar models have been constructed, each based on a single modification to the physics of a reference solar model. In addition, a model combining several of the improvements has been calculated to provide a best solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The impact on both the structure and the frequencies of the low-l p-modes of the model to these improvements are discussed. It is found that the combined solar model, which is based on the best physics available (and does not contain any ad hoc assumptions), reproduces the observed oscillation spectrum (for low-l) within the errors associated with the uncertainties in the model physics (primarily opacities).

  3. Extending the Solvation-Layer Interface Condition Continuum Electrostatic Model to a Linearized Poisson-Boltzmann Solvent.

    PubMed

    Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Cooper, Christopher D; Knepley, Matthew G; Bardhan, Jaydeep P

    2017-06-13

    We extend the linearized Poisson-Boltzmann (LPB) continuum electrostatic model for molecular solvation to address charge-hydration asymmetry. Our new solvation-layer interface condition (SLIC)/LPB corrects for first-shell response by perturbing the traditional continuum-theory interface conditions at the protein-solvent and the Stern-layer interfaces. We also present a GPU-accelerated treecode implementation capable of simulating large proteins, and our results demonstrate that the new model exhibits significant accuracy improvements over traditional LPB models, while reducing the number of fitting parameters from dozens (atomic radii) to just five parameters, which have physical meanings related to first-shell water behavior at an uncharged interface. In particular, atom radii in the SLIC model are not optimized but uniformly scaled from their Lennard-Jones radii. Compared to explicit-solvent free-energy calculations of individual atoms in small molecules, SLIC/LPB is significantly more accurate than standard parametrizations (RMS error 0.55 kcal/mol for SLIC, compared to RMS error of 3.05 kcal/mol for standard LPB). On parametrizing the electrostatic model with a simple nonpolar component for total molecular solvation free energies, our model predicts octanol/water transfer free energies with an RMS error 1.07 kcal/mol. A more detailed assessment illustrates that standard continuum electrostatic models reproduce total charging free energies via a compensation of significant errors in atomic self-energies; this finding offers a window into improving the accuracy of Generalized-Born theories and other coarse-grained models. Most remarkably, the SLIC model also reproduces positive charging free energies for atoms in hydrophobic groups, whereas standard PB models are unable to generate positive charging free energies regardless of the parametrized radii. The GPU-accelerated solver is freely available online, as is a MATLAB implementation.

  4. Benefits of a Pharmacology Antimalarial Reference Standard and Proficiency Testing Program Provided by the Worldwide Antimalarial Resistance Network (WWARN)

    PubMed Central

    Lourens, Chris; Lindegardh, Niklas; Barnes, Karen I.; Guerin, Philippe J.; Sibley, Carol H.; White, Nicholas J.

    2014-01-01

    Comprehensive assessment of antimalarial drug resistance should include measurements of antimalarial blood or plasma concentrations in clinical trials and in individual assessments of treatment failure so that true resistance can be differentiated from inadequate drug exposure. Pharmacometric modeling is necessary to assess pharmacokinetic-pharmacodynamic relationships in different populations to optimize dosing. To accomplish both effectively and to allow comparison of data from different laboratories, it is essential that drug concentration measurement is accurate. Proficiency testing (PT) of laboratory procedures is necessary for verification of assay results. Within the Worldwide Antimalarial Resistance Network (WWARN), the goal of the quality assurance/quality control (QA/QC) program is to facilitate and sustain high-quality antimalarial assays. The QA/QC program consists of an international PT program for pharmacology laboratories and a reference material (RM) program for the provision of antimalarial drug standards, metabolites, and internal standards for laboratory use. The RM program currently distributes accurately weighed quantities of antimalarial drug standards, metabolites, and internal standards to 44 pharmacology, in vitro, and drug quality testing laboratories. The pharmacology PT program has sent samples to eight laboratories in four rounds of testing. WWARN technical experts have provided advice for correcting identified problems to improve performance of subsequent analysis and ultimately improved the quality of data. Many participants have demonstrated substantial improvements over subsequent rounds of PT. The WWARN QA/QC program has improved the quality and value of antimalarial drug measurement in laboratories globally. It is a model that has potential to be applied to strengthening laboratories more widely and improving the therapeutics of other infectious diseases. PMID:24777099

  5. Visualization of seismic tomography on Google Earth: Improvement of KML generator and its web application to accept the data file in European standard format

    NASA Astrophysics Data System (ADS)

    Yamagishi, Y.; Yanaka, H.; Tsuboi, S.

    2009-12-01

    We have developed a conversion tool for seismic tomography data into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of a model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data containing longitude, latitude, and seismic velocity anomaly, with one data file per depth. Metadata, such as the bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format for representing a tomographic model. Recently the European seismology research project NERIES (Network of Research Infrastructures for European Seismology) advocated that seismic tomography data should be standardized. They propose a new format based on JSON (JavaScript Object Notation), one of the common data-interchange formats, as a standard for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is well suited to handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, so the elements are directly accessible from a script. In addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as an FDSN standard for seismic tomographic models. This format may come to be accepted not only by European seismologists but also as a world standard. We have therefore improved our KML generator for seismic tomography to accept data files in the JSON format as well. We have also improved the web application of the generator so that JSON-formatted data files can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena for comparing various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for browsing geoscience data.
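The two-part JSON layout described above (metadata plus grid-point values) lends itself to a straightforward conversion script. The Python sketch below shows one minimal way such a converter could work; the field names (`metadata`, `grid`, `lon`, `lat`, `dvs`) are illustrative assumptions, not the actual FDSN-adopted schema.

```python
import json

# Hypothetical example of the two-part JSON layout: a "metadata" object
# plus "grid" point values. Field names are illustrative only.
sample = json.loads("""
{
  "metadata": {"reference": "Example model", "depth_km": 100.0},
  "grid": [
    {"lon": 135.0, "lat": 35.0, "dvs": -1.2},
    {"lon": 136.0, "lat": 35.0, "dvs": 0.8}
  ]
}
""")

def grid_to_kml(model):
    """Convert grid-point velocity anomalies to minimal KML placemarks."""
    placemarks = []
    for p in model["grid"]:
        placemarks.append(
            "<Placemark><name>dvs={dvs}</name>"
            "<Point><coordinates>{lon},{lat},0</coordinates></Point>"
            "</Placemark>".format(**p)
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

kml = grid_to_kml(sample)
```

A real generator would additionally color or extrude each placemark according to the anomaly value and group files by depth to build the cross sections shown on Google Earth.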

  6. Designing an evaluation framework for WFME basic standards for medical education.

    PubMed

    Tackett, Sean; Grant, Janet; Mmari, Kristin

    2016-01-01

    To create an evaluation plan for the World Federation for Medical Education (WFME) accreditation standards for basic medical education. We conceptualized the 100 basic standards from "Basic Medical Education: WFME Global Standards for Quality Improvement: The 2012 Revision" as medical education program objectives. Standards were simplified into evaluable items, which were then categorized as inputs, processes, outputs and/or outcomes to generate a logic model and corresponding plan for data collection. WFME standards posed significant challenges to evaluation due to complex wording, inconsistent formatting and lack of existing assessment tools. Our resulting logic model contained 244 items. Standard B 5.1.1 separated into 24 items, the most for any single standard. A large proportion of items (40%) required evaluation of more than one input, process, output and/or outcome. Only one standard (B 3.2.2) was interpreted as requiring evaluation of a program outcome. Current WFME standards are difficult to use for evaluation planning. Our analysis may guide adaptation and revision of standards to make them more evaluable. Our logic model and data collection plan may be useful to medical schools planning an institutional self-review and to accrediting authorities wanting to provide guidance to schools under their purview.

  7. Proposed Reference Spectral Irradiance Standards to Improve Photovoltaic Concentrating System Design and Performance Evaluation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, D. R.; Emery, K. E.; Gueymard, C.

    2002-05-01

    This conference paper describes how the American Society for Testing and Materials (ASTM), International Electrotechnical Commission (IEC), and International Organization for Standardization (ISO) standard solar terrestrial spectra (ASTM G-159, IEC 904-3, ISO 9845-1) provide reference spectra for photovoltaic performance applications. Modern terrestrial spectral radiation models and knowledge of atmospheric physics are applied to develop suggested revisions that update the reference spectra. We use a moderately complex radiative transfer model (SMARTS2) to produce the revised spectra. SMARTS2 has been validated against the complex MODTRAN radiative transfer code and spectral measurements. The model is proposed as an adjunct standard to reproduce the reference spectra. The proposed spectra represent typical clear-sky spectral conditions associated with sites representing reasonable photovoltaic energy production and weathering and durability climates. The proposed spectra are under consideration by ASTM.

  8. Benefits to the Simulation Training Community of a New ANSI Standard for the Exchange of Aero Simulation Models

    NASA Technical Reports Server (NTRS)

    Hildreth, Bruce L.; Jackson, E. Bruce

    2009-01-01

    The American Institute of Aeronautics and Astronautics (AIAA) Modeling and Simulation Technical Committee is in final preparation of a new standard for the exchange of flight dynamics models. The standard will become an ANSI standard and is under consideration for submission to ISO for acceptance by the international community. The standard has some aspects that should benefit the simulation training community. Use of the new standard by the training simulation community will reduce development, maintenance, and technical refresh investment on each device. Furthermore, it will significantly lower the cost of performing model updates to improve fidelity or expand the envelope of the training device. Higher flight fidelity should result in better transfer of training, a direct benefit to the pilots under instruction. Costs of adopting the standard are minimal and should be paid back within the cost of the first use for that training device. The standard achieves these advantages by making it easier to update the aerodynamic model. It provides a standard format for the model in a custom eXtensible Markup Language (XML) grammar, the Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML). It employs an existing XML grammar, MathML, to describe the aerodynamic model in an input data file, eliminating the requirement for software compilation. The major components of the aero model become simply an input data file, and updates are simply new XML input files. It includes naming and axis-system conventions to further simplify the exchange of information.
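To illustrate the idea of an aero model becoming "simply an input data file", the sketch below parses a toy XML fragment in the spirit of DAVE-ML. The element and attribute names here are simplified illustrations, not the normative DAVE-ML grammar.

```python
import xml.etree.ElementTree as ET

# A toy fragment loosely modeled on a DAVE-ML variable-definition block.
# Real DAVE-ML also carries MathML function definitions and breakpoint
# tables; only the "model as data file" idea is shown here.
doc = """
<aeroModel name="exampleVehicle">
  <variableDef varID="alpha" units="deg"/>
  <variableDef varID="CL" units="nd"/>
</aeroModel>
"""

root = ET.fromstring(doc)
# Updating the model means shipping a new file like this one, not
# recompiling simulation software.
variables = {v.get("varID"): v.get("units")
             for v in root.findall("variableDef")}
```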

  9. MODEL DEVELOPMENT FOR FY08 CMAQ RELEASE

    EPA Science Inventory

    This task provides credible state of the art air quality models and guidance for use in implementation of National Ambient Air Quality Standards for ozone and PM. This research effort is to develop and improve air quality models, such as the Community Multiscale Air Quality (CMA...

  10. Computation of confined coflow jets with three turbulence models

    NASA Technical Reports Server (NTRS)

    Zhu, J.; Shih, T. H.

    1993-01-01

    A numerical study of confined jets in a cylindrical duct is carried out to examine the performance of two recently proposed turbulence models: an RNG-based K-epsilon model and a realizable Reynolds stress algebraic equation model. The former is of the same form as the standard K-epsilon model but has different model coefficients. The latter uses an explicit quadratic stress-strain relationship to model the turbulent stresses and is capable of ensuring the positivity of each turbulent normal stress. The flow considered involves recirculation with unfixed separation and reattachment points and severe adverse pressure gradients, thereby providing a valuable test of the predictive capability of the models for complex flows. Calculations are performed with a finite-volume procedure. Numerical credibility of the solutions is ensured by using second-order accurate differencing schemes and sufficiently fine grids. Calculations with the standard K-epsilon model are also made for comparison. Detailed comparisons with experiments show that the realizable Reynolds stress algebraic equation model consistently works better than does the standard K-epsilon model in capturing the essential flow features, while the RNG-based K-epsilon model does not seem to give improvements over the standard K-epsilon model under the flow conditions considered.

  11. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model.

    PubMed

    Nallikuzhy, Jiss J; Dandapat, S

    2017-06-01

    In this work, a new patient-specific approach to enhance the spatial resolution of the ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score, which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. The proposed model is compared with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better than other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
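The core of such a transformation, stripped of the wavelet-domain processing and lead selection the paper adds on top, is a linear map from the predictor leads to the full lead set. A minimal least-squares sketch with synthetic data (the choice of leads I, II, and V1 as predictors is illustrative):

```python
import numpy as np

# Synthetic training data standing in for a patient's recorded ECG:
# 12 leads x 500 samples. Real training would use actual recordings.
rng = np.random.default_rng(0)
full = rng.standard_normal((12, 500))
subset = full[[0, 1, 6], :]        # hypothetical 3-lead subset

# Learn a patient-specific matrix W with W @ subset ≈ full
# (ordinary least squares via the pseudoinverse).
W = full @ np.linalg.pinv(subset)

# Reconstruct all 12 leads from the 3 predictor leads.
reconstructed = W @ subset
```

The predictor leads themselves are recovered exactly, while the remaining nine are only approximated; the paper's wavelet-domain modeling and diagnostic-similarity-driven lead selection aim to improve that approximation.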

  12. ISG hybrid powertrain: a rule-based driver model incorporating look-ahead information

    NASA Astrophysics Data System (ADS)

    Shen, Shuiwen; Zhang, Junzhi; Chen, Xiaojiang; Zhong, Qing-Chang; Thornton, Roger

    2010-03-01

    According to European regulations, if the amount of regenerative braking is determined by the travel of the brake pedal, more stringent standards must be applied; otherwise it may adversely affect the existing vehicle safety system. The use of engine or vehicle speed to derive regenerative braking is one way to avoid the strict design standards, but this introduces a discontinuity in powertrain torque when the driver releases the accelerator pedal or applies the brake pedal. This is shown to cause oscillations in the pedal input and powertrain torque when a conventional driver model is adopted. Look-ahead information, together with other predicted vehicle states, is adopted to control the vehicle speed, in particular during deceleration, and to improve the driver model so that oscillations can be avoided. The improved driver model makes analysis and validation of the control strategy for an integrated starter generator (ISG) hybrid powertrain possible.

  13. Exploiting salient semantic analysis for information retrieval

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui

    2016-11-01

    Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to efficiently use SSA to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections. Experimental results show that the proposed models consistently outperform the existing Wikipedia-based retrieval methods.
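The combination step described above, estimating relevance from both word-level and concept-level representations, is commonly realized as a linear interpolation of two language-model scores. The sketch below assumes a Dirichlet-smoothed query log-likelihood for the BOW model and treats the concept-level score as given; the smoothing parameter and mixing weight are illustrative choices, not the paper's tuned values.

```python
import math
from collections import Counter

def lm_score(query_terms, doc_terms, collection_terms, mu=2000.0):
    """Dirichlet-smoothed query log-likelihood under a document LM."""
    doc = Counter(doc_terms)
    coll = Counter(collection_terms)
    n_coll = sum(coll.values())
    score = 0.0
    for t in query_terms:
        # Add-one backstop so unseen terms keep the score finite.
        p_coll = (coll[t] + 1) / (n_coll + len(coll))
        p = (doc[t] + mu * p_coll) / (len(doc_terms) + mu)
        score += math.log(p)
    return score

def combined_score(bow_score, concept_score, lam=0.7):
    """Interpolate word-level (BOW) and concept-level (SSA) evidence."""
    return lam * bow_score + (1 - lam) * concept_score
```

In a full system the concept-level score would itself be a smoothed likelihood over SSA concept vectors; here it is simply an input to the mixer.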

  14. The Quality Adaptation Model: Adaptation and Adoption of the Quality Standard ISO/IEC 19796-1 for Learning, Education, and Training

    ERIC Educational Resources Information Center

    Pawlowski, Jan M.

    2007-01-01

    In 2005, the new quality standard for learning, education, and training, ISO/IEC 19796-1, was published. Its purpose is to help educational organizations to develop quality systems and to improve the quality of their processes, products, and services. In this article, the standard is presented and compared to existing approaches, showing the…

  15. Progress toward a new measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Grammer, Kyle

    2015-10-01

    Free neutron decay is the simplest nuclear beta decay. A precise value for the neutron lifetime is valuable for standard model consistency tests and Big Bang Nucleosynthesis models. There is a disagreement between the measured neutron lifetime from cold neutron beam experiments and ultracold neutron storage experiments. A new measurement of the neutron lifetime using the beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. Experimental improvements should result in a measurement of the neutron lifetime with 1 s uncertainty. The technical improvements, recent apparatus tests, and the path towards the new measurement will be discussed. This work is supported by DOE Office of Science, NIST, and NSF.

  16. Progress toward a new measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Grammer, Kyle

    2015-04-01

    Free neutron decay is the simplest nuclear beta decay. A precise value for the neutron lifetime is valuable for standard model consistency tests and Big Bang Nucleosynthesis models. There is a disagreement between the measured neutron lifetime from cold neutron beam experiments and ultracold neutron storage experiments. A new measurement of the neutron lifetime using the beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. Experimental improvements should result in a measurement of the neutron lifetime with 1 s uncertainty. The technical improvements and the path towards the new measurement will be discussed. This work is supported by DOE Office of Science, NIST, and NSF.

  17. The air quality forecast in Beijing with Community Multi-scale Air Quality Modeling (CMAQ) System: model evaluation and improvement

    NASA Astrophysics Data System (ADS)

    Wu, Q.

    2013-12-01

    The MM5-SMOKE-CMAQ model system, developed by the United States Environmental Protection Agency (U.S. EPA) as the Models-3 system, has been used for daily air quality forecasts at the Beijing Municipal Environmental Monitoring Center (Beijing MEMC), as part of the Ensemble Air Quality Forecast System for Beijing (EMS-Beijing), since the 2008 Olympic Games. In this study, we collected the daily forecast results of the CMAQ model for the whole year 2010 for model evaluation. The results show that the model performs well on most days but clearly underestimates some air pollution episodes. A typical air pollution episode from 11th to 20th January 2010 was chosen, during which the air pollution index (API) for particulate matter (PM10) observed by Beijing MEMC reached 180 while the predicted PM10 API was about 100. Taking into account all stations in Beijing, both urban and suburban, three numerical methods were used for model improvement: first, enhancing the inner domain with 4 km grids, extending its coverage from Beijing alone to an area including its surrounding cities; second, updating the Beijing stationary area emission inventory from the statistical county level to the village-town level, providing more detailed spatial information for area emissions; and third, adding some industrial point emissions in Beijing's surrounding cities. The latter two methods both improve the emission inventory. As a result, the peak of the PM10 API averaged over the nine national standard stations, simulated by CMAQ as a daily hindcast, reached 160, much closer to the observation. The new results show better model performance, with a correlation coefficient of 0.93 for the national-standard-station average and 0.84 over all stations, and a relative error of 15.7% for the national-standard-station average and 27% over all stations.
[Figures: time series of the nine national standard stations in urban Beijing; scatter diagram of all stations in Beijing (red: forecast; blue: new result).]

  18. Multi-sensor Improved Sea-Surface Temperature (MISST) for IOOS - Navy Component

    DTIC Science & Technology

    2013-09-30

    application and data fusion techniques. 2. Parameterization of IR and MW retrieval differences, with consideration of diurnal warming and cool-skin effects...associated retrieval confidence, standard deviation (STD), and diurnal warming estimates to the application user community in the new GDS 2.0 GHRSST...including coral reefs, ocean modeling in the Gulf of Mexico, improved lake temperatures, numerical data assimilation by ocean models, numerical

  19. Electronic delay ignition module for single bridgewire Apollo standard initiator

    NASA Technical Reports Server (NTRS)

    Ward, R. D.

    1975-01-01

    An engineering model and a qualification model of the EDIM were constructed and tested to Scout flight qualification criteria. The qualification model incorporated design improvements resulting from the engineering model tests. Compatibility with single bridgewire Apollo standard initiator (SBASI) was proven by test firing forty-five (45) SBASI's with worst case voltage and temperature conditions. The EDIM was successfully qualified for Scout flight application with no failures during testing of the qualification unit. Included is a method of implementing the EDIM into Scout vehicle hardware and the ground support equipment necessary to check out the system.

  20. Rational models as theories - not standards - of behavior.

    PubMed

    McKenzie, Craig R.M.

    2003-09-01

    When people's behavior in laboratory tasks systematically deviates from a rational model, the implication is that real-world performance could be improved by changing the behavior. However, recent studies suggest that behavioral violations of rational models are at least sometimes the result of strategies that are well adapted to the real world (and not necessarily to the laboratory task). Thus, even if one accepts that certain behavior in the laboratory is irrational, compelling evidence that real-world behavior ought to change accordingly is often lacking. It is suggested here that rational models be seen as theories, and not standards, of behavior.

  1. The Real World Significance of Performance Prediction

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  2. Improved model of hydrated calcium ion for molecular dynamics simulations using classical biomolecular force fields.

    PubMed

    Yoo, Jejoong; Wilson, James; Aksimentiev, Aleksei

    2016-10-01

    Calcium ions (Ca(2+)) play key roles in various fundamental biological processes such as cell signaling and brain function. Molecular dynamics (MD) simulations have been used to study such interactions; however, the accuracy of the Ca(2+) models provided by the standard MD force fields has not been rigorously tested. Here, we assess the performance of the Ca(2+) models from the most popular classical force fields, AMBER and CHARMM, by computing the osmotic pressure of model compounds and the free energy of DNA-DNA interactions. In the simulations performed using the two standard models, Ca(2+) ions are seen to form artificial clusters with chloride, acetate, and phosphate species; the osmotic pressure of CaAc2 and CaCl2 solutions is a small fraction of the experimental values for both force fields. Using the standard parameterization of Ca(2+) ions in simulations of Ca(2+)-mediated DNA-DNA interactions leads to qualitatively wrong outcomes: both AMBER and CHARMM simulations suggest strong inter-DNA attraction whereas, in experiment, DNA molecules repel one another. The artificial attraction of Ca(2+) to DNA phosphate is strong enough to affect the direction of the electric-field-driven translocation of DNA through a solid-state nanopore. To address these shortcomings of the standard Ca(2+) model, we introduce a custom model of a hydrated Ca(2+) ion and show that using our model brings the results of the above MD simulations into quantitative agreement with experiment. Our improved model of Ca(2+) can be readily applied to MD simulations of various biomolecular systems, including nucleic acids, proteins, and lipid bilayer membranes. © 2016 Wiley Periodicals, Inc. Biopolymers 105: 752-763, 2016.

  3. Mathematics Teaching Today

    ERIC Educational Resources Information Center

    Martin, Tami S.; Speer, William R.

    2009-01-01

    This article describes features, consistent messages, and new components of "Mathematics Teaching Today: Improving Practice, Improving Student Learning" (NCTM 2007), an updated edition of "Professional Standards for Teaching Mathematics" (NCTM 1991). The new book describes aspects of high-quality mathematics teaching; offers a model for observing,…

  4. Using experimental data to test and improve SUSY theories

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    There are several pieces of evidence that our world is described by a supersymmetric extension of the Standard Model. In this thesis, I assume this is the case and study how to use experimental data to test and improve supersymmetric standard models. Several experimental signatures and their implications are covered: the measured branching ratio of b → s gamma is used to put constraints on SUSY models; the measured time-dependent CP asymmetry in the B → φKS process is used to test unification-scale models; the excess of positrons from cosmic rays helps to test the properties of the Lightest Supersymmetric Particle and the cold dark matter production mechanisms; the LEP Higgs search results are used to classify SUSY models; SUSY signatures at the Tevatron are used to distinguish different unification-scale models; and by considering the mu problem, SUSY theories are improved. Because of the large unknown parameter space, all of the above inputs should be used to partially reconstruct the soft Lagrangian, which is the central part of the model. Combining the results of these analyses, a significant amount of knowledge about the underlying theory has been learned. More data will arrive in the next several years, and the methods and results in this thesis will be useful for dealing with them.

  5. Cost reduction from resolution/improvement of carcinoid syndrome symptoms following treatment with above-standard dose of octreotide LAR.

    PubMed

    Huynh, Lynn; Totev, Todor; Vekeman, Francis; Neary, Maureen P; Duh, Mei S; Benson, Al B

    2017-09-01

    To calculate the cost reduction associated with diarrhea/flushing symptom resolution/improvement following treatment with above-standard-dose octreotide LAR from the commercial payor's perspective. Diarrhea and flushing are two major carcinoid syndrome symptoms of neuroendocrine tumors (NET). Previously, a study of NET patients from three US tertiary oncology centers (NET 3-Center Study) demonstrated that dose escalation of octreotide LAR to above-standard doses resolved/improved diarrhea/flushing in 79% of patients within 1 year. Time-course data on diarrhea/flushing symptoms were collected from the NET 3-Center Study. Daily healthcare costs were calculated from a commercial claims database analysis. For the patient cohort experiencing any diarrhea/flushing symptom resolution/improvement, the observation period was divided into days of symptom resolution/improvement and days of no improvement, which were then multiplied by the respective daily healthcare costs and summed over 1 year to yield the blended mean annual cost per patient. For patients who experienced no diarrhea/flushing symptom improvement, the mean annual daily healthcare cost of diarrhea/flushing over a 1-year period was calculated. The economic model found that the 108 NET patients who experienced diarrhea/flushing symptom resolution/improvement within 1 year had a statistically significantly lower mean annual healthcare cost per patient than patients with no symptom improvement, by $14,766 (p = .03). For the subset of 85 patients experiencing resolution/improvement of diarrhea, the cost reduction was more pronounced, at $18,740 (p = .01), statistically significantly lower than for those with no improvement; outpatient costs accounted for 56% of the cost reduction (p = .02), with inpatient, emergency department, and pharmacy costs accounting for the remaining 44%. The economic model relied on two different sources of data, with some heterogeneity in the prior treatment and disease status of patients.
Symptom resolution/improvement of diarrhea/flushing after treatment with an above-standard dose of octreotide-LAR in NET was associated with a statistically significant healthcare cost decrease compared to a scenario of no symptom improvement.
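The blended-cost computation described above, days in each symptom state multiplied by the respective daily cost and summed over the year, reduces to a few lines. The daily cost figures in the usage line are made-up placeholders, not study data.

```python
def blended_annual_cost(days_improved, daily_cost_improved,
                        days_unimproved, daily_cost_unimproved):
    """Sum per-day costs over the two symptom states across one year."""
    assert days_improved + days_unimproved == 365
    return (days_improved * daily_cost_improved
            + days_unimproved * daily_cost_unimproved)

# Placeholder figures for illustration only: 300 improved days at
# $100/day plus 65 unimproved days at $250/day.
cost = blended_annual_cost(300, 100.0, 65, 250.0)  # → 46250.0
```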

  6. An improved interfacial bonding model for material interface modeling

    PubMed Central

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path-dependence study of the work of separation indicates that the transition of the separation work is smooth in the normal and tangential directions, and the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test on a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  7. Historical Precision of an Ozone Correction Procedure for AM0 Solar Cell Calibration

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Jenkins, Phillip; Scheiman, David

    2005-01-01

    In an effort to improve the accuracy of the high-altitude aircraft method for calibration of high-band-gap solar cells, the ozone correction procedure has been revisited. The new procedure adjusts the measured short-circuit current, Isc, according to satellite-based ozone measurements and a model of the atmospheric ozone profile, and then extrapolates the measurements to air mass zero (AM0). The purpose of this paper is to assess the precision of the revised procedure by applying it to historical data sets. The average Isc of a silicon cell for a flying season increased 0.5%, and the standard deviation improved from 0.5% to 0.3%. The 12-year average Isc of a GaAs cell increased 1%, and the standard deviation improved from 0.8% to 0.5%. The slight increase in measured Isc and the improvement in standard deviation suggest that the accuracy of the aircraft method may improve from 1% to nearly 0.5%.
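
The extrapolation step of the aircraft method can be sketched as a linear fit of ozone-corrected Isc against air mass, evaluated at zero air mass; the measurement values below are hypothetical, not flight data.

```python
import numpy as np

# Sketch of the extrapolation step: fit ozone-corrected short-circuit
# current measurements taken at several air masses, then read off the
# fit at air mass zero (AM0). Values below are hypothetical.

air_mass = np.array([0.35, 0.30, 0.25, 0.20, 0.15])            # hypothetical
isc_corrected = np.array([118.2, 119.1, 120.0, 120.9, 121.8])  # mA, hypothetical

slope, intercept = np.polyfit(air_mass, isc_corrected, 1)
isc_am0 = intercept  # value of the linear fit at air mass = 0
print(round(isc_am0, 1))  # 124.5
```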

  8. Software Quality Evaluation Models Applicable in Health Information and Communications Technologies. A Review of the Literature.

    PubMed

    Villamor Ordozgoiti, Alberto; Delgado Hito, Pilar; Guix Comellas, Eva María; Fernandez Sanchez, Carlos Manuel; Garcia Hernandez, Milagros; Lluch Canut, Teresa

    2016-01-01

    The use of Information and Communications Technologies (ICTs) in healthcare has increased the need to consider quality criteria through standardised processes. The aim of this study was to analyse the software quality evaluation models applicable to healthcare from the perspective of ICT purchasers. Through a systematic literature review with the keywords software, product, quality, evaluation, and health, we selected and analysed 20 original research papers published from 2005-2016 in health science and technology databases. The results showed four main topics: non-ISO models, software quality evaluation models based on ISO/IEC standards, studies analysing software quality evaluation models, and studies analysing ISO standards for software quality evaluation. The models provide cost-efficiency criteria for specific software and improve use outcomes. The ISO/IEC 25000 standard emerges as the most suitable for evaluating the quality of ICTs for healthcare use from the perspective of institutional acquisition.

  9. Improving BeiDou real-time precise point positioning with numerical weather models

    NASA Astrophysics Data System (ADS)

    Lu, Cuixian; Li, Xingxing; Zus, Florian; Heinkelmann, Robert; Dick, Galina; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2017-09-01

    Precise positioning with the current Chinese BeiDou Navigation Satellite System is proven to be of accuracy comparable to the Global Positioning System: centimeter level for the horizontal components and sub-decimeter level for the vertical component. However, BeiDou precise point positioning (PPP) is limited by a relatively long convergence time. In this study, we develop a numerical weather model (NWM) augmented PPP processing algorithm to improve BeiDou precise positioning. Tropospheric delay parameters, i.e., zenith delays, mapping functions, and horizontal delay gradients, derived from short-range forecasts of the Global Forecast System of the National Centers for Environmental Prediction (NCEP), are applied to BeiDou real-time PPP. Observational data from stations capable of tracking the BeiDou constellation within the International GNSS Service (IGS) Multi-GNSS Experiments network are processed with both the introduced NWM-augmented PPP and standard PPP processing. The accuracy of tropospheric delays derived from NCEP is assessed against the IGS final tropospheric delay products. The positioning results show that the NWM-augmented PPP solution improves convergence time by up to 60.0% and 66.7% for the east and vertical components, respectively, compared to the standard PPP solutions, while only slight improvement in solution convergence is found for the north component. A positioning accuracy of 5.7 and 5.9 cm for the east component is achieved with the standard PPP that estimates gradients and the one that does not, respectively, compared to 3.5 cm with the NWM-augmented PPP, an improvement of 38.6% and 40.1%. Compared to the accuracies of 3.7 and 4.1 cm for the north component from the two standard PPP solutions, that of the NWM-augmented PPP solution improves to 2.0 cm, by about 45.9% and 51.2%. The positioning accuracy for the up component improves from 11.4 and 13.2 cm with the two standard PPP solutions to 8.0 cm with the NWM-augmented PPP solution, an improvement of 29.8% and 39.4%, respectively.
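
The improvement percentages quoted above are relative accuracy reductions, which can be checked directly from the reported centimeter values:

```python
# Relative improvement of the NWM-augmented solution over standard PPP:
# improvement = (standard - augmented) / standard * 100.

def pct_improvement(standard_cm, augmented_cm):
    return (standard_cm - augmented_cm) / standard_cm * 100.0

# Up component, using the accuracies reported in the abstract:
print(round(pct_improvement(11.4, 8.0), 1))  # 29.8
print(round(pct_improvement(13.2, 8.0), 1))  # 39.4
# North component:
print(round(pct_improvement(3.7, 2.0), 1))   # 45.9
```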

  10. Muon (g-2) Technical Design Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grange, J.

    The Muon (g-2) Experiment, E989 at Fermilab, will measure the muon anomalous magnetic moment a factor of four more precisely than was done in E821 at the Brookhaven National Laboratory AGS. The E821 result appears to be greater than the Standard-Model prediction by more than three standard deviations. When combined with expected improvement in the Standard-Model hadronic contributions, E989 should be able to determine definitively whether or not the E821 result is evidence for physics beyond the Standard Model. After a review of the physics motivation and the basic technique, which will use the muon storage ring built at BNL and now relocated to Fermilab, the design of the new experiment is presented. This document was created in partial fulfillment of the requirements necessary to obtain DOE CD-2/3 approval.

  11. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability, but they are less useful for genetic prediction because of the difficulty of including the effects of numerous small-effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and the estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and, by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve predictive power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross-validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and the bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction ability over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
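
The thinning-and-aggregating idea can be illustrated with a minimal toy sketch: thin a dense marker map into k offset subsets, fit a simple regularized linear model on each thinned map, and average the predictions. This assumes ridge-penalized least squares as the per-map learner on simulated data; it is not the authors' implementation.

```python
import numpy as np

# Toy sketch of a thinning-and-aggregating ensemble: every k-th marker
# (one thinned map per offset) gets its own simple model; predictions
# are averaged across maps. Data are simulated, not real genotypes.

rng = np.random.default_rng(0)
n, p, k = 200, 400, 10
X = rng.standard_normal((n, p))               # simulated marker scores
beta = np.zeros(p)
beta[rng.choice(p, 8, replace=False)] = 1.0   # a few true QTL effects
y = X @ beta + rng.standard_normal(n)

def fit_predict(X_tr, y_tr, X_te, ridge=1.0):
    # Ridge-regularized least squares tolerates collinear markers.
    A = X_tr.T @ X_tr + ridge * np.eye(X_tr.shape[1])
    b = np.linalg.solve(A, X_tr.T @ y_tr)
    return X_te @ b

train, test = np.arange(150), np.arange(150, 200)
preds = [fit_predict(X[train][:, off::k], y[train], X[test][:, off::k])
         for off in range(k)]                 # one model per thinned map
ensemble = np.mean(preds, axis=0)             # aggregate across maps

corr = np.corrcoef(ensemble, y[test])[0, 1]
print(round(corr, 2))  # positive: the ensemble tracks the held-out trait
```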

  12. Design Of Measurements For Evaluating Readiness Of Technoware Components To Meet The Required Standard Of Products

    NASA Astrophysics Data System (ADS)

    Fauzi, Ilham; Muharram Hasby, Fariz; Irianto, Dradjad

    2018-03-01

    Although the government can make mandatory standards that must be obeyed by industry, the respective industries themselves often have difficulty fulfilling the requirements described in those standards. This is especially true in many small and medium-sized enterprises that lack the capital to invest in standard-compliant equipment and machinery. This study aims to develop a set of measurement tools for evaluating the readiness of production technology with respect to the requirements of a product standard, based on the quality function deployment (QFD) method. By combining the QFD methodology, the UNESCAP Technometric model [9], and the Analytic Hierarchy Process (AHP), the model is used to measure a firm's capability to fulfill a government standard in the toy-making industry. Expert opinions were collected from both the governmental officers responsible for setting and implementing standards and the industry practitioners responsible for managing manufacturing processes, and processed to identify the technological capabilities the firm should improve to fulfill the existing standard. This study showed that the proposed model can successfully measure the gap between the requirements of the standard and the readiness of the technoware technological component in a particular firm.
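
The AHP step mentioned above can be sketched with the common geometric-mean approximation to the principal eigenvector of a pairwise-comparison matrix; the 3x3 comparison values below are hypothetical expert judgments, not data from the study.

```python
import numpy as np

# AHP priority weights via the geometric-mean (approximate eigenvector)
# method: take the geometric mean of each row of the pairwise-comparison
# matrix, then normalize to sum to 1.

def ahp_weights(pairwise):
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

# Hypothetical judgments comparing three technoware criteria
# (Saaty's 1-9 scale; reciprocals below the diagonal):
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

w = ahp_weights(A)
print(np.round(w, 3))  # approximately [0.637 0.258 0.105]
```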

  13. Program Assessment: Getting to a Practical How-To Model

    ERIC Educational Resources Information Center

    Gardiner, Lorraine R.; Corbitt, Gail; Adams, Steven J.

    2010-01-01

    The Association to Advance Collegiate Schools of Business (AACSB) International's assurance of learning (AoL) standards require that schools develop a sophisticated continuous-improvement process. The authors review various assessment models and develop a practical, 6-step AoL model based on the literature and the authors' AoL-implementation…

  14. Fuel Economy Regulations and Efficiency Technology Improvements in U.S. Cars Since 1975

    NASA Astrophysics Data System (ADS)

    MacKenzie, Donald Warren

    Light-duty vehicles account for 43% of petroleum consumption and 23% of greenhouse gas emissions in the United States. Corporate Average Fuel Economy (CAFE) standards are the primary policy tool addressing petroleum consumption in the U.S. and are set to tighten substantially through 2025. In this dissertation, I address several interconnected questions on the technical, policy, and market aspects of fuel consumption reduction. I begin by quantifying historic improvements in fuel efficiency technologies since the 1970s. First, I develop a linear regression model of acceleration performance conditional on power, weight, powertrain, and body characteristics, showing that vehicles today accelerate 20-30% faster than vehicles with similar specifications in the 1970s. Second, I find that growing use of alternative materials and a switch to more weight-efficient vehicle architectures since 1975 have cut the weight of today's new cars by approximately 790 kg (46%). Integrating these results with model-level specification data, I estimate that the average fuel economy of new cars could have tripled from 1975-2009, if not for changes in performance, size, and features over this period. The pace of improvements was not uniform, averaging 5% annually from 1975-1990 but only 2% annually since then. I conclude that the 2025 standards can be met through improvements in efficiency technology, if we can return to 1980s rates of improvement and growth in acceleration performance and feature content is curtailed. I next test the hypotheses that higher fuel prices and more stringent CAFE standards cause automotive firms to deploy efficiency technologies more rapidly. I find some evidence that higher fuel prices cause more rapid changes in technology, but little to no evidence that tighter CAFE standards increase rates of technology change. I conclude that standards alone, without continued high gasoline prices, may not drive technology improvements at the rates needed to meet the 2025 CAFE standards. Finally, I examine the factors determining industry support for nationwide fuel economy regulations. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)
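
A regression of acceleration performance conditional on power and weight, as described above, can be sketched in a simple log-linear form; the data below are simulated and the coefficients illustrative, not the dissertation's estimates.

```python
import numpy as np

# Log-linear sketch: regress log 0-60 mph time on log power and log
# weight by ordinary least squares. Simulated data; signs illustrate
# the expected relationships (more power -> faster, more weight -> slower).

rng = np.random.default_rng(1)
n = 300
log_power = rng.normal(np.log(120), 0.3, n)     # kW (simulated)
log_weight = rng.normal(np.log(1400), 0.2, n)   # kg (simulated)
log_t060 = (2.0
            - 0.8 * (log_power - np.log(120))   # power shortens the time
            + 0.8 * (log_weight - np.log(1400)) # weight lengthens it
            + rng.normal(0, 0.05, n))           # noise

X = np.column_stack([np.ones(n), log_power, log_weight])
coef, *_ = np.linalg.lstsq(X, log_t060, rcond=None)
print(coef[1] < 0, coef[2] > 0)  # recovered signs match the simulation
```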

  15. [Research model on commodity specification standard of radix Chinese materia medica].

    PubMed

    Kang, Chuan-Zhi; Zhou, Tao; Jiang, Wei-Ke; Huang, Lu-Qi; Guo, Lan-Ping

    2016-03-01

    As an important part of market commodity circulation, grading standards for traditional Chinese medicine commodities are essential for regulating market order and guaranteeing the quality of medicinal materials. The State Council's plan for the "Protection and Development of Chinese Herbal Medicine (2015-2020)" likewise identifies as important tasks improving circulation norms in the Chinese herbal medicine industry and establishing commodity specification standards for common traditional Chinese medicinal materials. However, for radix herbs, a large class of Chinese herbal medicines, grading standards remain confused in market circulation, and a reasonable research model for developing such standards is lacking. This paper therefore summarizes the research background, present situation and problems, and several key points of commodity specification and grade standards for radix herbs. The research model is then introduced using Pseudostellariae Radix as an example, so as to provide technical support and a reference for formulating commodity specification and grade standards for other radix traditional Chinese medicinal materials. Copyright© by the Chinese Pharmaceutical Association.

  16. Search for the Standard Model Higgs boson produced in association with top quarks and decaying into bb̄ in pp collisions at √s = 8 TeV with the ATLAS detector.

    PubMed

    Aad, G; Abbott, B; Abdallah, J; Abdinov, O; Aben, R; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Affolder, A A; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimoto, G; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Alkire, S P; Allbrooke, B M M; Allport, P P; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Piqueras, D Álvarez; Alviggi, M G; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Arabidze, G; Arai, Y; Araque, J P; Arce, A T H; Arduh, F A; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Axen, B; Ayoub, M K; Azuelos, G; Baak, M A; Baas, A E; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Balek, P; Balestri, T; Balli, F; Banas, E; Banerjee, Sw; Bannoura, A A E; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnes, S L; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, 
J; Bartoldus, R; Barton, A E; Bartos, P; Bassalat, A; Basye, A; Bates, R L; Batista, S J; Batley, J R; Battaglia, M; Bauce, M; Bauer, F; Bawa, H S; Beacham, J B; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, M; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, J K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bender, M; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Bentvelsen, S; Beresford, L; Beretta, M; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernard, N R; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia Bylund, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bevan, A J; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blanco, J E; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boehler, M; Bogaerts, J A; Bogdanchikov, A G; Bohm, C; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brendlinger, K; Brennan, A J; Brenner, L; Brenner, R; Bressler, S; Bristow, K; Bristow, T M; Britton, D; 
Britzger, D; Brochu, F M; Brock, I; Brock, R; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Buchholz, P; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Buzykaev, R; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chang, P; Chapleau, B; Chapman, J D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cheremushkina, E; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Chevalier, L; Chiarella, V; Childers, J T; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choi, K; Chouridou, S; Chow, B K B; Christodoulou, V; 
Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chuinard, A J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Cinca, D; Cindro, V; Cioara, I A; Ciocio, A; Citron, Z H; Ciubancan, M; Clark, A; Clark, B L; Clark, P J; Clarke, R N; Cleland, W; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Compostella, G; Conde Muiño, P; Coniavitis, E; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; D'Auria, S; D'Onofrio, M; Cunha Sargedas De Sousa, M J Da; Via, C Da; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Dandoy, J R; Daniells, A C; Danninger, M; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Delgove, D; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; DeMarco, D A; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Deviveiros, P O; Dewhurst, A; 
Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Diaconu, C; Diamond, M; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Diglio, S; Dimitrievska, A; Dingfelder, J; Dittus, F; Djama, F; Djobava, T; Djuvsland, J I; do Vale, M A B; Dobos, D; Dobre, M; Doglioni, C; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Drechsler, E; Dris, M; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Duschinger, D; Dyndal, M; Eckardt, C; Ecker, K M; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Elliot, A A; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Ernis, G; Ernst, J; Ernst, M; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Giannelli, M Faucci; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Martinez, P Fernandez; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, C; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; 
Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Flowerdew, M J; Formica, A; Forti, A; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Francis, D; Franconi, L; Franklin, M; Fraternali, M; Freeborn, D; French, S T; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudiello, A; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Geisler, M P; Gemme, C; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giorgi, F M; Giorgi, F M; Giraud, P F; Giromini, P; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gkougkousis, E L; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Goujdami, D; Goussiou, A G; Govender, N; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; 
Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Grout, Z J; Guan, L; Guenther, J; Guescini, F; Guest, D; Gueta, O; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Haley, J; Hall, D; Halladjian, G; Hallewell, G D; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hann, R; Hansen, J B; Hansen, J D; Hansen, M C; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harrington, R D; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauser, R; Hauswald, L; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hays, J M; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henkelmann, S; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hetherly, J W; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K H; Hillier, S J; Hinchliffe, I; Hines, E; Hinman, R R; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hohlfeld, M; Hohn, D; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hopkins, W H; Horii, Y; Horton, A J; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hrynevich, A; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, Q; Hu, X; Huang, Y; Hubacek, Z; Hubaut, 
F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Idrissi, Z; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jabbar, S; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansky, R W; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jiggins, S; Jimenez Pena, J; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kahn, S J; Kajomovitz, E; Kalderon, C W; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneda, M; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karamaoun, A; Karastathis, N; Kareem, M J; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kass, R D; Kastanas, A; Kataoka, Y; Katre, A; Katzy, J; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Kehoe, R; Keil, M; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Keyes, R A; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharlamov, A G; Khoo, T J; Khoriauli, G; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kim, Y; Kimura, N; Kind, O M; King, B T; King, M; King, R S B; King, S B; 
Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kiuchi, K; Kivernyk, O; Kladiva, E; Klein, M H; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Kortner, O; Kortner, S; Kosek, T; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumeli-Charalampidi, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Krizka, K; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kucuk, H; Kuday, S; Kuehn, S; Kugel, A; Kuger, F; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunigo, T; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwan, T; Kyriazopoulos, D; La Rosa, A; La Rosa Navarro, J L; La Rotonda, L; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, J C; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Manghi, F Lasagni; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le Dortz, O; Le Guirriec, E; Le 
Menedeu, E; LeBlanc, M; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Leyko, A M; Leyton, M; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Liblong, A; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Long, B A; Long, J D; Long, R E; Looper, K A; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lösel, P J; Lou, X; Lounis, A; Love, J; Love, P A; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Macdonald, C M; Machado Miguens, J; Macina, D; Madaffari, D; Madar, R; Maddocks, H J; Mader, W F; Madsen, A; Maeland, S; Maeno, T; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Maiani, C; Maidantchik, C; Maier, A A; Maier, T; Maio, A; Majewski, S; Makida, Y; Makovec, N; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mancini, G; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres 
Ramos, J; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mantoani, M; Mapelli, L; March, L; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, M; Martin-Haugh, S; Martoiu, V S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazza, S M; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Miglioranzi, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milesi, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Minaenko, A A; Minami, Y; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mitani, T; Mitrevski, J; Mitsou, V A; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morinaga, M; Morisbak, V; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Mortensen, S S; Morton, A; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, R S P; 
Mueller, T; Muenstermann, D; Mullen, P; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagata, K; Nagel, M; Nagy, E; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Naranjo Garcia, R F; Narayan, R; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolopoulos, K; Nilsen, J K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nomachi, M; Nomidis, I; Nooney, T; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Oide, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Owen, R E; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Pan, Y B; Panagiotopoulou, E; Pandini, C E; Panduro Vazquez, J G; Pani, P; Panitkin, S; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; 
Paramonov, A; Paredes Hernandez, D; Parker, M A; Parker, K A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Pauly, T; Pearce, J; Pearson, B; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perini, L; Pernegger, H; Perrella, S; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Pickering, M A; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Pluth, D; Poettgen, R; Poggioli, L; Pohl, D; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Prell, S; Price, D; Price, L E; Primavera, M; Prince, S; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Przybycien, M; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Raddum, S; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Rangel-Smith, C; Rauscher, F; Rave, S; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; 
Reznicek, P; Rezvani, R; Richter, R; Richter, S; Richter-Was, E; Ricken, O; Ridel, M; Rieck, P; Riegel, C J; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ristić, B; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Saez, S M Romano; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Russell, H L; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sabato, G; Sacerdoti, S; Saddique, A; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Saimpert, M; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, C; Sandstroem, R; Sankey, D P C; Sannino, M; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; Sarrazin, B; Sasaki, O; Sasaki, Y; Sato, K; Sauvage, G; Sauvan, E; Savage, G; Savard, P; Sawyer, C; Sawyer, L; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaeffer, J; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Schiavi, C; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schmitt, S; Schneider, B; Schnellbach, Y J; 
Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schopf, E; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seema, P; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shcherbakova, A; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Saadi, D Shoaleh; Shochet, M J; Shojaii, S; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simon, D; Simoniello, R; Sinervo, P; Sinev, N B; Siragusa, G; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinner, M B; Skottowe, H P; Skubic, P; Slater, M; Slavicek, T; Slawinska, M; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, M N K; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosa, D; Sosebee, M; Sotiropoulou, C L; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spalla, M; Spanò, F; Spearman, W R; Spettel, F; 
Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Denis, R D St; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, S; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Tepel, F; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thun, R P; Tibbetts, M J; Torres, R E Ticse; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Torrence, E; 
Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Turvey, A J; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urban, J; Urquijo, P; Urrejola, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valderanis, C; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Velz, T; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, 
P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; Wharton, A M; White, A; White, M J; White, R; White, S; Whiteson, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wildauer, A; Wilkens, H G; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wu, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wyatt, T R; Wynne, B M; Xella, S; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, Y; Yamamoto, A; Yamamoto, S; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, Y; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zalieckas, J; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zhang, D; Zhang, F; Zhang, J; Zhang, L; Zhang, R; Zhang, X; Zhang, Z; Zhao, X; Zhao, Y; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, C; Zhou, L; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; 
Zinonos, Z; Zinser, M; Ziolkowski, M; Živković, L; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zwalinski, L

    A search for the Standard Model Higgs boson produced in association with a top-quark pair, [Formula: see text], is presented. The analysis uses 20.3 fb⁻¹ of pp collision data at [Formula: see text], collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the [Formula: see text] decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by [Formula: see text]+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve discrimination of the irreducible [Formula: see text] background. No significant excess of events above the background expectation is found and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured [Formula: see text] signal cross section to the Standard Model expectation is found to be [Formula: see text] assuming a Higgs boson mass of 125[Formula: see text].

  17. Designing Collaborative Developmental Standards by Refactoring of the Earth Science Models, Libraries, Workflows and Frameworks.

    NASA Astrophysics Data System (ADS)

    Mirvis, E.; Iredell, M.

    2015-12-01

    The operational (OPS) NOAA National Centers for Environmental Prediction (NCEP) suite traditionally consists of a large set of multi-scale HPC models, workflows, scripts, tools and utilities, which depend heavily on a variety of additional components. Namely, this suite utilizes a unique collection of more than 20 in-house developed shared libraries (NCEPLIBS), specific versions of third-party libraries (such as netcdf, HDF, ESMF, jasper, xml, etc.), and an HPC workflow tool within a dedicated (sometimes even vendor-customized) homogeneous HPC system environment. This domain- and site-specific setup, combined with NCEP's product-driven, large-scale real-time data operations, complicates NCEP collaborative development tremendously by reducing the chances of replicating this OPS environment anywhere else. The mission of NOAA/NCEP's Environmental Modeling Center (EMC) is to develop and improve numerical weather, climate, hydrological and ocean prediction through partnership with the research community. Recognizing these difficulties, EMC has recently taken an innovative approach to improving the flexibility of the HPC environment by building the elements of, and a foundation for, an NCEP OPS functionally equivalent environment (FEE), which can also be used to ease external interface constructs. Aiming to reduce the turnaround time of community code enhancements via the Research-to-Operations (R2O) cycle, EMC has developed and deployed several project-subset standards that have already paved the road to NCEP OPS implementation standards. In this topic we will discuss the EMC FEE for O2R requirements and approaches to collaborative standardization, including the NCEPLIBS FEE and model code version control paired with the models' derived customized HPC modules and FEE footprints. 
We will share NCEP/EMC experience and potential in refactoring EMC development processes and legacy codes, and in securing model source code quality standards through the Eclipse IDE integrated with reverse-engineering tools/APIs. We will also report on collaborative efforts in restructuring the NOAA Environmental Modeling System (NEMS), the multi-model coupling framework, and on transitioning the FEE verification methodology.

  18. NREL and IBM Improve Solar Forecasting with Big Data | Energy Systems

    Science.gov Websites

    forecasting model using deep-machine-learning technology. The multi-scale, multi-model tool, named Watt-sun, […] the first standard suite of metrics for this purpose. Validating Watt-sun at multiple sites across the […]

  19. Adjunct antibody administration with standard treatment reduces relapse rates in a murine tuberculosis model of necrotic granulomas.

    PubMed

    Ordonez, Alvaro A; Pokkali, Supriya; Kim, Sunhwa; Carr, Brian; Klunk, Mariah H; Tong, Leah; Saini, Vikram; Chang, Yong S; McKevitt, Matthew; Smith, Victoria; Gossage, David L; Jain, Sanjay K

    2018-01-01

    Matrix metalloproteinase (MMP)-9 is a zinc-dependent protease associated with early immune responses to Mycobacterium tuberculosis infection, macrophage recruitment and granuloma formation. We evaluated whether adjunctive inhibition of MMP-9 could improve the response to standard TB treatment in a mouse model that develops necrotic lesions. Six weeks after an aerosol infection with M. tuberculosis, C3HeB/FeJ mice received standard TB treatment (12 weeks) comprising rifampin, isoniazid and pyrazinamide alone or in combination with either anti-MMP-9 antibody, etanercept (positive control) or isotype antibody (negative control) for 6 weeks. Anti-MMP-9 and the isotype control had comparable high serum exposures and expected terminal half-life. The relapse rate in mice receiving standard TB treatment was 46.6%. Compared to the standard TB treatment, relapse rates in animals that received adjunctive treatments with anti-MMP-9 antibody or etanercept were significantly decreased to 25.9% (P = 0.006) and 29.8% (P = 0.019) respectively, but were not different from the arm that received the isotype control antibody (25.9%). Immunostaining demonstrated localization of MMP-9 primarily in macrophages in both murine and human lung tissues infected with M. tuberculosis, suggesting the importance of MMP-9 in TB pathogenesis. These data suggest that the relapse rates in M. tuberculosis-infected mice may be non-specifically improved by administration of antibodies in conjunction with standard TB treatments. Future studies are needed to evaluate the mechanism(s) leading to improved outcomes with adjunctive antibody treatments.

  1. Technical note: Harmonizing met-ocean model data via standard web services within small research groups

    NASA Astrophysics Data System (ADS)

    Signell, R. P.; Camossi, E.

    2015-11-01

    Work over the last decade has resulted in standardized web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by: (1) making it simple for providers to enable web service access to existing output files; (2) using technology that is free and easy to deploy and configure; and (3) providing tools to communicate with web services that work in existing research environments. We present a simple, local brokering approach that lets modellers continue producing custom data, but virtually aggregates and standardizes the data using the NetCDF Markup Language (NcML). The THREDDS Data Server is used for data delivery, pycsw for data search, NCTOOLBOX (Matlab®) and Iris (Python) for data access, and the Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS. (Mention of trade names or commercial products does not constitute endorsement or recommendation for use by the US Government.)
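    The virtual-aggregation step described above can be illustrated with a minimal NcML fragment. The sketch below generates one with Python's standard library; the directory path, dimension name and file suffix are hypothetical placeholders, and a real deployment would typically write or template the NcML for the THREDDS Data Server directly.

    ```python
    # Build a minimal NcML "joinExisting" aggregation document that virtually
    # merges a directory of per-file NetCDF model output along the time dimension.
    # The scan directory, dimension name and suffix are illustrative placeholders.
    import xml.etree.ElementTree as ET

    NCML_NS = "http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2"

    def make_aggregation_ncml(scan_dir: str, dim: str = "time",
                              suffix: str = ".nc") -> str:
        ET.register_namespace("", NCML_NS)  # emit NcML as the default namespace
        root = ET.Element(f"{{{NCML_NS}}}netcdf")
        agg = ET.SubElement(root, f"{{{NCML_NS}}}aggregation",
                            dimName=dim, type="joinExisting")
        # <scan> tells the server to pick up every matching file in the directory
        ET.SubElement(agg, f"{{{NCML_NS}}}scan", location=scan_dir, suffix=suffix)
        return ET.tostring(root, encoding="unicode")

    if __name__ == "__main__":
        print(make_aggregation_ncml("/data/model/output"))
    ```

    Served through THREDDS, such a fragment lets clients see one continuous dataset while the modeller keeps writing individual output files unchanged.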

  2. The North American Carbon Program Multi-scale Synthesis and Terrestrial Model Intercomparison Project – Part 2: Environmental driver data

    DOE PAGES

    Wei, Yaxing; Liu, Shishi; Huntzinger, Deborah N.; ...

    2014-12-05

    Ecosystems are important and dynamic components of the global carbon cycle, and terrestrial biospheric models (TBMs) are crucial tools in further understanding of how terrestrial carbon is stored and exchanged with the atmosphere across a variety of spatial and temporal scales. Improving TBM skills, and quantifying and reducing their estimation uncertainties, pose significant challenges. The Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) is a formal multi-scale and multi-model intercomparison effort set up to tackle these challenges. The MsTMIP protocol prescribes standardized environmental driver data that are shared among model teams to facilitate model model and model observation comparisons. Inmore » this article, we describe the global and North American environmental driver data sets prepared for the MsTMIP activity to both support their use in MsTMIP and make these data, along with the processes used in selecting/processing these data, accessible to a broader audience. Based on project needs and lessons learned from past model intercomparison activities, we compiled climate, atmospheric CO 2 concentrations, nitrogen deposition, land use and land cover change (LULCC), C3 / C4 grasses fractions, major crops, phenology and soil data into a standard format for global (0.5⁰ x 0.5⁰ resolution) and regional (North American: 0.25⁰ x 0.25⁰ resolution) simulations. In order to meet the needs of MsTMIP, improvements were made to several of the original environmental data sets, by improving the quality, and/or changing their spatial and temporal coverage, and resolution. The resulting standardized model driver data sets are being used by over 20 different models participating in MsTMIP. Lastly, the data are archived at the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC, http://daac.ornl.gov) to provide long-term data management and distribution.« less
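    The two grid resolutions mentioned above are regular latitude-longitude grids; as a minimal sketch (assuming a conventional cell-center layout, which is an assumption here rather than a statement of the exact MsTMIP file convention), the global 0.5° x 0.5° grid can be enumerated as:

    ```python
    # Cell-center coordinates for a regular res x res degree global grid,
    # e.g. the 0.5-degree global case: 360 latitude rows x 720 longitude columns.
    # The cell-center layout is an assumed convention for illustration.
    import numpy as np

    def cell_centers(res: float):
        lats = np.arange(-90 + res / 2, 90, res)    # -89.75 ... 89.75 for res=0.5
        lons = np.arange(-180 + res / 2, 180, res)  # -179.75 ... 179.75 for res=0.5
        return lats, lons

    lats, lons = cell_centers(0.5)
    # len(lats) == 360, len(lons) == 720 for the global 0.5-degree grid
    ```

    The regional 0.25° x 0.25° North American grid follows the same construction over its (smaller) domain bounds.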

  3. Design and in vivo evaluation of more efficient and selective deep brain stimulation electrodes

    NASA Astrophysics Data System (ADS)

    Howell, Bryan; Huynh, Brian; Grill, Warren M.

    2015-08-01

    Objective. Deep brain stimulation (DBS) is an effective treatment for movement disorders and a promising therapy for treating epilepsy and psychiatric disorders. Despite its clinical success, the efficiency and selectivity of DBS can be improved. Our objective was to design electrode geometries that increased the efficiency and selectivity of DBS. Approach. We coupled computational models of electrodes in brain tissue with cable models of axons of passage (AOPs), terminating axons (TAs), and local neurons (LNs); we used engineering optimization to design electrodes for stimulating these neural elements; and the model predictions were tested in vivo. Main results. Compared with the standard electrode used in the Medtronic Model 3387 and 3389 arrays, model-optimized electrodes consumed 45-84% less power. Similar gains in selectivity were evident with the optimized electrodes: 50% of parallel AOPs could be activated while reducing activation of perpendicular AOPs from 44 to 48% with the standard electrode to 0-14% with bipolar designs; 50% of perpendicular AOPs could be activated while reducing activation of parallel AOPs from 53 to 55% with the standard electrode to 1-5% with an array of cathodes; and, 50% of TAs could be activated while reducing activation of AOPs from 43 to 100% with the standard electrode to 2-15% with a distal anode. In vivo, both the geometry and polarity of the electrode had a profound impact on the efficiency and selectivity of stimulation. Significance. Model-based design is a powerful tool that can be used to improve the efficiency and selectivity of DBS electrodes.

  4. Conformal standard model with an extended scalar sector

    NASA Astrophysics Data System (ADS)

    Latosinski, Adam; Lewandowski, Adrian; Meissner, Krzysztof A.; Nicolai, Hermann

    2015-10-01

    We present an extended version of the Conformal Standard Model (characterized by the absence of any new intermediate scales between the electroweak scale and the Planck scale) with an enlarged scalar sector coupling to right-chiral neutrinos. The scalar potential and the Yukawa couplings involving only right-chiral neutrinos are invariant under a new global symmetry SU(3) N that complements the standard U(1) B-L symmetry, and is broken explicitly only by the Yukawa interaction, of order O (10-6), coupling right-chiral neutrinos and the electroweak lepton doublets. We point out four main advantages of this enlargement, namely: (1) the economy of the (non-supersymmetric) Standard Model, and thus its observational success, is preserved; (2) thanks to the enlarged scalar sector the RG improved one-loop effective potential is everywhere positive with a stable global minimum, thereby avoiding the notorious instability of the Standard Model vacuum; (3) the pseudo-Goldstone bosons resulting from spontaneous breaking of the SU(3) N symmetry are natural Dark Matter candidates with calculable small masses and couplings; and (4) the Majorana Yukawa coupling matrix acquires a form naturally adapted to leptogenesis. The model is made perturbatively consistent up to the Planck scale by imposing the vanishing of quadratic divergences at the Planck scale (`softly broken conformal symmetry'). Observable consequences of the model occur mainly via the mixing of the new scalars and the standard model Higgs boson.

  5. New features to the night sky radiance model illumina: Hyperspectral support, improved obstacles and cloud reflection

    NASA Astrophysics Data System (ADS)

    Aubé, M.; Simoneau, A.

    2018-05-01

    Illumina is one of the most physically detailed artificial night sky brightness model to date. It has been in continuous development since 2005 [1]. In 2016-17, many improvements were made to the Illumina code including an overhead cloud scheme, an improved blocking scheme for subgrid obstacles (trees and buildings), and most importantly, a full hyperspectral modeling approach. Code optimization resulted in significant reduction in execution time enabling users to run the model on standard personal computers for some applications. After describing the new schemes introduced in the model, we give some examples of applications for a peri-urban and a rural site both located inside the International Dark Sky reserve of Mont-Mégantic (QC, Canada).

  6. Progress toward a new beam measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Hoogerheide, Shannon Fogwell

    2016-09-01

    Neutron beta decay is the simplest example of nuclear beta decay. A precise value of the neutron lifetime is important for consistency tests of the Standard Model and Big Bang Nucleosynthesis models. The beam neutron lifetime method requires the absolute counting of the decay protons in a neutron beam of precisely known flux. Recent work has resulted in improvements in both the neutron and proton detection systems that should permit a significant reduction in systematic uncertainties. A new measurement of the neutron lifetime using the beam method will be performed at the National Institute of Standards and Technology Center for Neutron Research. The projected uncertainty of this new measurement is 1 s. An overview of the measurement and the technical improvements will be discussed.

  7. Improved search for a Higgs boson produced in association with Z → ℓ⁺ℓ⁻ in pp̄ collisions at √s = 1.96 TeV.

    PubMed

    Aaltonen, T; González, B Alvarez; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Appel, J A; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Auerbach, B; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauce, M; Bauer, G; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Bland, K R; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Brisuda, A; Bromberg, C; Brucken, E; Bucciantonio, M; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Dagenhart, D; d'Ascenzo, N; Datta, M; de Barbaro, P; De Cecco, S; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Devoto, F; d'Errico, M; Di Canto, A; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Ebina, K; Elagin, A; Eppig, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerchtein, E; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Ginsburg, C M; Giokaris, N; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, 
V; Glenzinski, D; Gold, M; Goldin, D; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Hamaguchi, A; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Hewamanage, S; Hidas, D; Hocker, A; Hopkins, W; Horn, D; Hou, S; Hughes, R E; Hurwitz, M; Husemann, U; Hussain, N; Hussein, M; Huston, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leo, S; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, Q; Liu, T; Lockwitz, S; Lockyer, N S; Loginov, A; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Mitra, A; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Fernandez, P Movilla; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; 
Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Ortolan, L; Griso, S Pagan; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pilot, J; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Rescigno, M; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Ruffini, F; Ruiz, A; Russ, J; Rusu, V; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Saveliev, V; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shreyber, I; Simonenko, A; Sinervo, P; Sissakian, A; Sliwa, K; Smith, J R; Snider, F D; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Sudo, Y; Sukhanov, A; Suslov, I; Takemasa, K; Takeuchi, Y; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Trovato, M; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volpi, G; Wagner, P; Wagner, R L; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Wick, F; Williams, H H; Wilson, J S; Wilson, P; Winer, B L; Wittich, P; 
Wolbers, S; Wolfe, H; Wright, T; Wu, X; Wu, Z; Yamamoto, K; Yamaoka, J; Yang, T; Yang, U K; Yang, Y C; Yao, W-M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zucchelli, S

    2010-12-17

    We search for the standard model Higgs boson produced with a Z boson in 4.1 fb⁻¹ of integrated luminosity collected with the CDF II detector at the Tevatron. In events consistent with the decay of the Higgs boson to a bottom-quark pair and the Z boson to electrons or muons, we set 95% credibility level upper limits on the ZH production cross section multiplied by the H → bb̄ branching ratio. Improved analysis methods enhance signal sensitivity by 20% relative to previous searches. At a Higgs boson mass of 115 GeV/c² we set a limit of 5.9 times the standard model cross section.

  8. The regionalization of national-scale SPARROW models for stream nutrients

    USGS Publications Warehouse

    Schwarz, Gregory E.; Alexander, Richard B.; Smith, Richard A.; Preston, Stephen D.

    2011-01-01

    This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models.
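    The cross-region constraint described above is a form of partial pooling: each regional coefficient is penalized toward the national estimate, trading a little regional flexibility for much greater precision. A minimal sketch of that idea (the synthetic three-region data and the penalty weight `lam` are illustrative assumptions, not the SPARROW estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: three regions share a "national" slope with small
# regional deviations; a quadratic penalty on the deviation shrinks each
# regional coefficient toward the national estimate (partial pooling).
national_beta = 2.0
regions = {r: national_beta + dev for r, dev in zip("ABC", (0.3, -0.2, 0.1))}

X, y, labels = [], [], []
for r, beta in regions.items():
    x = rng.uniform(0, 10, 50)
    X.append(x)
    labels += [r] * 50
    y.append(beta * x + rng.normal(0, 1.0, 50))
X, y, labels = np.concatenate(X), np.concatenate(y), np.array(labels)

def fit_hybrid(lam):
    """Per-region slope b_r minimizing ||y - b_r x||^2 + lam*(b_r - b_nat)^2."""
    b_nat = (X @ y) / (X @ X)                 # pooled (national) estimate
    out = {}
    for r in "ABC":
        m = labels == r
        xr, yr = X[m], y[m]
        # closed-form ridge solution shrinking toward b_nat
        out[r] = (xr @ yr + lam * b_nat) / (xr @ xr + lam)
    return b_nat, out

b_nat, b_free = fit_hybrid(lam=0.0)    # independent regional models
_, b_shrunk = fit_hybrid(lam=1e4)      # heavily constrained hybrid model
```

With a large penalty the regional slopes collapse toward the national value; intermediate penalties reproduce the paper's middle ground between one national model and fully independent regional models.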

  9. Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used in place of the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. To sum up, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the slice-by-slice manual drawing of ROIs on structural medical images required in the traditional procedure. That is, the method not only provides precise analysis results, but also improves the processing rate for mass medical images in clinical practice.
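    The mutual information registration step mentioned above maximizes a histogram-based similarity measure between the two images. A minimal sketch of that measure on synthetic images (this is the textbook joint-histogram estimator, not the authors' implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized images, estimated
    from their joint intensity histogram (the similarity measure
    maximized in MI-based image registration)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image B
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))
aligned = a + 0.1 * rng.normal(size=a.shape)              # well-registered
shuffled = rng.permutation(aligned.ravel()).reshape(a.shape)  # misaligned

mi_aligned = mutual_information(a, aligned)
mi_shuffled = mutual_information(a, shuffled)
```

A registration algorithm searches over rigid transformations of one image for the pose that maximizes this quantity; shuffling (or misaligning) the second image drives it toward zero.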

  10. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
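    The type I/type II accounting above reduces to comparing a predictor against the true exceedance of a single-sample standard. A minimal sketch with synthetic data (the error magnitudes and the lagged-culture stand-in for the current protocol are illustrative assumptions, not the Chicago data):

```python
import numpy as np

# Single-sample standard for E. coli, in CFU/100 ml.
STANDARD = 235.0

rng = np.random.default_rng(2)
true_log = rng.normal(1.8, 0.6, 1000)            # today's true log10 FIB
yesterday = true_log + rng.normal(0, 0.5, 1000)  # lagged culture (current protocol)
model = true_log + rng.normal(0, 0.3, 1000)      # better same-day predictor

def management_errors(pred_log, truth_log, std=np.log10(STANDARD)):
    exceed = truth_log > std
    closed = pred_log > std
    type_i = np.sum(closed & ~exceed)   # closed despite acceptable water
    type_ii = np.sum(~closed & exceed)  # open while FIB exceeded the standard
    return type_i, type_ii

err_persist = sum(management_errors(yesterday, true_log))
err_model = sum(management_errors(model, true_log))
```

In this stylized setup the same-day model makes fewer total management errors than the lagged culture, which is exactly the comparison the study performs beach by beach.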

  11. Technical Standards for Nursing Education Programs in the 21st Century.

    PubMed

    Ailey, Sarah H; Marks, Beth

    The Institute of Medicine (2000, 2002) exposed serious safety problems in the health system and called for total qualitative system change. The Institute of Medicine (2011, 2015) also calls for improving the education of nurses to provide leadership for a redesigned health system. Intertwined with improving education is the need to recruit and retain diverse highly qualified students. Disability is part of diversity inclusion, but current technical standards (nonacademic requirements) for admission to many nursing programs are a barrier to the entry of persons with disabilities. Rehabilitation nurse leaders are in a unique position to improve disability diversity in nursing. The purpose of this paper is to discuss the importance of disability diversity in nursing. The history of existing technical standards used in many nursing programs is reviewed along with examples. On the basis of the concept that disability inclusion is a part of diversity inclusion, we propose a new model of technical standards for nursing education. Rehabilitation nurse leaders can lead in eliminating barriers to persons with disabilities entering nursing.

  12. A comparison of selected models for estimating cable icing

    NASA Astrophysics Data System (ADS)

    McComber, Pierre; Druez, Jacques; Laflamme, Jean

    In many cold climate countries, it is becoming increasingly important to monitor transmission line icing. Indeed, by knowing in advance of localized danger of icing overloads, electric utilities can take measures in time to prevent generalized failure of the power transmission network. Recently in Canada, a study compared several icing models that estimate ice loads for freezing rain events from meteorological data. The models tested used only standard meteorological parameters, i.e., wind speed and direction, temperature, and precipitation rate. This study showed that standard meteorological parameters alone can achieve only very limited accuracy, especially for longer icing events. However, with the help of an additional instrument monitoring the icing rate intensity, a significant improvement in model prediction might be achieved. The icing rate meter (IRM), which counts icing and de-icing cycles per unit time on a standard probe, can be used to estimate the icing intensity. A cable icing estimate is then made by taking into consideration the accretion size, temperature, wind speed and direction, and precipitation rate. In this paper, a comparison is made between the predictions of two previously tested models (one obtained and the other reconstructed from its description in the public literature) and of a model based on the icing rate meter readings. The models are tested against nineteen events recorded on an icing test line at Mt. Valin, Canada, during the winter season of 1991-1992. These events are mostly rime resulting from in-cloud icing; however, freezing rain and wet snow events were also recorded. Results indicate that a significant improvement in the estimation is attained by using the icing rate meter data together with the other standard meteorological parameters.

  13. Precision electroweak physics at LEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mannelli, M.

    1994-12-01

    Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z⁰ resonance have allowed the LEP experiments both to provide dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements sensitively probes the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark lies within a restricted range of about 20 GeV.

  14. Test of a Power Transfer Model for Standardized Electrofishing

    USGS Publications Warehouse

    Miranda, L.E.; Dolan, C.R.

    2003-01-01

    Standardization of electrofishing in waters with differing conductivities is critical when monitoring temporal and spatial differences in fish assemblages. We tested a model that can help improve the consistency of electrofishing by allowing control over the amount of power that is transferred to the fish. The primary objective was to verify, under controlled laboratory conditions, whether the model adequately described fish immobilization responses elicited with various electrical settings over a range of water conductivities. We found that the model accurately described empirical observations over conductivities ranging from 12 to 1,030 µS/cm for DC and various pulsed-DC settings. Because the model requires knowledge of a fish's effective conductivity, an attribute that is likely to vary according to species, size, temperature, and other variables, a second objective was to gather available estimates of the effective conductivity of fish to examine the magnitude of variation and to assess whether in practical applications a standard effective conductivity value for fish may be assumed. We found that applying a standard fish effective conductivity of 115 µS/cm introduced relatively little error into the estimation of the peak power density required to immobilize fish with electrofishing. However, this standard was derived from few estimates of fish effective conductivity and a limited number of species; more estimates are needed to validate our working standard.

  15. MODTRAN4 radiative transfer modeling for atmospheric correction

    NASA Astrophysics Data System (ADS)

    Berk, Alexander; Anderson, Gail P.; Bernstein, Lawrence S.; Acharya, Prabhat K.; Dothe, H.; Matthew, Michael W.; Adler-Golden, Steven M.; Chetwynd, James H.; Richtsmeier, Steven C.; Pukall, Brian; Allred, Clark L.; Jeong, Laila S.; Hoke, Michael L.

    1999-10-01

    MODTRAN4, the latest publicly released version of MODTRAN, provides many new and important options for modeling atmospheric radiation transport. A correlated-k algorithm improves multiple scattering, eliminates Curtis-Godson averaging, and introduces Beer's Law dependencies into the band model. An optimized 15 cm⁻¹ band model provides more than a 10-fold increase in speed over the standard MODTRAN 1 cm⁻¹ band model, with comparable accuracy when higher spectral resolution is unnecessary. The MODTRAN ground surface has been upgraded to include the effects of Bidirectional Reflectance Distribution Functions (BRDFs) and adjacency. The BRDFs are entered using standard parameterizations and are coupled into line-of-sight surface radiance calculations.

  16. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
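    The simplest model-adjustment procedure of this kind regresses locally observed loads on the regional-model predictions (typically in log space) and uses the fitted line to correct future predictions. A hedged sketch on synthetic storms (the bias structure and the 45-storm sample size are illustrative, not the Tennessee data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative only: a regional model that is systematically biased for
# local watersheds; the single-variable adjustment fits observed log-loads
# against the regional log-predictions and applies the fitted correction.
true_load = 10 ** rng.normal(2.0, 0.5, 45)     # 45 local storm loads
regional_pred = true_load ** 0.8 * 30          # biased regional prediction

x = np.log10(regional_pred)
ylog = np.log10(true_load) + rng.normal(0, 0.1, x.size)  # local observations
slope, intercept = np.polyfit(x, ylog, 1)      # local calibration line

def adjusted(pred):
    """Apply the local adjustment to a regional prediction."""
    return 10 ** (intercept + slope * np.log10(pred))

rmse_raw = np.sqrt(np.mean((np.log10(regional_pred) - np.log10(true_load)) ** 2))
rmse_adj = np.sqrt(np.mean((np.log10(adjusted(regional_pred)) - np.log10(true_load)) ** 2))
```

The adjusted predictions remove the systematic bias, which is why the paper's standard errors fall sharply after adjustment even though substantial scatter can remain.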

  17. Lorentz-symmetry test at Planck-scale suppression with nucleons in a spin-polarized 133Cs cold atom clock

    NASA Astrophysics Data System (ADS)

    Pihan-Le Bars, H.; Guerlin, C.; Lasseri, R.-D.; Ebran, J.-P.; Bailey, Q. G.; Bize, S.; Khan, E.; Wolf, P.

    2017-04-01

    We introduce an improved model that links the frequency shift of the 133Cs hyperfine Zeeman transitions |F=3, m_F⟩ ↔ |F=4, m_F⟩ to the Lorentz-violating Standard Model extension (SME) coefficients of the proton and neutron. The new model uses Lorentz transformations developed to second order in boost and additionally takes the nuclear structure into account, beyond the simple Schmidt model used previously in Standard Model extension analyses, thereby providing access to both proton and neutron SME coefficients, including the isotropic coefficient c̃_TT. Using this new model in a second analysis of the data delivered by the FO2 dual Cs/Rb fountain at Paris Observatory and previously analyzed in [1], we improve by up to 13 orders of magnitude the present maximum sensitivities for laboratory tests [2] on the c̃_Q, c̃_TJ, and c̃_TT coefficients for the neutron and on the c̃_Q coefficient for the proton, reaching 10⁻²⁰, 10⁻¹⁷, 10⁻¹³, and 10⁻¹⁵ GeV, respectively.

  18. Method to improve accuracy of positioning an object by an eLoran system applying a standard Kalman filter

    NASA Astrophysics Data System (ADS)

    Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.

    2018-05-01

    This article reports on a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. We investigate how the filter parameters and the type of movement affect the accuracy of the vehicle position estimate. The accuracy of the method was evaluated by separating data from the semi-empirical movement model into different types of movement.
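    A standard linear Kalman filter with a constant-velocity motion model is the textbook form of the approach described above. The sketch below smooths noisy 2-D position fixes; the motion model, noise covariances, and simulated track are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)     # state: x, y, vx, vy
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)      # only position is observed
Q = 0.01 * np.eye(4)                     # process-noise covariance
R = 25.0 * np.eye(2)                     # measurement noise (5 m sigma)

def kalman(fix_sequence):
    """Run a standard Kalman filter over a sequence of 2-D position fixes."""
    x = np.zeros(4)
    P = 100.0 * np.eye(4)
    estimates = []
    for z in fix_sequence:
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (z - H @ x)          # update with the new fix
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)

rng = np.random.default_rng(4)
t = np.arange(200.0)
truth = np.stack([3.0 * t, 1.5 * t], axis=1)        # straight track
fixes = truth + rng.normal(0, 5.0, truth.shape)     # noisy eLoran-like fixes
est = kalman(fixes)

raw_rmse = np.sqrt(np.mean((fixes[50:] - truth[50:]) ** 2))
kf_rmse = np.sqrt(np.mean((est[50:] - truth[50:]) ** 2))
```

After the filter converges (the first 50 fixes are skipped), the filtered track is substantially closer to the truth than the raw fixes, which is the improvement the article quantifies for different movement types and filter settings.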

  19. Cardiac arrest risk standardization using administrative data compared to registry data.

    PubMed

    Grossestreuer, Anne V; Gaieski, David F; Donnino, Michael W; Nelson, Joshua I M; Mutter, Eric L; Carr, Brendan G; Abella, Benjamin S; Wiebe, Douglas J

    2017-01-01

    Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Two risk standardization logistic regression models were developed using 2453 patients treated from 2000-2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the "gold standard" with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876-0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895-0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799-0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788-0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA.
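    The c-statistic compared above is the probability that a randomly chosen death was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch of that comparison on synthetic data (the simulated "registry" and "administrative" scores, with the latter noisier, are an assumption for illustration, not the study's models):

```python
import numpy as np

def c_statistic(risk, died):
    """Concordance (c-statistic / ROC AUC): probability a random death
    received a higher predicted risk than a random survivor, with ties
    counting one half."""
    risk = np.asarray(risk, float)
    died = np.asarray(died, bool)
    pos, neg = risk[died], risk[~died]
    diff = pos[:, None] - neg[None, :]          # all death/survivor pairs
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (pos.size * neg.size)

rng = np.random.default_rng(5)
n = 500
severity = rng.normal(size=n)                   # latent patient severity
death = rng.random(n) < 1 / (1 + np.exp(-severity))
registry_score = severity + rng.normal(0, 0.3, n)   # richer clinical detail
admin_score = severity + rng.normal(0, 0.6, n)      # coarser claims data

c_registry = c_statistic(registry_score, death)
c_admin = c_statistic(admin_score, death)
```

The study's finding is that, on real data, the administrative c-statistic sits close enough to the registry "gold standard" that valid risk standardization is possible without registry participation.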

  20. Breast cancer screening services: trade-offs in quality, capacity, outreach, and centralization.

    PubMed

    Güneş, Evrim D; Chick, Stephen E; Akşin, O Zeynep

    2004-11-01

    This work combines and extends previous work on breast cancer screening models by explicitly incorporating, for the first time, aspects of the dynamics of health care states, program outreach, and the screening volume-quality relationship in a service system model to examine the effect of public health policy and service capacity decisions on public health outcomes. We consider the impact of increasing standards for minimum reading volume to improve quality, expanding outreach with or without decentralization of service facilities, and the potential of queueing due to stochastic effects and limited capacity. The results indicate a strong relation between screening quality and the cost of screening and treatment, and emphasize the importance of accounting for service dynamics when assessing the performance of health care interventions. For breast cancer screening, increasing outreach without improving quality and maintaining capacity results in less benefit than predicted by standard models.
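    The queueing effect of stochastic demand against limited capacity can be illustrated with the textbook M/M/c (Erlang C) waiting probability; the arrival and service rates below are hypothetical, not the paper's service-system model:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arrival must wait in an M/M/c queue (Erlang C),
    showing how random demand plus limited capacity creates delay even
    when average utilization is below 1."""
    a = arrival_rate / service_rate            # offered load in Erlangs
    rho = a / servers                          # per-server utilization
    if rho >= 1:
        return 1.0                             # unstable: everyone waits
    top = a ** servers / factorial(servers) / (1 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

# Expanding outreach without adding capacity sharply raises waiting.
p_wait_base = erlang_c(arrival_rate=8.0, service_rate=1.0, servers=10)
p_wait_outreach = erlang_c(arrival_rate=9.5, service_rate=1.0, servers=10)
```

This is the qualitative point of the abstract: increasing outreach (arrival rate) while holding capacity fixed degrades service, so outreach, quality, and capacity decisions must be assessed jointly.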

  1. Improved Limits on B⁰ Decays to Invisible (+γ) Final States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lees, J.P.; Poireau, V.; Tisserand, V.

    2013-11-01

    We establish improved upper limits on branching fractions for B⁰ decays to final states where the decay products are purely invisible (i.e., no observable final-state particles) and for final states where the only visible product is a photon. Within the Standard Model, these decays have branching fractions below the current experimental sensitivity, but various models of physics beyond the Standard Model predict significant contributions for these channels. Using 471 million BB̄ pairs collected at the Υ(4S) resonance by the BABAR experiment at the PEP-II e⁺e⁻ storage ring at the SLAC National Accelerator Laboratory, we establish upper limits at the 90% confidence level of 2.4 × 10⁻⁵ for the branching fraction of B⁰ → invisible and 1.7 × 10⁻⁵ for the branching fraction of B⁰ → invisible + γ.
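    A 90% CL upper limit in a counting search of this kind reduces, in the simplest case, to finding the largest Poisson signal mean still compatible with the observed count. A hedged sketch of that textbook construction (this is not the BABAR statistical procedure, which accounts for efficiency, background, and systematics):

```python
from math import exp, factorial

def poisson_upper_limit(n_obs, cl=0.90, background=0.0):
    """Classical counting-experiment limit: the signal mean s_up such
    that P(N <= n_obs | s_up + background) = 1 - cl, found by bisection."""
    def p_le(mu):
        return sum(exp(-mu) * mu ** k / factorial(k) for k in range(n_obs + 1))
    lo, hi = 0.0, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if p_le(mid + background) > 1 - cl:
            lo = mid                     # still too compatible: raise s
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_up = poisson_upper_limit(0)            # zero events observed -> ln(10)
# Dividing s_up by (efficiency x number of B0 mesons) would convert the
# event-count limit into a branching-fraction limit.
```

For zero observed events and no background this gives the familiar 2.30-event limit at 90% CL.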

  2. Imagining the Future, or How the Standard Model May Survive the Attacks

    NASA Astrophysics Data System (ADS)

    't Hooft, Gerard

    After the last missing piece, the Higgs particle, has probably been identified, the Standard Model of the subatomic particles appears to be a quite robust structure, that can survive on its own for a long time to come. Most researchers expect considerable modifications and improvements to come in the near future, but it could also be that the Model will stay essentially as it is. This, however, would also require a change in our thinking, and the question remains whether and how it can be reconciled with our desire for our theories to be "natural".

  3. Imagining the future, or how the Standard Model may survive the attacks

    NASA Astrophysics Data System (ADS)

    'T Hooft, Gerard

    2016-06-01

    After the last missing piece, the Higgs particle, has probably been identified, the Standard Model of the subatomic particles appears to be a quite robust structure, that can survive on its own for a long time to come. Most researchers expect considerable modifications and improvements to come in the near future, but it could also be that the Model will stay essentially as it is. This, however, would also require a change in our thinking, and the question remains whether and how it can be reconciled with our desire for our theories to be “natural”.

  4. Cognitive Training and Transcranial Direct Current Stimulation for Mild Cognitive Impairment in Parkinson's Disease: A Randomized Controlled Trial

    PubMed Central

    Gasson, Natalie; Johnson, Andrew R.; Booth, Leon; Loftus, Andrea M.

    2018-01-01

    This study examined whether standard cognitive training, tailored cognitive training, transcranial direct current stimulation (tDCS), standard cognitive training + tDCS, or tailored cognitive training + tDCS improved cognitive function and functional outcomes in participants with PD and mild cognitive impairment (PD-MCI). Forty-two participants with PD-MCI were randomized to one of six groups: (1) standard cognitive training, (2) tailored cognitive training, (3) tDCS, (4) standard cognitive training + tDCS, (5) tailored cognitive training + tDCS, or (6) a control group. Interventions lasted 4 weeks, with cognitive and functional outcomes measured at baseline, post-intervention, and follow-up. The trial was registered with the Australian New Zealand Clinical Trials Registry (ANZCTR: 12614001039673). While controlling for moderator variables, Generalized Linear Mixed Models (GLMMs) showed that when compared to the control group, the intervention groups demonstrated variable statistically significant improvements across executive function, attention/working memory, memory, language, activities of daily living (ADL), and quality of life (QOL; Hedges' g range = 0.01 to 1.75). More outcomes improved for the groups that received standard or tailored cognitive training combined with tDCS. Participants with PD-MCI receiving cognitive training (standard or tailored) or tDCS demonstrated significant improvements on cognitive and functional outcomes, and combining these interventions provided greater therapeutic effects. PMID:29780572
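    The effect sizes reported above are Hedges' g, a standardized mean difference with a small-sample bias correction. A minimal sketch with made-up outcome scores (the example values are illustrative, not trial data):

```python
import numpy as np

def hedges_g(treatment, control):
    """Hedges' g: Cohen's d with the small-sample bias correction
    J = 1 - 3 / (4*df - 1), where df = n_t + n_c - 2."""
    t = np.asarray(treatment, float)
    c = np.asarray(control, float)
    nt, nc = t.size, c.size
    sp = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                 / (nt + nc - 2))               # pooled standard deviation
    d = (t.mean() - c.mean()) / sp              # Cohen's d
    j = 1 - 3 / (4 * (nt + nc - 2) - 1)         # bias-correction factor
    return j * d

# Hypothetical post-intervention scores for one outcome measure.
post_intervention = [24, 26, 23, 27, 25, 28, 26]
post_control = [22, 23, 21, 24, 22, 23, 22]
g = hedges_g(post_intervention, post_control)
```

With group sizes of about seven per arm, as in this trial, the correction factor J noticeably shrinks the raw Cohen's d, which is why Hedges' g is preferred for small samples.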

  5. Improved formalism for precision Higgs coupling fits

    NASA Astrophysics Data System (ADS)

    Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; Karl, Robert; List, Jenny; Ogawa, Tomohisa; Peskin, Michael E.; Tian, Junping

    2018-03-01

    Future e+e- colliders give the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e+e- data, based on the effective field theory description of corrections to the Standard Model. We apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e+e- colliders.
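
    The coupling-extraction idea can be illustrated with a toy chi-square fit: pseudo-measurements of rates that scale as powers of coupling modifiers are scanned for the best-fit values. The observables, scalings, and uncertainties below are illustrative assumptions, not the paper's EFT formalism.

```python
def chi2(kz, kb, meas, err):
    """Toy chi-square for two normalized Higgsstrahlung-type rates,
    r1 ~ kappa_Z**2 and r2 ~ kappa_Z**2 * kappa_b**2. The observables,
    scalings, and uncertainties are illustrative, not the paper's fit."""
    pred = [kz ** 2, kz ** 2 * kb ** 2]
    return sum(((m - p) / e) ** 2 for m, p, e in zip(meas, pred, err))

def grid_fit(meas, err, lo=0.8, hi=1.2, step=0.001):
    """Brute-force grid scan for the best-fit coupling modifiers."""
    ks = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    best = (float("inf"), 1.0, 1.0)
    for kz in ks:
        for kb in ks:
            c = chi2(kz, kb, meas, err)
            if c < best[0]:
                best = (c, kz, kb)
    return best

# Pseudo-measurements: both rates within a few percent of the SM value 1.
best_chi2, kz, kb = grid_fit([1.02, 0.97], [0.02, 0.03])
```

    With these invented inputs the scan recovers coupling modifiers close to 1, mimicking how small rate deviations translate into coupling constraints; a real analysis minimizes over many more observables and nuisance parameters simultaneously.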

  6. A Risk Assessment Model for Reduced Aircraft Separation: A Quantitative Method to Evaluate the Safety of Free Flight

    NASA Technical Reports Server (NTRS)

    Cassell, Rick; Smith, Alex; Connors, Mary; Wojciech, Jack; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    As new technologies and procedures are introduced into the National Airspace System, whether they are intended to improve efficiency, capacity, or safety level, the quantification of potential changes in safety levels is of vital concern. Applications of technology can improve safety levels and allow the reduction of separation standards. An excellent example is the Precision Runway Monitor (PRM). By taking advantage of the surveillance and display advances of PRM, airports can run instrument parallel approaches to runways separated by 3400 feet with the same level of safety as parallel approaches to runways separated by 4300 feet using the standard technology. Despite a wealth of information from flight operations and testing programs, there is no readily quantifiable relationship between numerical safety levels and the separation standards that apply to aircraft on final approach. This paper presents a modeling approach to quantify the risk associated with reducing separation on final approach. Reducing aircraft separation, both laterally and longitudinally, has been the goal of several aviation R&D programs over the past several years. Many of these programs have focused on technological solutions to improve navigation accuracy, surveillance accuracy, aircraft situational awareness, controller situational awareness, and other technical and operational factors that are vital to maintaining flight safety. The risk assessment model relates different types of potential aircraft accidents and incidents and their contribution to overall accident risk. The framework links accident risks to a hierarchy of failsafe mechanisms characterized by procedures and interventions. The model will be used to assess the overall level of safety associated with reducing separation standards and the introduction of new technology and procedures, as envisaged under the Free Flight concept. 
The model framework can be applied to various aircraft scenarios, including parallel and in-trail approaches. This research was performed under contract to NASA and in cooperation with the FAA's Safety Division (ASY).
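
    The hierarchy of fail-safe mechanisms described above can be sketched as a chain of conditional failures: an accident requires the initiating event and every intervention layer to fail. All rates and probabilities below are invented for illustration only.

```python
def accident_risk(initiating_event_rate, layer_failure_probs):
    """Toy fault-tree: overall accident risk is the initiating-event rate
    multiplied by the failure probability of each fail-safe layer
    (e.g. surveillance alert, controller intervention, pilot go-around).
    All numbers used with this function are illustrative, not safety data."""
    risk = initiating_event_rate
    for p in layer_failure_probs:
        risk *= p
    return risk

# Hypothetical comparison: improved surveillance (as with PRM) lowers the
# alerting layer's failure probability, offsetting the greater blunder
# exposure of a reduced separation standard.
baseline = accident_risk(1e-4, [0.20, 0.10])   # standard technology
improved = accident_risk(2e-4, [0.02, 0.10])   # reduced separation + PRM
```

    The layered form makes explicit how strengthening one intervention layer can hold total risk constant, or lower it, while separation shrinks.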

  7. Satellite Sounder Data Assimilation for Improving Alaska Region Weather Forecast

    NASA Technical Reports Server (NTRS)

    Zhu, Jiang; Stevens, E.; Zavodsky, B. T.; Zhang, X.; Heinrichs, T.; Broderson, D.

    2014-01-01

    Data assimilation has been shown to be very useful in improving both global and regional numerical weather prediction. Alaska has a very sparse network of surface observation sites; on the other hand, it receives many more satellite overpasses than the lower 48 states. How to utilize satellite data to improve numerical prediction is one of the hot topics in the Alaska weather forecasting community. The Geographic Information Network of Alaska (GINA) at the University of Alaska is conducting a study on satellite data assimilation for the WRF model. AIRS/CrIS sounder profile data are assimilated into the initial conditions of the customized regional WRF model (GINA-WRF model). Normalized standard deviation, RMSE, and correlation statistics are applied to one case of 48-hour forecasts and one month of 24-hour forecasts in order to evaluate the improvement of the regional numerical model from data assimilation. The final goal of the research is to provide improved real-time short-term forecasts for Alaska regions.
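
    The verification statistics named above can be computed directly; a minimal sketch with made-up forecast/observation pairs (not the study's data):

```python
import math

def verify(forecast, observed):
    """Forecast-verification statistics: RMSE, correlation, and normalized
    standard deviation (model spread divided by observed spread)."""
    n = len(forecast)
    mf, mo = sum(forecast) / n, sum(observed) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    sf = math.sqrt(sum((f - mf) ** 2 for f in forecast) / n)
    so = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    corr = sum((f - mf) * (o - mo)
               for f, o in zip(forecast, observed)) / (n * sf * so)
    return rmse, corr, sf / so

# Illustrative values only -- four forecast/observation pairs.
rmse, corr, nsd = verify([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

    A normalized standard deviation near 1 means the model reproduces the observed variability; assimilation experiments compare these scores with and without satellite data.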

  8. Workflow standardization of a novel team care model to improve chronic care: a quasi-experimental study.

    PubMed

    Panattoni, Laura; Hurlimann, Lily; Wilson, Caroline; Durbin, Meg; Tai-Seale, Ming

    2017-04-19

    Team-based chronic care models have not been widely adopted in community settings, partly due to their varying effectiveness in randomized control trials, implementation challenges, and concerns about physician acceptance. The Palo Alto Medical Foundation designed and implemented "Champion," a novel team-based model that includes new standard work (e.g. proactive patient outreach, pre-visit schedule grooming, depression screening, care planning, health coaching) to support patients' self-management of hypertension and diabetes. We investigated whether Champion improved clinical outcomes. We conducted a quasi-experimental study comparing the Champion clinic-level intervention (n = 38 physicians) with a usual care clinic (n = 37 physicians) in Northern California. The primary outcomes, blood pressure and glycohemoglobin (A1c), were analyzed using a piecewise linear growth curve model for patients exposed to a Champion physician visit (n = 3156) or usual care visit (n = 8034) in the two years prior and one year post implementation. Secondary outcomes were provider experience, compared at baseline and 12 months in both the intervention and usual care clinics using multi-level ordered logistic modeling, and electronic health record based fidelity measures. Compared to usual care, in the first 6 months after a Champion physician visit, diabetes patients aged 18-75 experienced an additional -1.13 mm Hg (95% CI: -2.23 to -0.04) decline in diastolic blood pressure and -0.47 (95% CI: -0.61 to -0.33) decline in A1c. There were no additional improvements in blood pressure or A1c 6 to 12 months post physician visit. At 12 months, Champion physicians reported improved experience with managing chronic care patients in 6 of 7 survey items (p < 0.05), but compared to usual care, this difference was only statistically significant for one item (p < 0.05).
Fidelity to standard work was uneven; depression screening was the most commonly documented element (85% of patients), while care plans were the least (30.8% of patients). Champion standard work improved glycemic control over the first 6 months and physicians' experience with managing chronic care; changes in blood pressure were not clinically meaningful. Our results suggest the need to understand the relationship between the intervention, the contextual features of implementation, and fidelity to further improve chronic disease outcomes. This study was retrospectively registered with the ISRCTN Registry on March 15, 2017 (ISRCTN11341906).

  9. Using operations research to plan improvement of the transport of critically ill patients.

    PubMed

    Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter

    2013-01-01

    Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
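
    The Monte Carlo sensitivity analysis described above can be sketched by modeling a transport as a sum of component intervals and testing a policy constraint. The interval distributions below are illustrative assumptions, not the study's fitted ones.

```python
import random
import statistics

def simulate(cap_on_site_minutes=None, n=10000, seed=1):
    """Toy Monte Carlo of interfacility transport time as a sum of
    component intervals. Gamma distributions and their parameters are
    illustrative assumptions, not the study's data."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        dispatch = rng.gammavariate(2, 10)     # minutes to mobilize team
        travel = rng.gammavariate(3, 15)       # travel time
        on_site = rng.gammavariate(2, 20)      # time at hospitals
        if cap_on_site_minutes is not None:
            on_site = min(on_site, cap_on_site_minutes)
        totals.append(dispatch + travel + on_site)
    totals.sort()
    # Report the 75th percentile and standard deviation, the two
    # quality-improvement targets named in the abstract.
    return totals[int(0.75 * n)], statistics.pstdev(totals)

baseline = simulate()                           # (75th percentile, SD)
constrained = simulate(cap_on_site_minutes=30)  # policy: cap on-site time
```

    Capping the on-site component lowers both the 75th percentile and the spread of total transport time, which is the style of projection the study used to rank candidate interventions.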

  10. The Phase of Illness Paradigm: A Checklist Centric Model to Improve Patient Care in the Burn Intensive Care Unit

    DTIC Science & Technology

    2015-04-01

    Light Cycle  Sleep, 4-8 hrs  Increase mobility  Consider ear plugs, sleep aid Treatment  NA  Dexmedetomidine drip  Haloperidol IV Push... Haloperidol IV Push  Quetiepine PO/Enteral Notes: CV Monitoring Standard monitoring (Tele, SpO2, RR, NBP) Maximize knowledge Standard ICU

  11. Solar Measurement and Modeling | Grid Modernization | NREL

    Science.gov Websites

    [Website excerpt; only fragments were captured. NREL supports the Energy Department's SunShot Initiative by improving the tools and methods that measure solar radiation, and by developing and disseminating accurate solar measurement and modeling methods, best practices, and standards. Cited resources include work on direct normal irradiance measurements (Solar Energy, 2016) and radiometer calibration methods.]

  12. Parameter identification of piezoelectric hysteresis model based on improved artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Zhou, Kexin; Zhang, Yeming

    2018-04-01

    The widely used Bouc-Wen hysteresis model can accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is introduced, which helps balance local search ability and global exploitation capability, and the formula by which the scout bees search for food sources is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with those of the standard particle swarm optimization (PSO) method, showing that the IABC algorithm converges faster than the standard PSO method.
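
    For orientation, the baseline technique can be sketched as follows. This is the standard ABC (employed, onlooker, and scout phases), not the paper's improved variant, and the identification target is a simple exponential response rather than the Bouc-Wen model; both simplifications are assumptions for illustration.

```python
import math
import random

def abc_minimize(loss, bounds, n_food=10, limit=25, iters=150, seed=3):
    """Minimal standard artificial bee colony minimizer."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_pos():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    foods = [rand_pos() for _ in range(n_food)]
    fits = [loss(f) for f in foods]
    trials = [0] * n_food

    def try_move(i):
        # Perturb one coordinate toward/away from a random partner source.
        k = rng.randrange(n_food)
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], bounds[j][0]), bounds[j][1])
        f = loss(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                 # employed bees
            try_move(i)
        weights = [1.0 / (1.0 + f) for f in fits]
        total = sum(weights)
        for _ in range(n_food):                 # onlookers favor fit sources
            r = rng.uniform(0, total)
            i, acc = 0, weights[0]
            while acc < r and i < n_food - 1:
                i += 1
                acc += weights[i]
            try_move(i)
        for i in range(n_food):                 # scouts replace stale sources
            if trials[i] > limit:
                foods[i] = rand_pos()
                fits[i], trials[i] = loss(foods[i]), 0

    best = min(range(n_food), key=fits.__getitem__)
    return foods[best], fits[best]

# Identify (a, b) of a toy response y = a * exp(-b * t) from noise-free
# synthetic data -- a stand-in for the Bouc-Wen identification problem.
ts = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
sse = lambda p: sum((y - p[0] * math.exp(-p[1] * t)) ** 2
                    for t, y in zip(ts, ys))
params, err = abc_minimize(sse, [(0.0, 5.0), (0.0, 2.0)])
```

    The paper's improvements replace the uniform neighbor move with a move guided by the current best source, and alter the scouts' reinitialization formula; both slot into the loop above.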

  13. Fetal heart rate deceleration detection using a discrete cosine transform implementation of singular spectrum analysis.

    PubMed

    Warrick, P A; Precup, D; Hamilton, E F; Kearney, R E

    2007-01-01

    To develop a singular-spectrum analysis (SSA) based change-point detection algorithm applicable to fetal heart rate (FHR) monitoring to improve the detection of deceleration events. We present a method for decomposing a signal into near-orthogonal components via the discrete cosine transform (DCT) and apply this in a novel online manner to change-point detection based on SSA. The SSA technique forms models of the underlying signal that can be compared over time; models that are sufficiently different indicate signal change points. To adapt the algorithm to deceleration detection where many successive similar change events can occur, we modify the standard SSA algorithm to hold the reference model constant under such conditions, an approach that we term "base-hold SSA". The algorithm is applied to a database of 15 FHR tracings that have been preprocessed to locate candidate decelerations and is compared to the markings of an expert obstetrician. Of the 528 true and 1285 false decelerations presented to the algorithm, the base-hold approach improved on standard SSA, reducing the number of missed decelerations from 64 to 49 (21.9%) while maintaining the same reduction in false-positives (278). The standard SSA assumption that changes are infrequent does not apply to FHR analysis where decelerations can occur successively and in close proximity; our base-hold SSA modification improves detection of these types of event series.
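
    A simplified sketch of the idea: model each window by a few leading DCT coefficients and compare a frozen reference model against a sliding test window, loosely mimicking the "base-hold" strategy. The window width, number of coefficients, and signal values are assumptions, not the paper's settings.

```python
import math

def dct_model(window, k=3):
    """Model a window by its k leading DCT-II coefficients -- a simplified
    stand-in for the paper's SSA signal subspace (k is an assumption)."""
    n = len(window)
    return [sum(x * math.cos(math.pi * (i + 0.5) * f / n)
                for i, x in enumerate(window)) / n
            for f in range(k)]

def change_score(signal, width=16):
    """Base-hold idea: freeze the reference model at the start and compare
    it with a sliding test-window model; a large distance flags change."""
    ref = dct_model(signal[:width])
    scores = []
    for start in range(width, len(signal) - width + 1):
        test = dct_model(signal[start:start + width])
        scores.append(sum((a - b) ** 2 for a, b in zip(ref, test)))
    return scores

# Flat FHR-like baseline (140 bpm) followed by a sustained deceleration.
signal = [140.0] * 32 + [110.0] * 32
scores = change_score(signal)
```

    Holding the reference fixed keeps the score elevated through successive similar events instead of re-adapting to them, which is the motivation for the base-hold modification.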

  14. A hybrid health service accreditation program model incorporating mandated standards and continuous improvement: interview study of multiple stakeholders in Australian health care.

    PubMed

    Greenfield, David; Hinchcliff, Reece; Hogden, Anne; Mumford, Virginia; Debono, Deborah; Pawsey, Marjorie; Westbrook, Johanna; Braithwaite, Jeffrey

    2016-07-01

    The study aim was to investigate the understandings and concerns of stakeholders regarding the evolution of health service accreditation programs in Australia. Stakeholder representatives from programs in the primary, acute and aged care sectors participated in semi-structured interviews. Across 2011-12 there were 47 group and individual interviews involving 258 participants. Interviews lasted, on average, 1 h, and were digitally recorded and transcribed. Transcriptions were analysed using textual referencing software. Four significant issues were considered to have directed the evolution of accreditation programs: altering underlying program philosophies; shifting of program content focus and details; different surveying expectations and experiences and the influence of external contextual factors upon accreditation programs. Three accreditation program models were noted by participants: regulatory compliance; continuous quality improvement and a hybrid model, incorporating elements of these two. Respondents noted the compatibility or incommensurability of the first two models. Participation in a program was reportedly experienced as ranging on a survey continuum from "malicious compliance" to "performance audits" to "quality improvement journeys". Wider contextual factors, in particular, political and community expectations, and associated media reporting, were considered significant influences on the operation and evolution of programs. A hybrid accreditation model was noted to have evolved. The hybrid model promotes minimum standards and continuous quality improvement, through examining the structure and processes of organisations and the outcomes of care. The hybrid model appears to be directing organisational and professional attention to enhance their safety cultures. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Quality specifications in postgraduate medical e-learning: an integrative literature review leading to a postgraduate medical e-learning model.

    PubMed

    De Leeuw, R A; Westerman, Michiel; Nelson, E; Ket, J C F; Scheele, F

    2016-07-08

    E-learning is driving major shifts in medical education. Prioritizing learning theories and quality models improves the success of e-learning programs. Although many e-learning quality standards are available, few are focused on postgraduate medical education. We conducted an integrative review of the current postgraduate medical e-learning literature to identify quality specifications. The literature was thematically organized into a working model. Unique quality specifications (n = 72) were consolidated and re-organized into a six-domain model that we called the Postgraduate Medical E-learning Model (Postgraduate ME Model). This model was partially based on the ISO-19796 standard, and drew on cognitive load multimedia principles. The domains of the model are preparation, software design and system specifications, communication, content, assessment, and maintenance. This review clarified the current state of postgraduate medical e-learning standards and specifications. It also synthesized these specifications into a single working model. To validate our findings, the next-steps include testing the Postgraduate ME Model in controlled e-learning settings.

  16. Evaluating and improving the results of air quality models in Texas using TES, AIRS and other satellite data

    NASA Astrophysics Data System (ADS)

    Osterman, G.; Harper, C.; Estes, M.; Zhao, W.; Bowman, K.; Pierce, B.; Irion, B.; Kahn, B.; Al-Saadi, J.

    2008-05-01

    The Houston/Galveston/Brazoria (HGB) area of Texas has been classified as in moderate nonattainment of the Environmental Protection Agency (EPA) 8-hour standard for ground level ozone since April 30, 2004. The Texas Commission on Environmental Quality uses photochemical model results as one of its primary tools to develop strategies to bring the HGB area into attainment with the EPA standard. The state of Texas then includes the strategies into a revised version of its State Implementation Plan (SIP). We will discuss efforts that are underway or will soon begin to use satellite data to evaluate and improve the meteorological and photochemical modeling efforts at TCEQ. In particular we will show the use of GOES, AIRS and TES data to improve the ability to model, using the MM5 model, the meteorological conditions over Texas and the Gulf of Mexico. The meteorological fields are then used as one of the inputs to the CAMx air quality model used at TCEQ. We will discuss the use of chemical transport model results as initial and boundary conditions, which are a key uncertainty in the modeling of the air above Houston. We will also discuss the use of TES data to assist in the evaluation of preliminary model results generated by TCEQ for time periods in 2005. The satellite data will provide key information on ozone and carbon monoxide concentrations away from surface monitors in the troposphere. We will show how satellite data is becoming a key tool in the effort to improve air quality in the HGB area and one that can easily be applied in other regions of the country.

  17. Embedding Quantum Mechanics Into a Broader Noncontextual Theory: A Conciliatory Result

    NASA Astrophysics Data System (ADS)

    Garola, Claudio; Sozzo, Sandro

    2010-12-01

    The extended semantic realism (ESR) model embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here an improved version of this model and show that it predicts that, whenever idealized measurements are performed, a modified Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequality holds if one takes into account all individual systems that are prepared, standard quantum predictions hold if one considers only the individual systems that are detected, and a standard BCHSH inequality holds at a microscopic (purely theoretical) level. These results admit an intuitive explanation in terms of an unconventional kind of unfair sampling and constitute a first example of the unified perspective that can be attained by adopting the ESR model.

  18. Fuel thermal conductivity (FTHCON). Status report. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagrman, D. L.

    1979-02-01

    An improvement of the fuel thermal conductivity subcode is described which is part of the fuel rod behavior modeling task performed at EG&G Idaho, Inc. The original version was published in the Materials Properties (MATPRO) Handbook, Section A-2 (Fuel Thermal Conductivity). The improved version incorporates data which were not included in the previous work and omits some previously used data which are believed to come from cracked specimens. The models for the effect of porosity on thermal conductivity and for the electronic contribution to thermal conductivity have been completely revised in order to place these models on a more mechanistic basis. As a result of the modeling improvements, the standard error of the model with respect to its data base has been significantly reduced.

  19. APhA 2011 REMS white paper: Summary of the REMS stakeholder meeting on improving program design and implementation.

    PubMed

    American Pharmacists Association; Bough, Marcie

    2011-01-01

    To develop an improved risk evaluation and mitigation strategies (REMS) system for maximizing effective and safe patient medication use while minimizing burden on the health care delivery system. 34 stakeholders gathered October 6-7, 2010, in Arlington, VA, for the REMS Stakeholder Meeting, convened by the American Pharmacists Association (APhA). Participants included national health care provider associations, including representatives for physicians, physician assistants, nurses, nurse practitioners, and pharmacists, as well as representatives for patient advocates, drug distributors, community pharmacists (chain and independent), drug manufacturer associations (brand, generic, and biologic organizations), and health information technology, standards, and safety organizations. Staff from the Food and Drug Administration (FDA) Center for Drug Evaluation and Research participated as observers. The meeting built on themes from the APhA's 2009 REMS white paper. The current REMS environment presents many challenges for health care providers due to the growing number of REMS programs and the lack of standardization or similarities among various REMS programs. A standardized REMS process that focuses on maximizing patient safety and minimizing impacts on patient access and provider implementation could offset these challenges. A new process that includes effective provider interventions and standardized tools and systems for implementing REMS programs may improve patient care and overcome some of the communication issues providers and patients currently face. Metrics could be put in place to evaluate the effectiveness of REMS elements. By incorporating REMS program components into existing technologies and data infrastructures, achieving REMS implementation that is workflow neutral and minimizes administrative burden may be possible. An appropriate compensation model could ensure providers have adequate resources for patient care and REMS implementation. 
Overall, stakeholders should continue to work collaboratively with FDA and manufacturers to improve REMS program design and implementation issues. A workable REMS system will require effective patient interventions, standardized elements that limit barriers to implementation for both patients and providers, standardized yet flexible implementation strategies, use of existing technologies in practice settings, increased opportunities for provider input early in REMS design processes, improved communication strategies and awareness of program requirements, and viable provider compensation models needed to offset costs to implement and comply with REMS program requirements.

  20. SEER*Educate: Use of Abstracting Quality Index Scores to Monitor Improvement of All Employees.

    PubMed

    Potts, Mary S; Scott, Tim; Hafterson, Jennifer L

    2016-01-01

    The continuous improvement model at the Seattle-Puget Sound Cancer Surveillance System registry includes the incorporation of SEER*Educate into the training program for all staff and the analysis of assessment results using the Abstracting Quality Index (AQI). The AQI offers a comprehensive measure of overall performance in SEER*Educate, a Web-based application used to personalize learning and diagnostically pinpoint each staff member's place on the AQI continuum. The assessment results are tallied from 6 abstracting standards within 2 domains: incidence reporting and coding accuracy. More than 100 data items are aligned to 1 or more of the 6 standards to build an aggregated score that is placed on a continuum for continuous improvement. The AQI score accurately identifies those individuals who have a good understanding of how to apply the 6 abstracting standards to reliably generate high-quality abstracts.

  1. New Data Bases and Standards for Gravity Anomalies

    NASA Astrophysics Data System (ADS)

    Keller, G. R.; Hildenbrand, T. G.; Webring, M. W.; Hinze, W. J.; Ravat, D.; Li, X.

    2008-12-01

    Ever since the use of high-precision gravimeters emerged in the 1950s, gravity surveys have been an important tool for geologic studies. Recent developments that make geologically useful measurements from airborne and satellite platforms, the ready availability of the Global Positioning System that provides precise vertical and horizontal control, improved global databases, and the increased availability of processing and modeling software have accelerated the use of the gravity method. As a result, efforts are being made to improve the gravity databases publicly available to the geoscience community by expanding their holdings and increasing the accuracy and precision of the data in them. Specifically, the North American Gravity Database as well as the individual databases of Canada, Mexico, and the United States are being revised using new formats and standards to improve their coverage, standardization, and accuracy. An important part of this effort is revision of procedures and standards for calculating gravity anomalies, taking into account the enhanced computational power available, modern satellite-based positioning technology, improved terrain databases, and increased interest in more accurately defining the different components of gravity anomalies. The most striking revision is the use of one single internationally accepted reference ellipsoid for the horizontal and vertical datums of gravity stations as well as for the computation of the calculated value of theoretical gravity. The new standards hardly impact the interpretation of local anomalies, but do improve regional anomalies in that long wavelength artifacts are removed. Most importantly, such new standards can be consistently applied to gravity database compilations of nations, continents, and even the entire world. Although many types of gravity anomalies have been described, they fall into three main classes.
The primary class incorporates planetary effects, which are analytically prescribed, to derive the predicted or modeled gravity, and thus, anomalies of this class are termed planetary. The most primitive version of a gravity anomaly is simply the difference between the value of gravity predicted by the effect of the reference ellipsoid and the observed gravity. When the height of the gravity station increases, the ellipsoidal gravity anomaly decreases because of the increased distance of the measurement from the anomaly-producing masses. The two primary anomalies in geophysics, which are appropriately classified as planetary anomalies, are the Free-air and Bouguer gravity anomalies. They employ models that account for planetary effects on gravity, including the topography of the earth. A second class of anomaly, geological anomalies, includes the modeled gravity effect of known or assumed masses leading to the predicted gravity by using geological data such as densities and crustal thickness. The third class of anomaly, filtered anomalies, removes gravity effects of largely unknown sources that are empirically or analytically determined from the nature of the gravity anomalies themselves by filtering.
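
    For reference, the two planetary anomalies named above take the standard textbook forms (gamma(phi) is theoretical gravity on the reference ellipsoid at latitude phi, h the station height in meters, rho the reduction density in g/cm^3):

```latex
\Delta g_{\mathrm{FA}} = g_{\mathrm{obs}} - \gamma(\phi) + 0.3086\,h
    \quad [\mathrm{mGal}]
\Delta g_{\mathrm{B}} = \Delta g_{\mathrm{FA}} - 2\pi G \rho h
    \approx \Delta g_{\mathrm{FA}} - 0.0419\,\rho\,h \quad [\mathrm{mGal}]
```

    The revised standards largely change which surface (ellipsoid vs. geoid) h is referred to and which formula supplies gamma(phi); the reductions themselves keep this familiar structure.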

  2. Technology Innovations to Improve Biomass Cookstoves to Meet Tier 4 Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Still, Dean K; Hatfield, Micheal S

    Protecting public health has become a major motivation for investigating how improved cookstoves might function as a viable intervention. Currently, the great majority of cookstoves for sale in the developing world were not designed for this purpose; instead, success was based on criteria such as reduced fuel use, affordability, and ease of use. With DOE funding, Aprovecho Research Center spent three years creating stoves using an iterative development and modeling approach, resulting in four stoves that in lab tests met the World Health Organization (2014) intermediate rate vented targets for PM2.5 and for CO.

  3. Disability prevention and communication among workers, physicians, employers, and insurers--current models and opportunities for improvement.

    PubMed

    Pransky, Glenn; Shaw, William; Franche, Renee-Louise; Clarke, Andrew

    2004-06-03

    To review prevailing models of disability management and prevention with respect to communication, and to suggest alternative approaches. Review of selected articles. Effective disability management and return to work strategies have been the focus of an increasing number of intervention programmes and associated research studies, spanning a variety of worker populations and provider and business perspectives. Although primary and secondary disability prevention approaches have addressed theoretical basis, methods and costs, few identify communication as a key factor influencing disability outcomes. Four prevailing models of disability management and prevention (medical model, physical rehabilitation model, job-match model, and managed care model) are identified. The medical model emphasizes the physician's role to define functional limitations and job restrictions. In the physical rehabilitation model, rehabilitation professionals communicate the importance of exercise and muscle reconditioning for resuming normal work activities. The job-match model relies on the ability of employers to accurately communicate physical job requirements. The managed care model focuses on dissemination of acceptable standards for medical treatment and duration of work absence, and interventions by case managers when these standards are exceeded. Despite contrary evidence for many health impairments, these models share a common assumption that medical disability outcomes are highly predictable and unaffected by either individual or contextual factors. As a result, communication is often authoritative and unidirectional, with workers and employers in a passive role. Improvements in communication may be responsible for successes across a variety of new interventions. Communication-based interventions may further improve disability outcomes, reduce adversarial relationships, and prove cost-effective; however, controlled trials are needed.

  4. Clinical diagnostic model for sciatica developed in primary care patients with low back-related leg pain

    PubMed Central

    Konstantinou, Kika; Ogollah, Reuben; Hay, Elaine M.; Dunn, Kate M.

    2018-01-01

    Background Identification of sciatica may assist timely management but can be challenging in clinical practice. Diagnostic models to identify sciatica have mainly been developed in secondary care settings with conflicting reference standard selection. This study explores the challenges of reference standard selection and aims to ascertain which combination of clinical assessment items best identify sciatica in people seeking primary healthcare. Methods Data on 394 low back-related leg pain consulters were analysed. Potential sciatica indicators were seven clinical assessment items. Two reference standards were used: (i) high confidence sciatica clinical diagnosis; (ii) high confidence sciatica clinical diagnosis with confirmatory magnetic resonance imaging findings. Multivariable logistic regression models were produced for both reference standards. A tool predicting sciatica diagnosis in low back-related leg pain was derived. Latent class modelling explored the validity of the reference standard. Results Model (i) retained five items; model (ii) retained six items. Four items remained in both models: below knee pain, leg pain worse than back pain, positive neural tension tests and neurological deficit. Model (i) was well calibrated (p = 0.18), discrimination was area under the receiver operating characteristic curve (AUC) 0.95 (95% CI 0.93, 0.98). Model (ii) showed good discrimination (AUC 0.82; 0.78, 0.86) but poor calibration (p = 0.004). Bootstrapping revealed minimal overfitting in both models. Agreement between the two latent classes and clinical diagnosis groups defined by model (i) was substantial, and fair for model (ii). Conclusion Four clinical assessment items were common in both reference standard definitions of sciatica. A simple scoring tool for identifying sciatica was developed. These criteria could be used clinically and in research to improve accuracy of identification of this subgroup of back pain patients. PMID:29621243
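
    The modelling pipeline (multivariable logistic regression scored by AUC) can be sketched with plain gradient descent. The predictors, data-generating rule, and cohort below are synthetic stand-ins, not the study's assessment items or patients.

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression -- a stand-in for the
    paper's multivariable model. Returns [intercept, w1, ..., wk]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            g = 1.0 / (1.0 + math.exp(-z)) - yi      # dLoss/dz
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank formulation."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic cohort: four binary assessment items (hypothetical stand-ins
# for items like below-knee pain or neurological deficit); the outcome
# rule -- diagnosis if at least two items are positive -- is invented.
rng = random.Random(0)
X = [[rng.randint(0, 1) for _ in range(4)] for _ in range(200)]
y = [1 if sum(xi) >= 2 else 0 for xi in X]
w = fit_logistic(X, y)
scores = [w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)) for xi in X]
```

    Rounding the fitted weights to integer points is the usual route from such a model to a bedside scoring tool; calibration would then be checked against observed diagnosis rates, as the study did.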

  5. Progress toward a new beam measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Hoogerheide, Shannon Fogwell; BL2 Collaboration

    2017-01-01

    Neutron beta decay is the simplest example of nuclear beta decay. A precise value of the neutron lifetime is important for consistency tests of the Standard Model and Big Bang Nucleosynthesis models. The beam neutron lifetime method requires the absolute counting of the decay protons in a neutron beam of precisely known flux. Recent work has resulted in improvements in both the neutron and proton detection systems that should permit a significant reduction in systematic uncertainties. A new measurement of the neutron lifetime using the beam method is underway at the National Institute of Standards and Technology Center for Neutron Research. The projected uncertainty of this new measurement is 1 s. An overview of the measurement, its current status, and the technical improvements will be discussed.
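    The beam method's core relation is tau = N_trap / R_p: the lifetime equals the mean number of neutrons inside the proton trap divided by the rate of trapped decay protons. A toy numerical sketch of that arithmetic (every value here is invented, not NIST data):

```python
# Illustrative beam-method arithmetic; all numbers are assumed.
neutron_rate = 1.0e8      # neutrons/s passing through the trap
mean_velocity = 2200.0    # m/s, thermal-equivalent neutron velocity
trap_length = 0.1         # m, length of the proton trap

# Mean number of neutrons inside the trap at any instant:
n_trap = neutron_rate * trap_length / mean_velocity

# Suppose the efficiency-corrected proton count rate implied by an
# 880 s lifetime were observed; inverting the relation recovers tau.
proton_rate = n_trap / 880.0        # decay protons per second
tau = n_trap / proton_rate          # seconds
print(tau)
```

    In the real experiment the difficulty lies in the absolute calibration of both the neutron flux monitor and the proton detector, which is where the systematic improvements described above enter.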

  6. A participatory model for improving occupational health and safety: improving informal sector working conditions in Thailand.

    PubMed

    Manothum, Aniruth; Rukijkanpanich, Jittra; Thawesaengskulthai, Damrong; Thampitakkul, Boonwa; Chaikittiporn, Chalermchai; Arphorn, Sara

    2009-01-01

    The purpose of this study was to evaluate the implementation of an Occupational Health and Safety Management Model for informal sector workers in Thailand. The studied model was characterized by participatory approaches to preliminary assessment, observation of informal business practices, group discussion and participation, and the use of environmental measurements and samples. This model consisted of four processes: capacity building, risk analysis, problem solving, and monitoring and control. The participants consisted of four local labor groups from different regions, including wood carving, hand-weaving, artificial flower making, and batik processing workers. The results demonstrated that, as a result of applying the model, the working conditions of the informal sector workers had improved to meet necessary standards. This model encouraged the use of local networks, which led to cooperation within the groups to create appropriate technologies to solve their problems. The authors suggest that this model could effectively be applied elsewhere to improve informal sector working conditions on a broader scale.

  7. Numerical prediction of pollutant dispersion and transport in an atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Zeoli, Stéphanie; Bricteux, Laurent; Mech. Eng. Dpt. Team

    2014-11-01

    The ability to accurately predict concentration levels of air pollutants released from point sources is required in order to determine their environmental impact. A wall-modeled large-eddy simulation (WMLES) of the atmospheric boundary layer (ABL) is performed using the OpenFOAM-based solver SOWFA (Churchfield and Lee, NREL). It uses the Boussinesq approximation for buoyancy effects and takes into account Coriolis forces. A synthetic eddy method is proposed to properly model the turbulent inlet velocity boundary conditions. This method will be compared with the standard pressure-gradient forcing. WMLES are usually performed using a standard Smagorinsky model or its dynamic version. It is proposed here to investigate a subgrid-scale (SGS) model with a better spectral behavior. To this end, a regularized variational multiscale (RVM) model (Jeanmart and Winckelmans, 2007) is implemented together with a standard wall function in order to preserve the dynamics of the large scales within the Ekman layer. The influence of the improved SGS model on the wind simulation and scalar transport will be discussed based on turbulence diagnostics.

  8. Models for Effective Service Delivery in Special Education Programs

    ERIC Educational Resources Information Center

    Epler, Pam; Ross, Rorie

    2015-01-01

    Educators today are challenged with the task of designing curricula and standards for students of varying abilities. While technology and innovation steadily improve classroom learning, teachers and administrators continue to struggle in developing the best methodologies and practices for students with disabilities. "Models for Effective…

  9. Do Energy Efficiency Standards Improve Quality? Evidence from a Revealed Preference Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houde, Sebastien; Spurlock, C. Anna

    Minimum energy efficiency standards have occupied a central role in U.S. energy policy for more than three decades, but little is known about their welfare effects. In this paper, we employ a revealed preference approach to quantify the impact of past revisions in energy efficiency standards on product quality. The micro-foundation of our approach is a discrete choice model that allows us to compute a price-adjusted index of vertical quality. Focusing on the appliance market, we show that several standard revisions during the period 2001-2011 have led to an increase in quality. We also show that these standards have had a modest effect on prices, and in some cases they even led to decreases in prices. For revision events where overall quality increases and prices decrease, the consumer welfare effect of tightening the standards is unambiguously positive. Finally, we show that after controlling for the effect of improvement in energy efficiency, standards have induced an expansion of quality in the non-energy dimension. We discuss how imperfect competition can rationalize these results.
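    A price-adjusted quality index can be sketched with a plain logit model: mean utilities are recovered from log market shares, and adding back the price disutility alpha*p isolates vertical quality. A minimal illustration with invented shares, prices, and price coefficient (none of these figures come from the paper):

```python
import math

# Toy data: (market share, price in $); alpha is an assumed price coefficient.
alpha = 0.002
products = {
    "fridge_A": (0.50, 900.0),
    "fridge_B": (0.30, 1100.0),
    "fridge_C": (0.20, 1400.0),
}

# Logit inversion: mean utility delta_j = ln(s_j) - ln(s_base); base = fridge_A.
s_base = products["fridge_A"][0]
quality = {}
for name, (share, price) in products.items():
    delta = math.log(share) - math.log(s_base)
    quality[name] = delta + alpha * price  # price-adjusted vertical quality

# Ranking by the adjusted index rather than raw shares or prices:
print(sorted(quality, key=quality.get, reverse=True))
```

    The design point is that a cheap, popular product and an expensive niche product can score similarly once the price channel is netted out.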

  10. Standardized Automated CO2/H2O Flux Systems for Individual Research Groups and Flux Networks

    NASA Astrophysics Data System (ADS)

    Burba, George; Begashaw, Israel; Fratini, Gerardo; Griessbaum, Frank; Kathilankal, James; Xu, Liukang; Franz, Daniela; Joseph, Everette; Larmanou, Eric; Miller, Scott; Papale, Dario; Sabbatini, Simone; Sachs, Torsten; Sakai, Ricardo; McDermitt, Dayle

    2017-04-01

    In recent years, spatial and temporal flux data coverage improved significantly, and on multiple scales, from a single station to continental networks, due to standardization, automation, and management of data collection, and better handling of the extensive amounts of generated data. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are required to effectively and efficiently handle the entire process. Such tools are needed to maximize time dedicated to authoring publications and answering research questions, and to minimize time and expenses spent on data acquisition, processing, and quality control. Thus, these tools should produce standardized verifiable datasets and provide a way to cross-share the standardized data with external collaborators to leverage available funding and promote data analyses and publications. LI-COR gas analyzers are widely used in past and present flux networks such as AmeriFlux, ICOS, AsiaFlux, OzFlux, NEON, CarboEurope, and FluxNet-Canada. These analyzers have gone through several major improvements over the past 30 years. However, in 2016, a three-prong development was completed to create an automated flux system which can accept multiple sonic anemometer and datalogger models, compute final and complete fluxes on-site, merge final fluxes with supporting weather, soil, and radiation data, monitor station outputs and send automated alerts to researchers, and allow secure sharing and cross-sharing of the station and data access. Two types of these research systems were developed: open-path (LI-7500RS) and enclosed-path (LI-7200RS).
    Key developments included:
    • Improvement of gas analyzer performance
    • Standardization and automation of final flux calculations onsite, and in real time
    • Seamless integration with the latest site management and data sharing tools
    In terms of gas analyzer performance, the RS analyzers are based on the established LI-7500/A and LI-7200 models, and the improvements focused on increased stability in the presence of contamination, refined temperature control and compensation, and more accurate fast gas concentration measurements. In terms of flux calculations, improvements focused on automating the on-site flux calculations using EddyPro® software run by a weatherized, fully digital microcomputer, SmartFlux2. In terms of site management and data sharing, the development focused on web-based software, FluxSuite, which allows real-time station monitoring and data access by multiple users. The presentation will describe details of the key developments and will include results from field tests of the RS gas analyzer models in comparison with older models and control reference instruments.

  11. The Impact of the Developmental Training Model on Staff Development in Air Force Child Development Programs

    ERIC Educational Resources Information Center

    Bird, Candace Maria Edmonds

    2010-01-01

    In an effort to standardize training delivery and to individualize staff development based on observation and reflective practice, the Air Force implemented the Developmental Training Model (DTM) in its Child Development Programs. The goal of the Developmental Training Model is to enhance high quality programs through improvements in the training…

  12. Improved formalism for precision Higgs coupling fits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon

    Future e⁺e⁻ colliders offer the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e⁺e⁻ data, based on the effective field theory description of corrections to the Standard Model. Finally, we apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e⁺e⁻ colliders.

  13. Improved formalism for precision Higgs coupling fits

    DOE PAGES

    Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; ...

    2018-03-20

    Future e⁺e⁻ colliders offer the promise of model-independent determinations of the couplings of the Higgs boson. In this paper, we present an improved formalism for extracting Higgs boson couplings from e⁺e⁻ data, based on the effective field theory description of corrections to the Standard Model. Finally, we apply this formalism to give projections of Higgs coupling accuracies for stages of the International Linear Collider and for other proposed e⁺e⁻ colliders.

  14. C-reactive protein in the detection of post-stroke infections: systematic review and individual participant data analysis.

    PubMed

    Bustamante, Alejandro; Vilar-Bergua, Andrea; Guettier, Sophie; Sánchez-Poblet, Josep; García-Berrocoso, Teresa; Giralt, Dolors; Fluri, Felix; Topakian, Raffi; Worthmann, Hans; Hug, Andreas; Molnar, Tihamer; Waje-Andreassen, Ulrike; Katan, Mira; Smith, Craig J; Montaner, Joan

    2017-04-01

    We conducted a systematic review and individual participant data meta-analysis to explore the role of C-reactive protein (CRP) in early detection or prediction of post-stroke infections. CRP, an acute-phase reactant, binds to the phosphocholine expressed on the surface of dead or dying cells and some bacteria, thereby activating complement and promoting phagocytosis by macrophages. We searched PubMed up to May 2015 for studies measuring CRP in stroke and evaluating post-stroke infections. Individual participants' data were merged into a single database. CRP levels were standardized and divided into quartiles. Factors independently associated with post-stroke infections were determined by logistic regression analysis, and the additional predictive value of CRP was assessed by comparing areas under receiver operating characteristic curves and the integrated discrimination improvement index. Data from seven studies including 699 patients were obtained. Standardized CRP levels were higher in patients with post-stroke infections beyond 24 h. Standardized CRP levels in the fourth quartile were independently associated with infection in two different logistic regression models: model 1 [stroke severity and dysphagia, odds ratio = 9.70 (3.10-30.41)] and model 2 [age, sex, and stroke severity, odds ratio = 3.21 (1.93-5.32)]. Addition of CRP improved discrimination in both models [integrated discrimination improvement = 9.83% (0.89-18.77) and 5.31% (2.83-7.79), respectively], but accuracy was improved only for model 1 (area under the curve 0.806-0.874, p = 0.036). In this study, CRP was independently associated with development of post-stroke infections, with the optimal time window for measurement at 24-48 h. However, its additional predictive value over clinical information is moderate. Combination with other biomarkers in a panel seems a promising strategy for future studies. © 2017 International Society for Neurochemistry.
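    The integrated discrimination improvement reported here is simply the change in mean predicted risk among infected patients minus the change among uninfected ones. A small self-contained sketch with toy probabilities (not the study's data):

```python
# IDI = (mean p_new - mean p_old among events) - (same difference among non-events)
def mean(xs):
    return sum(xs) / len(xs)

def idi(p_old, p_new, outcomes):
    ev_old = [p for p, y in zip(p_old, outcomes) if y]
    ev_new = [p for p, y in zip(p_new, outcomes) if y]
    no_old = [p for p, y in zip(p_old, outcomes) if not y]
    no_new = [p for p, y in zip(p_new, outcomes) if not y]
    return (mean(ev_new) - mean(ev_old)) - (mean(no_new) - mean(no_old))

outcomes = [1, 1, 0, 0, 0]                     # 1 = post-stroke infection
p_without_crp = [0.40, 0.35, 0.30, 0.20, 0.15]
p_with_crp = [0.55, 0.50, 0.25, 0.18, 0.12]    # model augmented with CRP

print(idi(p_without_crp, p_with_crp, outcomes))  # positive -> CRP adds discrimination
```

    A positive IDI means the CRP-augmented model pushes predicted risk up for patients who develop infection and down for those who do not.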

  15. Observation of the rare Bs0 →µ+µ- decay from the combined analysis of CMS and LHCb data

    NASA Astrophysics Data System (ADS)

    Cms Collaboration; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Bansal, S.; Cornelis, T.; de Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Ochesanu, S.; Rougny, R.; van de Klundert, M.; van Haevermaet, H.; van Mechelen, P.; van Remortel, N.; van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D'Hondt, J.; Daci, N.; Heracleous, N.; Keaveney, J.; Lowette, S.; Maes, M.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; van Doninck, W.; van Mulders, P.; van Onsem, G. P.; Villella, I.; Caillol, C.; Clerbaux, B.; de Lentdecker, G.; Dobur, D.; Favart, L.; Gay, A. P. R.; Grebenyuk, A.; Léonard, A.; Mohammadi, A.; Perniè, L.; Randle-Conde, A.; Reis, T.; Seva, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Zenoni, F.; Adler, V.; Beernaert, K.; Benucci, L.; Cimmino, A.; Costantini, S.; Crucy, S.; Dildick, S.; Fagot, A.; Garcia, G.; McCartin, J.; Ocampo Rios, A. A.; Ryckbosch, D.; Salva Diblen, S.; Sigamani, M.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bruno, G.; Castello, R.; Caudron, A.; Ceard, L.; da Silveira, G. G.; Delaere, C.; Du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Nuttens, C.; Pagano, D.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Dos Reis Martins, T.; Mora Herrera, C.; Pol, M. 
E.; Rebello Teles, P.; Carvalho, W.; Chinellato, J.; Custódio, A.; da Costa, E. M.; de Jesus Damiao, D.; de Oliveira Martins, C.; Fonseca de Souza, S.; Malbouisson, H.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado da Silva, W. L.; Santaolalla, J.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Aleksandrov, A.; Genchev, V.; Hadjiiska, R.; Iaydjiev, P.; Marinov, A.; Piperov, S.; Rodozov, M.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Plestina, R.; Romeo, F.; Tao, J.; Wang, Z.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Zou, W.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Mekterovic, D.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Bodlak, M.; Finger, M.; Finger, M., Jr.; Assran, Y.; Ellithi Kamel, A.; Mahmoud, M. A.; Radi, A.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Eerola, P.; Fedi, G.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Kortelainen, M. J.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. 
L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Busson, P.; Charlot, C.; Dahms, T.; Dalchenko, M.; Dobrzynski, L.; Filipovic, N.; Florent, A.; Granier de Cassagnac, R.; Mastrolorenzo, L.; Miné, P.; Mironov, C.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Veelken, C.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Chabert, E. C.; Collard, C.; Conte, E.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Skovpen, K.; van Hove, P.; Gadrat, S.; Beauceron, S.; Beaupere, N.; Boudoul, G.; Bouvier, E.; Brochet, S.; Carrillo Montoya, C. A.; Chasserat, J.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Xiao, H.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Bontenackels, M.; Edelhoff, M.; Feld, L.; Heister, A.; Hindrichs, O.; Klein, K.; Ostapchuk, A.; Raupach, F.; Sammet, J.; Schael, S.; Schulte, J. F.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Erdmann, M.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Reithler, H.; Schmitz, S. A.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Weber, M.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Künsken, A.; Lingemann, J.; Nowack, A.; Nugent, I. M.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behr, J.; Behrens, U.; Bell, A. 
J.; Bethani, A.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Garay Garcia, J.; Geiser, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Lutz, B.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Ribeiro Cipriano, P. M.; Roland, B.; Ron, E.; Sahin, M. Ö.; Salfeld-Nebgen, J.; Saxena, P.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Vargas Trevino, A. D. R.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Erfle, J.; Garutti, E.; Goebel, K.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. S.; Junkes, A.; Kirschenmann, H.; Klanner, R.; Kogler, R.; Lange, J.; Lapsien, T.; Lenz, T.; Marchesini, I.; Ott, J.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Poehlsen, T.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Seidel, M.; Sola, V.; Stadie, H.; Steinbrück, G.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; de Boer, W.; Descroix, A.; Dierlamm, A.; Feindt, M.; Frensch, F.; Giffels, M.; Gilbert, A.; Hartmann, F.; Hauth, T.; Husemann, U.; Katkov, I.; Kornmayer, A.; Kuznetsova, E.; Lobelle Pardo, P.; Mozer, M. U.; Müller, T.; Müller, Th.; Nürnberg, A.; Quast, G.; Rabbertz, K.; Röcker, S.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weiler, T.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. 
A.; Kyriakis, A.; Loukas, D.; Markou, A.; Markou, C.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Stiliaris, E.; Aslanoglou, X.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Palinkas, J.; Szillasi, Z.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Sahoo, N.; Swain, S. K.; Beri, S. B.; Bhatnagar, V.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, M.; Kumar, R.; Mittal, M.; Nishu, N.; Singh, J. B.; Ashok Kumar; Arun Kumar; Ahuja, S.; Bhardwaj, A.; Choudhary, B. C.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, V.; Banerjee, S.; Bhattacharya, S.; Chatterjee, K.; Dutta, S.; Gomber, B.; Jain, Sa.; Jain, Sh.; Khurana, R.; Modak, A.; Mukherjee, S.; Roy, D.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Dutta, D.; Kailas, S.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Mohanty, G. B.; Parida, B.; Sudhakar, K.; Wickramage, N.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Chhibra, S. S.; Colaleo, A.; Creanza, D.; de Filippis, N.; de Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. 
R.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Primavera, F.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Travaglini, R.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gallo, E.; Gonzi, S.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Ferretti, R.; Ferro, F.; Lo Vetere, M.; Robutti, E.; Tosi, S.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Lucchini, M. T.; Malvezzi, S.; Manzoni, R. A.; Martelli, A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; di Guida, S.; Fabozzi, F.; Iorio, A. O. M.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Azzi, P.; Bacchetta, N.; Bisello, D.; Branca, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dorigo, T.; Dosselli, U.; Galanti, M.; Gasparini, F.; Gasparini, U.; Giubilato, P.; Gozzelino, A.; Kanishchev, K.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Gabusi, M.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vitulo, P.; Biasini, M.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Spiezia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Broccolo, G.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fiori, F.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Moon, C. S.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. 
T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Vernieri, C.; Barone, L.; Cavallari, F.; D'Imperio, G.; Del Re, D.; Diemoz, M.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Micheli, F.; Nourbakhsh, S.; Organtini, G.; Paramatti, R.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Soffi, L.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Casasso, S.; Costa, M.; Degano, A.; Demaria, N.; Finco, L.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Musich, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Potenza, A.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Umer, T.; Zanetti, A.; Chang, S.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Park, H.; Sakharov, A.; Son, D. C.; Kim, T. J.; Kim, J. Y.; Song, S.; Choi, S.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, Y.; Lee, B.; Lee, K. S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, J. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Juodagalvis, A.; Komaragiri, J. R.; Md Ali, M. A. B.; Casimiro Linares, E.; Castilla-Valdez, H.; de La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Reucroft, S.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. 
A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Cwiok, M.; Dominik, W.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Wolszczak, W.; Bargassa, P.; Beirão da Cruz E Silva, C.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Varela, J.; Vischia, P.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Vorobyev, An.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Semenov, S.; Spiridonov, A.; Stolin, V.; Vlasov, E.; Zhokin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Vinogradov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Ekmedzic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Battilana, C.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; de La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. 
C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro de Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Brochero Cifuentes, J. A.; Cabrillo, I. J.; Calderon, A.; Duarte Campderros, J.; Fernandez, M.; Gomez, G.; Graziano, A.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. F.; Bernet, C.; Bloch, P.; Bocci, A.; Bonato, A.; Bondu, O.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Colafranceschi, S.; D'Alfonso, M.; D'Enterria, D.; Dabrowski, A.; David, A.; de Guio, F.; de Roeck, A.; de Visscher, S.; di Marco, E.; Dobson, M.; Dordevic, M.; Dupont-Sagorin, N.; Elliott-Peisert, A.; Franzoni, G.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Hansen, M.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Magini, N.; Malgeri, L.; Mannelli, M.; Marrouche, J.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Orsini, L.; Pape, L.; Perez, E.; Perrozzi, L.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pimiä, M.; Piparo, D.; Plagge, M.; Racz, A.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Wollny, H.; Zeuner, W. 
D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Buchmann, M. A.; Casal, B.; Chanon, N.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Grab, C.; Hits, D.; Hoss, J.; Lustermann, W.; Mangano, B.; Marini, A. C.; Marionneau, M.; Martinez Ruiz Del Arbol, P.; Masciovecchio, M.; Meister, D.; Mohr, N.; Musella, P.; Nägeli, C.; Nessi-Tedaldi, F.; Pandolfi, F.; Pauss, F.; Peruzzi, M.; Quittnat, M.; Rebane, L.; Rossini, M.; Starodumov, A.; Takahashi, M.; Theofilatos, K.; Wallny, R.; Weber, H. A.; Amsler, C.; Canelli, M. F.; Chiochia, V.; de Cosa, A.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Millan Mejias, B.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Ronga, F. J.; Taroni, S.; Verzetti, M.; Yang, Y.; Cardaci, M.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Volpe, R.; Yu, S. S.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Grundler, U.; Hou, W.-S.; Kao, K. Y.; Liu, Y. F.; Lu, R.-S.; Majumder, D.; Petrakou, E.; Tzeng, Y. M.; Wilken, R.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, M.; Akin, I. V.; Bilin, B.; Bilmis, S.; Gamsizkan, H.; Isildak, B.; Karapinar, G.; Ocalan, K.; Sekmen, S.; Surat, U. E.; Yalvac, M.; Zeyrek, M.; Albayrak, E. A.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, T.; Cankocak, K.; Vardarlı, F. I.; Levchuk, L.; Sorokin, P.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Senkin, S.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. 
M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Womersley, W. J.; Worm, S. D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Burton, D.; Colling, D.; Cripps, N.; Dauncey, P.; Davies, G.; Della Negra, M.; Dunne, P.; Ferguson, W.; Fulcher, J.; Futyan, D.; Hall, G.; Iles, G.; Jarvis, M.; Karapostoli, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mathias, B.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Rogerson, S.; Rose, A.; Seez, C.; Sharp, P.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Dittmann, J.; Hatakeyama, K.; Kasmi, A.; Liu, H.; Scarborough, T.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Lawson, P.; Richardson, C.; Rohlf, J.; St. John, J.; Sulak, L.; Alimena, J.; Berry, E.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Dhingra, N.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Kukartsev, G.; Laird, E.; Landsberg, G.; Luk, M.; Narain, M.; Segala, M.; Sinthuprasith, T.; Speer, T.; Swanson, J.; Breedon, R.; Breto, G.; Calderon de La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Gardner, M.; Ko, W.; Lander, R.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Rakness, G.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Rikova, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Luthra, A.; Malberti, M.; Olmedo Negrete, M.; Shrinivas, A.; Sumowidagdo, S.; Wimpenny, S.; Branson, J. G.; Cerati, G. 
B.; Cittolin, S.; D'Agnolo, R. T.; Holzner, A.; Kelley, R.; Klein, D.; Kovalskyi, D.; Letts, J.; MacNeill, I.; Olivito, D.; Padhi, S.; Palmer, C.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tu, Y.; Vartak, A.; Welke, C.; Würthwein, F.; Yagil, A.; Barge, D.; Bradmiller-Feld, J.; Campagnari, C.; Danielson, T.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Incandela, J.; Justus, C.; McColl, N.; Richman, J.; Stuart, D.; To, W.; West, C.; Yoo, J.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Pierini, M.; Spiropulu, M.; Vlimant, J. R.; Wilkinson, R.; Xie, S.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Iiyama, Y.; Paulini, M.; Russ, J.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Krohn, M.; Luiggi Lopez, E.; Nauenberg, U.; Smith, J. G.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Skinnari, L.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gao, Y.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Kaadze, K.; Klima, B.; Kreis, B.; Kwan, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Martinez Outschoorn, V. I.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mishra, K.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Sharma, S.; Soha, A.; Spalding, W. 
J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitbeck, A.; Whitmore, J.; Yang, F.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carver, M.; Curry, D.; Das, S.; de Gruttola, M.; di Giovanni, G. P.; Field, R. D.; Fisher, M.; Furic, I. K.; Hugon, J.; Konigsberg, J.; Korytov, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Rinkevicius, A.; Shchutska, L.; Snowball, M.; Sperka, D.; Yelton, J.; Zakaria, M.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Adams, T.; Askew, A.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Hohlmann, M.; Kalakhety, H.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; Moon, D. H.; O'Brien, C.; Sandoval Gonzalez, I. D.; Silkworth, C.; Turner, P.; Varelas, N.; Bilki, B.; Clarida, W.; Dilsiz, K.; Haytmyradov, M.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Rahmat, R.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yi, K.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Swartz, M.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Kenny, R. P., III; Malek, M.; Murray, M.; Noonan, D.; Sanders, S.; Sekaric, J.; Stringer, R.; Wang, Q.; Wood, J. S.; Chakaberia, I.; Ivanov, A.; Khalil, S.; Makouski, M.; Maravin, Y.; Saini, L. K.; Skhirtladze, N.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Baden, A.; Belloni, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Kellogg, R. G.; Kolberg, T.; Lu, Y.; Mignerey, A. C.; Pedro, K.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Bauer, G.; Busza, W.; Cali, I. 
A.; Chan, M.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Klute, M.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Ma, T.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Stephans, G. S. F.; Sumorok, K.; Velicanu, D.; Veverka, J.; Wyslouch, B.; Yang, M.; Zanetti, M.; Zhukova, V.; Dahmes, B.; Gude, A.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Pastika, N.; Rusack, R.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Gonzalez Suarez, R.; Keller, J.; Knowlton, D.; Kravchenko, I.; Lazo-Flores, J.; Meier, F.; Ratnikov, F.; Snow, G. R.; Zvada, M.; Dolen, J.; Godshalk, A.; Iashvili, I.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Trocino, D.; Wang, R.-J.; Wood, D.; Zhang, J.; Hahn, K. A.; Kubik, A.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Sung, K.; Velasco, M.; Won, S.; Brinkerhoff, A.; Chan, K. M.; Drozdetskiy, A.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Lynch, S.; Marinelli, N.; Musienko, Y.; Pearson, T.; Planer, M.; Ruchti, R.; Smith, G.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wolfe, H.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Hunt, A.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zuranski, A.; Brownson, E.; Malik, S.; Mendez, H.; Ramirez Vargas, J. E.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; de Mattia, M.; Gutay, L.; Hu, Z.; Jha, M. K.; Jones, M.; Jung, K.; Kress, M.; Leonardo, N.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. 
C.; Shi, X.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Zablocki, J.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Michlin, B.; Padley, B. P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Khukhunaishvili, A.; Korjenevski, S.; Petrillo, G.; Vishnevskiy, D.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Kaplan, S.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Salur, S.; Schnetzer, S.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Castaneda Hernandez, A.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Krutelyov, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Rose, A.; Safonov, A.; Suarez, I.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kovitanggoon, K.; Kunori, S.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Wood, J.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Friis, E.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Lazaridis, C.; Levine, A.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ross, I.; Sarangi, T.; Savin, A.; Smith, W. 
H.; Taylor, D.; Vuosalo, C.; Bediaga, I.; de Miranda, J. M.; Ferreira Rodrigues, F.; Gomes, A.; Massafferri, A.; Dos Reis, A. C.; Rodrigues, A. B.; Amato, S.; Carvalho Akiba, K.; de Paula, L.; Francisco, O.; Gandelman, M.; Hicheur, A.; Lopes, J. H.; Martins Tostes, D.; Nasteva, I.; Otalora Goicochea, J. M.; Polycarpo, E.; Potterat, C.; Rangel, M. S.; Salustino Guimaraes, V.; Souza de Paula, B.; Vieira, D.; An, L.; Gao, Y.; Jing, F.; Li, Y.; Yang, Z.; Yuan, X.; Zhang, Y.; Zhong, L.; Beaucourt, L.; Chefdeville, M.; Decamp, D.; Déléage, N.; Ghez, Ph.; Lees, J.-P.; Marchand, J. F.; Minard, M.-N.; Pietrzyk, B.; Qian, W.; T'jampens, S.; Tisserand, V.; Tournefier, E.; Ajaltouni, Z.; Baalouch, M.; Cogneras, E.; Deschamps, O.; El Rifai, I.; Grabalosa Gándara, M.; Henrard, P.; Hoballah, M.; Lefèvre, R.; Maratas, J.; Monteil, S.; Niess, V.; Perret, P.; Adrover, C.; Akar, S.; Aslanides, E.; Cogan, J.; Kanso, W.; Le Gac, R.; Leroy, O.; Mancinelli, G.; Mordà, A.; Perrin-Terrin, M.; Serrano, J.; Tsaregorodtsev, A.; Amhis, Y.; Barsuk, S.; Borsato, M.; Kochebina, O.; Lefrançois, J.; Machefert, F.; Martín Sánchez, A.; Nicol, M.; Robbe, P.; Schune, M.-H.; Teklishyn, M.; Vallier, A.; Viaud, B.; Wormser, G.; Ben-Haim, E.; Charles, M.; Coquereau, S.; David, P.; Del Buono, L.; Henry, L.; Polci, F.; Albrecht, J.; Brambach, T.; Cauet, Ch.; Deckenhoff, M.; Eitschberger, U.; Ekelhof, R.; Gavardi, L.; Kruse, F.; Meier, F.; Niet, R.; Parkinson, C. 
J.; Schlupp, M.; Shires, A.; Spaan, B.; Swientek, S.; Wishahi, J.; Aquines Gutierrez, O.; Blouw, J.; Britsch, M.; Fontana, M.; Popov, D.; Schmelling, M.; Volyanskyy, D.; Zavertyaev, M.; Bachmann, S.; Bien, A.; Comerma-Montells, A.; de Cian, M.; Dordei, F.; Esen, S.; Färber, C.; Gersabeck, E.; Grillo, L.; Han, X.; Hansmann-Menzemer, S.; Jaeger, A.; Kolpin, M.; Kreplin, K.; Krocker, G.; Leverington, B.; Marks, J.; Meissner, M.; Neuner, M.; Nikodem, T.; Seyfert, P.; Stahl, M.; Stahl, S.; Uwer, U.; Vesterinen, M.; Wandernoth, S.; Wiedner, D.; Zhelezov, A.; McNulty, R.; Wallace, R.; Zhang, W. C.; Palano, A.; Carbone, A.; Falabella, A.; Galli, D.; Marconi, U.; Moggi, N.; Mussini, M.; Perazzini, S.; Vagnoni, V.; Valenti, G.; Zangoli, M.; Bonivento, W.; Cadeddu, S.; Cardini, A.; Cogoni, V.; Contu, A.; Lai, A.; Liu, B.; Manca, G.; Oldeman, R.; Saitta, B.; Vacca, C.; Andreotti, M.; Baldini, W.; Bozzi, C.; Calabrese, R.; Corvo, M.; Fiore, M.; Fiorini, M.; Luppi, E.; Pappalardo, L. L.; Shapoval, I.; Tellarini, G.; Tomassetti, L.; Vecchi, S.; Anderlini, L.; Bizzeti, A.; Frosini, M.; Graziani, G.; Passaleva, G.; Veltri, M.; Bencivenni, G.; Campana, P.; de Simone, P.; Lanfranchi, G.; Palutan, M.; Rama, M.; Sarti, A.; Sciascia, B.; Vazquez Gomez, R.; Cardinale, R.; Fontanelli, F.; Gambetta, S.; Patrignani, C.; Petrolini, A.; Pistone, A.; Calvi, M.; Cassina, L.; Gotti, C.; Khanji, B.; Kucharczyk, M.; Matteuzzi, C.; Fu, J.; Geraci, A.; Neri, N.; Palombo, F.; Amerio, S.; Collazuol, G.; Gallorini, S.; Gianelle, A.; Lucchesi, D.; Lupato, A.; Morandin, M.; Rotondo, M.; Sestini, L.; Simi, G.; Stroili, R.; Bedeschi, F.; Cenci, R.; Leo, S.; Marino, P.; Morello, M. J.; Punzi, G.; Stracka, S.; Walsh, J.; Carboni, G.; Furfaro, E.; Santovetti, E.; Satta, A.; Alves, A. 
A., Jr.; Auriemma, G.; Bocci, V.; Martellotti, G.; Penso, G.; Pinci, D.; Santacesaria, R.; Satriano, C.; Sciubba, A.; Dziurda, A.; Kucewicz, W.; Lesiak, T.; Rachwal, B.; Witek, M.; Firlej, M.; Fiutowski, T.; Idzik, M.; Morawski, P.; Moron, J.; Oblakowska-Mucha, A.; Swientek, K.; Szumlak, T.; Batozskaya, V.; Klimaszewski, K.; Kurek, K.; Szczekowski, M.; Ukleja, A.; Wislicki, W.; Cojocariu, L.; Giubega, L.; Grecu, A.; Maciuc, F.; Orlandea, M.; Popovici, B.; Stoica, S.; Straticiuc, M.; Alkhazov, G.; Bondar, N.; Dzyuba, A.; Maev, O.; Sagidova, N.; Shcheglov, Y.; Vorobyev, A.; Belogurov, S.; Belyaev, I.; Egorychev, V.; Golubkov, D.; Kvaratskheliya, T.; Machikhiliyan, I. V.; Polyakov, I.; Savrina, D.; Semennikov, A.; Zhokhov, A.; Berezhnoy, A.; Korolev, M.; Leflat, A.; Nikitin, N.; Filippov, S.; Gushchin, E.; Kravchuk, L.; Bondar, A.; Eidelman, S.; Krokovny, P.; Kudryavtsev, V.; Shekhtman, L.; Vorobyev, V.; Artamonov, A.; Belous, K.; Dzhelyadin, R.; Guz, Yu.; Novoselov, A.; Obraztsov, V.; Popov, A.; Romanovsky, V.; Shapkin, M.; Stenyakin, O.; Yushchenko, O.; Badalov, A.; Calvo Gomez, M.; Garrido, L.; Gascon, D.; Graciani Diaz, R.; Graugés, E.; Marin Benito, C.; Picatoste Olloqui, E.; Rives Molina, V.; Ruiz, H.; Vilasis-Cardona, X.; Adeva, B.; Alvarez Cartelle, P.; Dosil Suárez, A.; Fernandez Albor, V.; Gallas Torreira, A.; García Pardiñas, J.; Hernando Morata, J. A.; Plo Casasus, M.; Romero Vidal, A.; Saborido Silva, J. J.; Sanmartin Sedes, B.; Santamarina Rios, C.; Vazquez Regueiro, P.; Vázquez Sierra, C.; Vieites Diaz, M.; Alessio, F.; Archilli, F.; Barschel, C.; Benson, S.; Buytaert, J.; Campora Perez, D.; Castillo Garcia, L.; Cattaneo, M.; Charpentier, Ph.; Cid Vidal, X.; Clemencic, M.; Closier, J.; Coco, V.; Collins, P.; Corti, G.; Couturier, B.; D'Ambrosio, C.; Dettori, F.; di Canto, A.; Dijkstra, H.; Durante, P.; Ferro-Luzzi, M.; Forty, R.; Frank, M.; Frei, C.; Gaspar, C.; Gligorov, V. V.; Granado Cardoso, L. 
A.; Gys, T.; Haen, C.; He, J.; Head, T.; van Herwijnen, E.; Jacobsson, R.; Johnson, D.; Joram, C.; Jost, B.; Karacson, M.; Karbach, T. M.; Lacarrere, D.; Langhans, B.; Lindner, R.; Linn, C.; Lohn, S.; Mapelli, A.; Matev, R.; Mathe, Z.; Neubert, S.; Neufeld, N.; Otto, A.; Panman, J.; Pepe Altarelli, M.; Rauschmayr, N.; Rihl, M.; Roiser, S.; Ruf, T.; Schindler, H.; Schmidt, B.; Schopper, A.; Schwemmer, R.; Sridharan, S.; Stagni, F.; Subbiah, V. K.; Teubert, F.; Thomas, E.; Tonelli, D.; Trisovic, A.; Ubeda Garcia, M.; Wicht, J.; Wyllie, K.; Battista, V.; Bay, A.; Blanc, F.; Dorigo, M.; Dupertuis, F.; Fitzpatrick, C.; Gianì, S.; Haefeli, G.; Jaton, P.; Khurewathanakul, C.; Komarov, I.; La Thi, V. N.; Lopez-March, N.; Märki, R.; Martinelli, M.; Muster, B.; Nakada, T.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Prisciandaro, J.; Puig Navarro, A.; Rakotomiaramanana, B.; Rouvinet, J.; Schneider, O.; Soomro, F.; Szczypka, P.; Tobin, M.; Tourneur, S.; Tran, M. T.; Veneziano, G.; Xu, Z.; Anderson, J.; Bernet, R.; Bowen, E.; Bursche, A.; Chiapolini, N.; Chrzaszcz, M.; Elsasser, Ch.; Graverini, E.; Lionetto, F.; Lowdon, P.; Müller, K.; Serra, N.; Steinkamp, O.; Storaci, B.; Straumann, U.; Tresch, M.; Vollhardt, A.; Aaij, R.; Ali, S.; van Beuzekom, M.; David, P. N. Y.; de Bruyn, K.; Farinelli, C.; Heijne, V.; Hulsbergen, W.; Jans, E.; Koppenburg, P.; Kozlinskiy, A.; van Leerdam, J.; Merk, M.; Oggero, S.; Pellegrino, A.; Snoek, H.; van Tilburg, J.; Tsopelas, P.; Tuning, N.; de Vries, J. A.; Ketel, T.; Koopman, R. F.; Lambert, R. W.; Martinez Santos, D.; Raven, G.; Schiller, M.; Syropoulos, V.; Tolk, S.; Dovbnya, A.; Kandybei, S.; Raniuk, I.; Okhrimenko, O.; Pugatch, V.; Bifani, S.; Farley, N.; Griffith, P.; Kenyon, I. R.; Lazzeroni, C.; Mazurov, A.; McCarthy, J.; Pescatore, L.; Watson, N. K.; Williams, M. P.; Adinolfi, M.; Benton, J.; Brook, N. H.; Cook, A.; Coombes, M.; Dalseno, J.; Hampson, T.; Harnew, S. T.; Naik, P.; Price, E.; Prouve, C.; Rademacker, J. 
H.; Richards, S.; Saunders, D. M.; Skidmore, N.; Souza, D.; Velthuis, J. J.; Voong, D.; Barter, W.; Bettler, M.-O.; Cliff, H. V.; Evans, H.-M.; Garra Tico, J.; Gibson, V.; Gregson, S.; Haines, S. C.; Jones, C. R.; Sirendi, M.; Smith, J.; Ward, D. R.; Wotton, S. A.; Wright, S.; Back, J. J.; Blake, T.; Craik, D. C.; Crocombe, A. C.; Dossett, D.; Gershon, T.; Kreps, M.; Langenbruch, C.; Latham, T.; O'Hanlon, D. P.; Pilař, T.; Poluektov, A.; Reid, M. M.; Silva Coutinho, R.; Wallace, C.; Whitehead, M.; Easo, S.; Nandakumar, R.; Papanestis, A.; Ricciardi, S.; Wilson, F. F.; Carson, L.; Clarke, P. E. L.; Cowan, G. A.; Eisenhardt, S.; Ferguson, D.; Lambert, D.; Luo, H.; Morris, A.-B.; Muheim, F.; Needham, M.; Playfer, S.; Alexander, M.; Beddow, J.; Dean, C.-T.; Eklund, L.; Hynds, D.; Karodia, S.; Longstaff, I.; Ogilvy, S.; Pappagallo, M.; Sail, P.; Skillicorn, I.; Soler, F. J. P.; Spradlin, P.; Affolder, A.; Bowcock, T. J. V.; Brown, H.; Casse, G.; Donleavy, S.; Dreimanis, K.; Farry, S.; Fay, R.; Hennessy, K.; Hutchcroft, D.; Liles, M.; McSkelly, B.; Patel, G. D.; Price, J. D.; Pritchard, A.; Rinnert, K.; Shears, T.; Smith, N. A.; Ciezarek, G.; Cunliffe, S.; Currie, R.; Egede, U.; Fol, P.; Golutvin, A.; Hall, S.; McCann, M.; Owen, P.; Patel, M.; Petridis, K.; Redi, F.; Sepp, I.; Smith, E.; Sutcliffe, W.; Websdale, D.; Appleby, R. B.; Barlow, R. J.; Bird, T.; Bjørnstad, P. M.; Borghi, S.; Brett, D.; Brodzicka, J.; Capriotti, L.; Chen, S.; de Capua, S.; Dujany, G.; Gersabeck, M.; Harrison, J.; Hombach, C.; Klaver, S.; Lafferty, G.; McNab, A.; Parkes, C.; Pearce, A.; Reichert, S.; Rodrigues, E.; Rodriguez Perez, P.; Smith, M.; Cheung, S.-F.; Derkach, D.; Evans, T.; Gauld, R.; Greening, E.; Harnew, N.; Hill, D.; Hunt, P.; Hussain, N.; Jalocha, J.; John, M.; Lupton, O.; Malde, S.; Smith, E.; Stevenson, S.; Thomas, C.; Topp-Joergensen, S.; Torr, N.; Wilkinson, G.; Counts, I.; Ilten, P.; Williams, M.; Andreassen, R.; Davis, A.; de Silva, W.; Meadows, B.; Sokoloff, M. 
D.; Sun, L.; Todd, J.; Andrews, J. E.; Hamilton, B.; Jawahery, A.; Wimberley, J.; Artuso, M.; Blusk, S.; Borgia, A.; Britton, T.; Ely, S.; Gandini, P.; Garofoli, J.; Gui, B.; Hadjivasiliou, C.; Jurik, N.; Kelsey, M.; Mountain, R.; Pal, B. K.; Skwarnicki, T.; Stone, S.; Wang, J.; Xing, Z.; Zhang, L.; Baesso, C.; Cruz Torres, M.; Göbel, C.; Molina Rodriguez, J.; Xie, Y.; Milanes, D. A.; Grünberg, O.; Heß, M.; Voß, C.; Waldi, R.; Likhomanenko, T.; Malinin, A.; Shevchenko, V.; Ustyuzhanin, A.; Martinez Vidal, F.; Oyanguren, A.; Ruiz Valls, P.; Sanchez Mayordomo, C.; Onderwater, C. J. G.; Wilschut, H. W.; Pesen, E.

    2015-06-01

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (Bs0) and the B0 meson decaying into two oppositely charged muons (μ+ and μ-) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the Bs0 → μ+μ- and B0 → μ+μ- decays are very rare, with about four of the former occurring for every billion Bs0 mesons produced, and one of the latter occurring for every ten billion B0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton-proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the Bs0 → μ+μ- decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B0 → μ+μ- decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton-proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of Bs0 and B0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.
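    The quoted rates and significances can be made concrete with a little arithmetic. A minimal sketch, using the branching fractions as quoted in the abstract and purely illustrative production counts (the conversion from standard deviations to a p-value is the standard one-sided Gaussian tail, not anything specific to this analysis):

    ```python
    import math

    # Branching fractions as quoted: ~4 Bs0 -> mu+mu- decays per billion Bs0
    # mesons, ~1 B0 -> mu+mu- decay per ten billion B0 mesons.
    BR_BS0 = 4e-9
    BR_B0 = 1e-10

    # Hypothetical production counts (illustrative only, not the experiments' yields).
    n_bs0 = 1e12
    n_b0 = 1e12

    expected_bs0 = n_bs0 * BR_BS0  # 4000 decays, before any selection efficiency
    expected_b0 = n_b0 * BR_B0     # 100 decays

    # A significance of z standard deviations corresponds to a one-sided
    # Gaussian tail probability p = erfc(z / sqrt(2)) / 2.
    def p_value(z):
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    print(expected_bs0, expected_b0)
    print(p_value(6.0))  # ~1e-9: the "six standard deviations" observation
    print(p_value(3.0))  # ~1.3e-3: the "three standard deviations" evidence
    ```

    The large gap between the two p-values is why particle physics distinguishes "evidence" (three standard deviations) from "observation" (five or more).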

  16. Single Top Production at Next-to-Leading Order in the Standard Model Effective Field Theory.

    PubMed

    Zhang, Cen

    2016-04-22

    Single top production processes at hadron colliders provide information on the relation between the top quark and the electroweak sector of the standard model. We compute the next-to-leading order QCD corrections to the three main production channels: t-channel, s-channel, and tW associated production, in the standard model including operators up to dimension six. The calculation can be matched to parton shower programs and can therefore be directly used in experimental analyses. The QCD corrections are found to significantly impact the extraction of the current limits on the operators, because of both the improved accuracy and the better precision of the theoretical predictions. In addition, the distributions of some of the key discriminating observables are modified in a nontrivial way, which could change the interpretation of measurements in terms of UV complete models.

  17. A Pilot for Improving Depression Care on College Campuses: Results of the College Breakthrough Series-Depression (CBS-D) Project

    ERIC Educational Resources Information Center

    Chung, Henry; Klein, Michael C.; Silverman, Daniel; Corson-Rikert, Janet; Davidson, Eleanor; Ellis, Patricia; Kasnakian, Caroline

    2011-01-01

    Objective: To implement a pilot quality improvement project for depression identification and treatment in college health. Participants: Eight college health center teams composed primarily of primary care and counseling service directors and clinicians. Methods: Chronic (Collaborative) Care Model (CCM) used with standardized screening to…

  18. Quality improvement prototype: Johnson Space Center, National Aeronautics and Space Administration

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Johnson Space Center was recognized by the Office of Management and Budget as a model for its high standards of quality. Included are an executive summary of the center's activities, an organizational overview, techniques for improving quality, the status of the quality effort, and a listing of key personnel.

  19. 76 FR 46814 - Medicare Program; Evaluation Criteria and Standards for Quality Improvement Program Contracts...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-03

    ... Work) The Patient Safety initiatives are designed to help achieve the goals of improving individual... coordinating center, the Center for Medicare and Medicaid Innovation, and the Agency for Healthcare Research... outreach activities required to complete all Aims of the 10th SOW successfully. The CRISP Model is designed...

  20. 75 FR 20111 - Energy Conservation Program: Energy Conservation Standards for Residential Water Heaters, Direct...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... the ``three heating products'') must be designed to ``achieve the maximum improvement in energy... and CO 2 savings are performed with different computer models, leading to different time frames for... of EPCA sets forth a variety of provisions designed to improve energy efficiency. Part A\\1\\ of Title...

  1. 75 FR 33565 - Notice of Intent To Prepare an Environmental Impact Statement for New Medium- and Heavy-Duty Fuel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-14

    ...- and Heavy-Duty Fuel Efficiency Improvement Program AGENCY: National Highway Traffic Safety... efficiency improvement program for commercial medium- and heavy-duty on-highway vehicles and work trucks... efficiency standards starting with model year (MY) 2016 commercial medium- and heavy-duty on-highway vehicles...

  2. Meeting report from the first meetings of the Computational Modeling in Biology Network (COMBINE)

    PubMed Central

    Le Novère, Nicolas; Hucka, Michael; Anwar, Nadia; Bader, Gary D; Demir, Emek; Moodie, Stuart; Sorokin, Anatoly

    2011-01-01

    The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of the various community standards and formats in computational systems biology and related fields. This report summarizes the activities pursued at the first annual COMBINE meeting, held in Edinburgh on October 6-9, 2010, and the first HARMONY hackathon, held in New York on April 18-22, 2011. The first of those meetings hosted 81 attendees. Discussions covered the official COMBINE standards (BioPAX, SBGN and SBML), as well as emerging efforts and interoperability between different formats. The second meeting, oriented towards software developers, welcomed 59 participants and witnessed many technical discussions, development of improved standards support in community software systems, and conversion between the standards. Both meetings were resounding successes and showed that the field is now mature enough to develop representation formats and related standards in a coordinated manner. PMID:22180826

  3. Meeting report from the first meetings of the Computational Modeling in Biology Network (COMBINE).

    PubMed

    Le Novère, Nicolas; Hucka, Michael; Anwar, Nadia; Bader, Gary D; Demir, Emek; Moodie, Stuart; Sorokin, Anatoly

    2011-11-30

    The Computational Modeling in Biology Network (COMBINE) is an initiative to coordinate the development of the various community standards and formats in computational systems biology and related fields. This report summarizes the activities pursued at the first annual COMBINE meeting, held in Edinburgh on October 6-9, 2010, and the first HARMONY hackathon, held in New York on April 18-22, 2011. The first of those meetings hosted 81 attendees. Discussions covered the official COMBINE standards (BioPAX, SBGN and SBML), as well as emerging efforts and interoperability between different formats. The second meeting, oriented towards software developers, welcomed 59 participants and witnessed many technical discussions, development of improved standards support in community software systems, and conversion between the standards. Both meetings were resounding successes and showed that the field is now mature enough to develop representation formats and related standards in a coordinated manner.

  4. The standardized live patient and mechanical patient models--their roles in trauma teaching.

    PubMed

    Ali, Jameel; Al Ahmadi, Khalid; Williams, Jack Ivan; Cherry, Robert Allen

    2009-01-01

    We have previously demonstrated improved medical student performance using standardized live patient models in the Trauma Evaluation and Management (TEAM) program. The trauma manikin has also been offered as an option for teaching trauma skills in this program. In this study, we compare performance using both models. Final year medical students were randomly assigned to three groups: group I (n = 22) with neither model, group II (n = 24) with the patient model, and group III (n = 24) with the mechanical model, using the same clinical scenario. All students completed pre-TEAM and post-TEAM multiple choice question (MCQ) exams and an evaluation questionnaire scoring five items on a scale of 1 to 5, with 5 being the highest. The items were: objectives met, knowledge improved, skills improved, overall satisfaction, and course should be mandatory. Students (groups II and III) then switched models, rating preferences in six categories: more challenging, more interesting, more dynamic, more enjoyable learning, more realistic, and overall better model. Scores were analyzed by ANOVA, with p < 0.05 considered statistically significant. All groups had similar scores (mean % +/- SD) in the pretest (group I - 50.8 +/- 7.4, group II - 51.3 +/- 6.4, group III - 51.1 +/- 6.6). All groups improved their post-test scores, but groups II and III scored higher than group I, with no difference in scores between groups II and III (group I - 77.5 +/- 3.8, group II - 84.8 +/- 3.6, group III - 86.3 +/- 3.2). The percentages of students scoring 5 in the questionnaire are as follows: objectives met - 100% for all groups; knowledge improved: group I - 91%, group II - 96%, group III - 92%; skills improved: group I - 9%, group II - 83%, group III - 96%; overall satisfaction: group I - 91%, group II - 92%, group III - 92%; should be mandatory: group I - 32%, group II - 96%, group III - 100%. Student preferences (48 students) were as follows: the mechanical model was more challenging (44 of 48), more interesting (40 of 48), more dynamic (46 of 48), more enjoyable (48 of 48), more realistic (32 of 48), and the better overall model (42 of 48). Using the TEAM program, we have demonstrated that improvements in knowledge and skills are equally enhanced by using mechanical or patient models in trauma teaching. However, students overwhelmingly preferred the mechanical model.

  5. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning system (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models, only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first order Gauss Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. 
A reference orbit is determined using GEODYNII as a batch least-squares processor and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files along with a control statement file and a satellite identification and mass file are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
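    The stochastic models named above are standard textbook processes. As an illustration only (not the GEODYNII filter/smoother code itself), a first-order Gauss-Markov state and a random-walk state can be propagated through a Kalman filter time update like this:

```python
import numpy as np

def time_update(x, P, dt, tau, q_gm, q_rw):
    """Kalman filter time update for a 2-state model:
    x[0]: first-order Gauss-Markov process (e.g. solar-pressure scale factor),
    x[1]: random walk (e.g. tropospheric refraction correction).
    tau: GM correlation time; q_gm, q_rw: driving-noise spectral densities."""
    phi = np.exp(-dt / tau)
    Phi = np.diag([phi, 1.0])                           # state transition matrix
    Q = np.diag([q_gm * tau / 2.0 * (1.0 - phi**2),     # GM process noise over dt
                 q_rw * dt])                            # RW variance grows with dt
    return Phi @ x, Phi @ P @ Phi.T + Q
```

    Over long propagation intervals the Gauss-Markov state decays toward zero with bounded steady-state variance q_gm*tau/2, while the random-walk variance grows without bound, which is why the slowly wandering tropospheric correction is given the random-walk form.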

  6. Cardiac arrest risk standardization using administrative data compared to registry data

    PubMed Central

    Gaieski, David F.; Donnino, Michael W.; Nelson, Joshua I. M.; Mutter, Eric L.; Carr, Brendan G.; Abella, Benjamin S.; Wiebe, Douglas J.

    2017-01-01

    Background Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Methods and results Two risk standardization logistic regression models were developed using 2453 patients treated from 2000–2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the “gold standard” with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Conclusions Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA. PMID:28783754
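    The c-statistics reported above are concordance probabilities. A minimal sketch of how such a value is computed (illustrative only, not the authors' model code):

```python
import numpy as np

def c_statistic(y, risk):
    """Concordance (c-statistic): probability that a randomly chosen patient
    who died (y=1) was assigned a higher predicted risk than a randomly chosen
    survivor (y=0). Tied predictions count as half-concordant."""
    pos, neg = risk[y == 1], risk[y == 0]
    diff = pos[:, None] - neg[None, :]          # every case/control pair
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
```

    A value of 0.5 is chance-level discrimination and 1.0 is perfect separation; the comparison of 0.891 (administrative) versus 0.907 (registry) above asks whether two such estimates differ significantly.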

  7. Assimilation of ground and satellite snow observations in a distributed hydrologic model to improve water supply forecasts in the Upper Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Micheletty, P. D.; Day, G. N.; Quebbeman, J.; Carney, S.; Park, G. H.

    2016-12-01

    The Upper Colorado River Basin above Lake Powell is a major source of water supply for 25 million people and provides irrigation water for 3.5 million acres. Approximately 85% of the annual runoff is produced from snowmelt. Water supply forecasts of the April-July runoff produced by the National Weather Service (NWS) Colorado Basin River Forecast Center (CBRFC), are critical to basin water management. This project leverages advanced distributed models, datasets, and snow data assimilation techniques to improve operational water supply forecasts made by CBRFC in the Upper Colorado River Basin. The current work will specifically focus on improving water supply forecasts through the implementation of a snow data assimilation process coupled with the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM). Three types of observations will be used in the snow data assimilation system: satellite Snow Covered Area (MODSCAG), satellite Dust Radiative Forcing in Snow (MODDRFS), and SNOTEL Snow Water Equivalent (SWE). SNOTEL SWE provides the main source of high elevation snowpack information during the snow season; however, these point measurement sites are carefully selected to provide consistent indices of snowpack, and may not be representative of the surrounding watershed. We address this problem by transforming the SWE observations to standardized deviates and interpolating the standardized deviates using a spatial regression model. The interpolation process will also take advantage of the MODIS Snow Covered Area and Grainsize (MODSCAG) product to inform the model on the spatial distribution of snow. The interpolated standardized deviates are back-transformed and used in an Ensemble Kalman Filter (EnKF) to update the model simulated SWE. The MODIS Dust Radiative Forcing in Snow (MODDRFS) product will be used more directly through temporary adjustments to model snowmelt parameters, which should improve melt estimates in areas affected by dust on snow.
In order to assess the value of different data sources, reforecasts will be produced for a historical period and performance measures will be computed to assess forecast skill. The existing CBRFC Ensemble Streamflow Prediction (ESP) reforecasts will provide a baseline for comparison to determine the added-value of the data assimilation process.
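    As a toy illustration of the EnKF update described above (the operational system's state vector, observation operator, and error model are of course far richer), a perturbed-observation EnKF step might look like:

```python
import numpy as np

def enkf_update(X, y_obs, H, r_var, rng):
    """Perturbed-observation EnKF update.
    X: (n_state, n_ens) ensemble of model SWE states;
    H: (n_obs, n_state) observation operator; r_var: obs error variance."""
    n_obs, n_ens = H.shape[0], X.shape[1]
    Y = H @ X                                     # ensemble in observation space
    Yp = y_obs[:, None] + rng.normal(0.0, np.sqrt(r_var), (n_obs, n_ens))
    Xa = X - X.mean(axis=1, keepdims=True)        # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)        # obs-space anomalies
    Pxy = Xa @ Ya.T / (n_ens - 1)                 # cross covariance
    Pyy = Ya @ Ya.T / (n_ens - 1) + r_var * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    return X + K @ (Yp - Y)                       # analysis ensemble
```

    The update pulls the ensemble mean toward the observation in proportion to the ratio of forecast spread to observation error, and shrinks the ensemble spread accordingly.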

  8. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
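    As background, the partial least squares regression underlying the method can be sketched in a few lines: a generic NIPALS PLS1, with a hypothetical spectral matrix X and reference values y (the authors' normalization model adds the multi-line, plasma-parameter characterization on top of this):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1: returns regression coefficients plus centering terms."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                    # weight vector
        t = Xc @ w                                # scores
        tt = t @ t
        p = Xc.T @ t / tt                         # X loadings
        qk = yc @ t / tt                          # y loading
        Xc, yc = Xc - np.outer(t, p), yc - qk * t # deflate
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)           # coefficients in original X
    return B, xm, ym

def pls1_predict(X, B, xm, ym):
    return (X - xm) @ B + ym
```

    With as many components as the rank of X, PLS1 reproduces the least-squares fit; in practice fewer components are kept, which is what gives the method its robustness to correlated spectral channels.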

  9. Standard solar model. II - g-modes

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.; Demarque, P.; Pinsonneault, M. H.; Kim, Y.-C.

    1992-01-01

    The paper presents the g-mode oscillation for a set of modern solar models. Each solar model is based on a single modification or improvement to the physics of a reference solar model. Improvements were made to the nuclear reaction rates, the equation of state, the opacities, and the treatment of the atmosphere. The error in the predicted g-mode periods associated with the uncertainties in the model physics is predicted and the specific sensitivities of the g-mode periods and their period spacings to the different model structures are described. In addition, these models are compared to a sample of published observations. A remarkably good agreement is found between the 'best' solar model and the observations of Hill and Gu (1990).

  10. A Final Approach Trajectory Model for Current Operations

    NASA Technical Reports Server (NTRS)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
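    The two ideas being compared can be reduced to a one-dimensional sketch, polynomial extrapolation of the track history versus dead reckoning from the last observed velocity (a simplification for illustration, not the paper's implementation):

```python
import numpy as np

def poly_predict(t_hist, x_hist, t_future, deg=2):
    """Fit a low-order polynomial to the observed track and extrapolate."""
    coeffs = np.polyfit(t_hist, x_hist, deg)
    return np.polyval(coeffs, t_future)

def dead_reckon(t_hist, x_hist, t_future):
    """Constant-velocity extrapolation from the last two track points."""
    v = (x_hist[-1] - x_hist[-2]) / (t_hist[-1] - t_hist[-2])
    return x_hist[-1] + v * (t_future - t_hist[-1])
```

    For a decelerating arrival, the polynomial captures the speed trend while dead reckoning overshoots, consistent with the improved accuracy at longer look-ahead times reported above.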

  11. Dynamical System Analysis of Reynolds Stress Closure Equations

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1997-01-01

    In this paper, we establish the causality between the model coefficients in the standard pressure-strain correlation model and the predicted equilibrium states for homogeneous turbulence. We accomplish this by performing a comprehensive fixed point analysis of the modeled Reynolds stress and dissipation rate equations. The results from this analysis will be very useful for developing improved pressure-strain correlation models to yield observed equilibrium behavior.
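    The fixed-point machinery itself is generic: equilibria are roots of the modeled evolution equations, and their stability follows from the eigenvalues of the Jacobian there. A toy sketch on a scalar system (not the actual Reynolds stress/dissipation equations):

```python
import numpy as np

def find_fixed_point(f, jac, x0, tol=1e-12, max_iter=100):
    """Newton iteration for an equilibrium f(x*) = 0 of dx/dt = f(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        dx = np.linalg.solve(np.atleast_2d(jac(x)), -np.atleast_1d(f(x)))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

def is_stable(jac, x_star):
    """An equilibrium attracts trajectories if all Jacobian eigenvalues have Re < 0."""
    return bool(np.all(np.linalg.eigvals(np.atleast_2d(jac(x_star))).real < 0))
```

    Running this over the model's equilibria, as a function of the model coefficients, is what establishes the causality between coefficients and predicted equilibrium states.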

  12. GUIDANCE FOR THE PERFORMANCE EVALUATION OF THREE-DIMENSIONAL AIR QUALITY MODELING SYSTEMS FOR PARTICULATE MATTER AND VISIBILITY

    EPA Science Inventory

    The National Ambient Air Quality Standards for particulate matter (PM) and the federal regional haze regulations place some emphasis on the assessment of fine particle (PM2.5) concentrations. Current air quality models need to be improved and evaluated against observations to a...

  13. Improving Conceptual Understanding and Representation Skills through Excel-Based Modeling

    ERIC Educational Resources Information Center

    Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.

    2018-01-01

    The National Research Council framework for science education and the Next Generation Science Standards have developed a need for additional research and development of curricula that is both technologically model-based and includes engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental…

  14. Practices and Policies for Implementing Restorative Justice within Schools

    ERIC Educational Resources Information Center

    Pavelka, Sandra

    2013-01-01

    Restorative justice models provide schools with the opportunity to improve school culture by addressing the disciplinary standards and creating a forum for peaceful resolution of conflict and misbehavior. These models seek to determine the impact of the incident and establish a mutual, prescriptive agreement for resolving and repairing the harm…

  15. Absolute Spectrophotometric Calibration to 1% from the FUV through the near-IR

    NASA Astrophysics Data System (ADS)

    Finley, David

    2006-07-01

    We are requesting additional support to complete the work now being carried out under the Cycle 14 archive program, HST-AR-10654. The most critical component of that effort is an accurate determination of the STIS spectrometer LSF, so that we may correctly model the infill of the Balmer line cores by light redistributed from the wings and adjacent continuum. That is the essential input for obtaining accurate and unbiased effective temperatures and gravities, and hence calibrated fluxes, via line profile fitting of the WD calibration standards. To evaluate the published STIS LSF, we investigated the spectral images of the calibration targets, yielding several significant results: (a) the STIS LSF varies significantly; (b) existing observation-based spectroscopic LSFs or imaging PSFs are inadequate for deriving suitable spectroscopic LSFs; (c) accounting for the PSF/LSF variability will improve spectrophotometric accuracy; (d) the LSFs used for model fits must be consistent with the extraction process details; and (e) TinyTim-generated PSFs, with some modifications, provide the most suitable basis for producing the required LSFs that are tailored to each individual spectral observation. Based on our current (greatly improved) state of knowledge of the instrumental effects, we are now requesting additional support to complete the work needed to generate correct LSFs, and then carry out the analyses that were the subject of the original proposal. Our goal is the same: to produce a significant improvement to the existing HST calibration. The current calibration is based on three primary DA white dwarf standards, GD 71, GD 153, and G 191-B2B. The standard fluxes are calculated using NLTE models, with effective temperatures and gravities that were derived from Balmer line fits using LTE models.
We propose to improve the accuracy and internal consistency of the calibration by deriving corrected effective temperatures and gravities based on fitting the observed line profiles with updated NLTE models, and including the fit results from multiple STIS spectra, rather than the (usually) 1 or 2 ground-based spectra used previously. We will also determine the fluxes for 5 new, fainter primary or secondary standards, extending the standard V magnitude lower limit from 13.4 to 16.5, and extending the wavelength coverage from 0.1 to 2.5 micron. The goal is to achieve an overall flux accuracy of 1%, which will be needed, for example, for the upcoming supernova survey missions to measure the equation of state of the dark energy that is accelerating the expansion of the universe.

  16. Lattice field theory applications in high energy physics

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven

    2016-10-01

    Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.

  17. Supervised Risk Predictor of Breast Cancer Based on Intrinsic Subtypes

    PubMed Central

    Parker, Joel S.; Mullins, Michael; Cheang, Maggie C.U.; Leung, Samuel; Voduc, David; Vickery, Tammi; Davies, Sherri; Fauron, Christiane; He, Xiaping; Hu, Zhiyuan; Quackenbush, John F.; Stijleman, Inge J.; Palazzo, Juan; Marron, J.S.; Nobel, Andrew B.; Mardis, Elaine; Nielsen, Torsten O.; Ellis, Matthew J.; Perou, Charles M.; Bernard, Philip S.

    2009-01-01

    Purpose To improve on current standards for breast cancer prognosis and prediction of chemotherapy benefit by developing a risk model that incorporates the gene expression–based “intrinsic” subtypes luminal A, luminal B, HER2-enriched, and basal-like. Methods A 50-gene subtype predictor was developed using microarray and quantitative reverse transcriptase polymerase chain reaction data from 189 prototype samples. Test sets from 761 patients (no systemic therapy) were evaluated for prognosis, and 133 patients were evaluated for prediction of pathologic complete response (pCR) to a taxane and anthracycline regimen. Results The intrinsic subtypes as discrete entities showed prognostic significance (P = 2.26E-12) and remained significant in multivariable analyses that incorporated standard parameters (estrogen receptor status, histologic grade, tumor size, and node status). A prognostic model for node-negative breast cancer was built using intrinsic subtype and clinical information. The C-index estimate for the combined model (subtype and tumor size) was a significant improvement on either the clinicopathologic model or subtype model alone. The intrinsic subtype model predicted neoadjuvant chemotherapy efficacy with a negative predictive value for pCR of 97%. Conclusion Diagnosis by intrinsic subtype adds significant prognostic and predictive information to standard parameters for patients with breast cancer. The prognostic properties of the continuous risk score will be of value for the management of node-negative breast cancers. The subtypes and risk score can also be used to assess the likelihood of efficacy from neoadjuvant chemotherapy. PMID:19204204

  18. It's Only a Phase: Applying the 5 Phases of Clinical Trials to the NSCR Model Improvement Process

    NASA Technical Reports Server (NTRS)

    Elgart, S. R.; Milder, C. M.; Chappell, L. J.; Semones, E. J.

    2017-01-01

    NASA limits astronaut radiation exposures to a 3% risk of exposure-induced death from cancer (REID) at the upper 95% confidence level. Since astronauts approach this limit, it is important that the estimate of REID be as accurate as possible. The NASA Space Cancer Risk 2012 (NSCR-2012) model has been the standard for NASA's space radiation protection guidelines since its publication in 2013. The model incorporates elements from U.S. baseline statistics, Japanese atomic bomb survivor research, animal models, cellular studies, and radiation transport to calculate astronaut baseline risk of cancer and REID. The NSCR model is under constant revision to ensure emerging research is incorporated into radiation protection standards. It is important to develop guidelines, however, to determine what new research is appropriate for integration. Certain standards of transparency are necessary in order to assess data quality, statistical quality, and analytical quality. To this effect, all original source code and any raw data used to develop the code are required to confirm there are no errors which significantly change reported outcomes. It is possible to apply a clinical trials approach to select and assess the improvement concepts that will be incorporated into future iterations of NSCR. This poster describes the five phases of clinical trials research, pre-clinical research, and clinical research phases I-IV, explaining how each step can be translated into an appropriate NSCR model selection guideline.

  19. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.

  20. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. 
The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
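    A minimal sketch of the quantities involved: the net benefit at a threshold probability, and its weighted integral over the range of interest. Here the weight density is supplied directly; the paper's contribution is estimating that density from the data rather than assuming it uniform.

```python
import numpy as np

def net_benefit(y, risk, pt):
    """Net benefit of treating everyone with predicted risk >= threshold pt."""
    treat = risk >= pt
    n = len(y)
    tp = np.sum(treat & (y == 1)) / n            # true-positive rate (of cohort)
    fp = np.sum(treat & (y == 0)) / n            # false-positive rate (of cohort)
    return tp - fp * pt / (1.0 - pt)

def weighted_anbc(y, risk, thresholds, weights):
    """Trapezoidal integral of net_benefit(pt) * w(pt), where w is the
    (estimated) density of threshold probabilities among patients."""
    g = np.array([net_benefit(y, risk, t) for t in thresholds]) * weights
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(thresholds)))
```

    Comparing this summary measure between two risk models resolves cases where their net-benefit curves cross inside the range of interest.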

  1. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

    One of the key goals in the SEMI industry is to improve equipment throughput and maximize equipment production efficiency. This paper builds on SEMI standards for semiconductor equipment control, defines the transaction rules between different tool states, and presents a Tool Efficiency Analysis (TEA) system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified; it produced the parameter values used to measure equipment performance, along with recommendations for improvement.
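    A sketch of the finite-state-machine idea: validate transitions between tool states and accumulate time in each state to derive efficiency metrics. The state names and transition rules below are hypothetical, not the SEMI-defined state set.

```python
from collections import defaultdict

# Hypothetical tool states and allowed transitions (illustrative only).
TRANSITIONS = {
    "IDLE": {"PRODUCTIVE", "DOWN"},
    "PRODUCTIVE": {"IDLE", "DOWN"},
    "DOWN": {"IDLE"},
}

def utilization(events, horizon):
    """events: time-sorted list of (timestamp, state) transitions.
    Returns the fraction of the horizon spent in each state."""
    time_in = defaultdict(float)
    for (t0, s0), (t1, s1) in zip(events, events[1:]):
        assert s1 in TRANSITIONS[s0], f"illegal transition {s0}->{s1}"
        time_in[s0] += t1 - t0
    t_last, s_last = events[-1]
    time_in[s_last] += horizon - t_last          # last state runs to horizon
    return {s: dt / horizon for s, dt in time_in.items()}
```

    The productive-time fraction computed this way is the kind of parameter value such a system reports for measuring equipment performance.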

  2. Public reporting needed to improve the health of Tennesseans.

    PubMed

    Bailey, James E; Gibson, Deborah

    2005-11-01

    Tennessee providers are recognizing an urgent need for a new and improved model of healthcare, characterized by transparent accountability to consumers. Meaningful health system improvements will require broad public disclosure of healthcare performance data at the hospital, clinic and community levels using nationally recognized standards. All Tennessee communities need a routine community health report card, to help their citizens to work together toward their most important health goals.

  3. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.
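    One common source of non-smoothness in propulsion models is hard-clipping the thruster to its operating envelope. As a hedged sketch of the general technique (not the authors' model: the softplus clamp, curve coefficients, and power limits below are invented for illustration), a C-infinity clip keeps derivatives usable by a gradient-based optimizer:

```python
import numpy as np

def smooth_clip(x, lo, hi, k=50.0):
    """Smooth approximation of clip(x, lo, hi) built from softplus terms,
    so the function stays differentiable at the envelope boundaries."""
    sp = lambda z: np.log1p(np.exp(-np.abs(k * z))) / k + np.maximum(z, 0.0)
    return lo + sp(x - lo) - sp(x - hi)

def thrust_mdot(P_avail, ct, cm):
    """Polynomial thrust and mass-flow curves in available power (a common
    low-thrust modeling form; coefficients and limits are illustrative)."""
    P = smooth_clip(P_avail, 0.5, 5.0)           # assumed envelope, kW
    T = np.polyval(ct, P)                         # thrust
    mdot = np.polyval(cm, P)                      # mass flow rate
    return T, mdot
```

    Increasing k sharpens the clamp toward the hard clip while preserving continuous derivatives everywhere.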

  4. Standard model with a complex scalar singlet: Cosmological implications and theoretical considerations

    NASA Astrophysics Data System (ADS)

    Chiang, Cheng-Wei; Ramsey-Musolf, Michael J.; Senaha, Eibun

    2018-01-01

    We analyze the theoretical and phenomenological considerations for the electroweak phase transition and dark matter in an extension of the standard model with a complex scalar singlet (cxSM). In contrast with earlier studies, we use a renormalization group improved scalar potential and treat its thermal history in a gauge-invariant manner. We find that the parameter space consistent with a strong first-order electroweak phase transition (SFOEWPT) and present dark matter phenomenological constraints is significantly restricted compared to results of a conventional, gauge-noninvariant analysis. In the simplest variant of the cxSM, recent LUX data and a SFOEWPT require a dark matter mass close to half the mass of the standard model-like Higgs boson. We also comment on various caveats regarding the perturbative treatment of the phase transition dynamics.

  5. Potential reductions in ambient NO2 concentrations from meeting diesel vehicle emissions standards

    NASA Astrophysics Data System (ADS)

    von Schneidemesser, Erika; Kuik, Friderike; Mar, Kathleen A.; Butler, Tim

    2017-11-01

    Exceedances of the concentration limit value for ambient nitrogen dioxide (NO2) at roadside sites are an issue in many cities throughout Europe. This is linked to the emissions of light duty diesel vehicles which have on-road emissions that are far greater than the regulatory standards. These exceedances have substantial implications for human health and economic loss. This study explores the possible gains in ambient air quality if light duty diesel vehicles were able to meet the regulatory standards (including both emissions standards from Europe and the United States). We use two independent methods: a measurement-based and a model-based method. The city of Berlin is used as a case study. The measurement-based method used data from 16 monitoring stations throughout the city of Berlin to estimate annual average reductions in roadside NO2 of 9.0 to 23 µg m-3 and in urban background NO2 concentrations of 1.2 to 2.7 µg m-3. These ranges account for differences in fleet composition assumptions, and the stringency of the regulatory standard. The model simulations showed reductions in urban background NO2 of 2.0 µg m-3, and at the scale of the greater Berlin area of 1.6 to 2.0 µg m-3 depending on the setup of the simulation and resolution of the model. Similar results were found for other European cities. The similarities in results using the measurement- and model-based methods support our ability to draw robust conclusions that are not dependent on the assumptions behind either methodology. The results show the significant potential for NO2 reductions if regulatory standards for light duty diesel vehicles were to be met under real-world operating conditions. Such reductions could help improve air quality by reducing NO2 exceedances in urban areas, but also have broader implications for improvements in human health and other benefits.

  6. Nonadiabatic nonradial p-mode frequencies of the standard solar model, with and without helium diffusion

    NASA Technical Reports Server (NTRS)

    Guenther, D. B.

    1994-01-01

    The nonadiabatic frequencies of a standard solar model and a solar model that includes helium diffusion are discussed. The nonadiabatic pulsation calculation includes physics that describes the losses and gains due to radiation. Radiative gains and losses are modeled in both the diffusion approximation, which is only valid in optically thick regions, and the Eddington approximation, which is valid in both optically thin and thick regions. The calculated pulsation frequencies for modes with l less than or equal to 1320 are compared to the observed spectrum of the Sun. Compared to a strictly adiabatic calculation, the nonadiabatic calculation of p-mode frequencies improves the agreement between model and observation. When helium diffusion is included in the model the frequencies of the modes that are sensitive to regions near the base of the convection zone are improved (i.e., brought into closer agreement with observation), but the agreement is made worse for other modes. Cyclic variations in the frequency spacings of the Sun as a function of frequency of n are presented as evidence for a discontinuity in the structure of the Sun, possibly located near the base of the convection zone.

  7. Migration Stories: Upgrading a PDS Archive to PDS4

    NASA Astrophysics Data System (ADS)

    Kazden, D. P.; Walker, R. J.; Mafi, J. N.; King, T. A.; Joy, S. P.; Moon, I. S.

    2015-12-01

    Increasing bandwidth, storage capacity and computational capabilities have greatly increased our ability to access data and use them. A significant challenge, however, is to make data archived under older standards useful in the new data environments. NASA's Planetary Data System (PDS) recently released version 4 of its information model (PDS4). PDS4 is an improvement and has advantages over previous versions. PDS4 adopts the XML standard for metadata and expresses structural requirements with XML Schema and content constraints by using Schematron. This allows for thorough validation by using off-the-shelf tools, a substantial improvement over previous PDS versions. PDS4 was designed to improve discoverability of products (resources) in a PDS archive. These additions allow for more uniform metadata harvesting from the collection level to the product level. New tools and services are being deployed that depend on the data adhering to the PDS4 model. However, the PDS has been an operational archive since 1989 and has large holdings that are compliant with previous versions of the PDS information model. The challenge is to make the older data accessible and usable with the new PDS4-based tools. To provide uniform utility and access to the entire archive, the older data must be migrated to the PDS4 model. At the Planetary Plasma Interactions (PPI) Node of the PDS, we have been actively planning and preparing to migrate our legacy archive to the new PDS4 standards for several years. With the release of the PDS4 standards we have begun the migration of our archive. In this presentation we will discuss the preparation of the data for the migration and how we are approaching this task. The presentation will consist of a series of stories to describe our experiences and the best practices we have learned.
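    The XML-based metadata approach can be illustrated with a deliberately minimal, hypothetical label (real PDS4 labels carry many more required classes and are validated against the official XML Schema and Schematron, e.g. with lxml or the PDS validate tool; only the namespace URI below is the real PDS4 one):

```python
import xml.etree.ElementTree as ET

NS = "http://pds.nasa.gov/pds4/pds/v1"
ET.register_namespace("", NS)

def make_label(lid, title):
    """Build a skeletal PDS4-style label (illustrative, not schema-complete)."""
    root = ET.Element(f"{{{NS}}}Product_Observational")
    ident = ET.SubElement(root, f"{{{NS}}}Identification_Area")
    ET.SubElement(ident, f"{{{NS}}}logical_identifier").text = lid
    ET.SubElement(ident, f"{{{NS}}}title").text = title
    return root

label = make_label("urn:nasa:pds:example:data:product_1", "Example product")
xml_bytes = ET.tostring(label)
# Round-trip: metadata can be harvested uniformly from the XML tree.
parsed = ET.fromstring(xml_bytes)
lid = parsed.find(f"{{{NS}}}Identification_Area/{{{NS}}}logical_identifier").text
```

    Because every product carries its metadata in one machine-validatable XML structure, harvesting tools can walk an archive from collection to product level without per-dataset parsing code, which is the discoverability gain the migration targets.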

  8. Development of the Digital Arthritis Index, a Novel Metric to Measure Disease Parameters in a Rat Model of Rheumatoid Arthritis.

    PubMed

    Lim, Maria A; Louie, Brenton; Ford, Daniel; Heath, Kyle; Cha, Paulyn; Betts-Lacroix, Joe; Lum, Pek Yee; Robertson, Timothy L; Schaevitz, Laura

    2017-01-01

    Despite a broad spectrum of anti-arthritic drugs currently on the market, there is a constant demand to develop improved therapeutic agents. Efficient compound screening and rapid evaluation of treatment efficacy in animal models of rheumatoid arthritis (RA) can accelerate the development of clinical candidates. Compound screening by evaluation of disease phenotypes in animal models facilitates preclinical research by enhancing understanding of human pathophysiology; however, there is still a continuous need to improve methods for evaluating disease. Current clinical assessment methods are challenged by the subjective nature of scoring-based methods, time-consuming longitudinal experiments, and the requirement for better functional readouts with relevance to human disease. To address these needs, we developed a low-touch, digital platform for phenotyping preclinical rodent models of disease. As a proof of concept, we utilized the rat collagen-induced arthritis (CIA) model of RA and developed the Digital Arthritis Index (DAI), an objective and automated behavioral metric that does not require human-animal interaction during the measurement and calculation of disease parameters. The DAI detected the development of arthritis similarly to standard in vivo methods, including ankle joint measurements and arthritis scores, and demonstrated a positive correlation to ankle joint histopathology. The DAI also determined responses to multiple standard-of-care (SOC) treatments and nine repurposed compounds predicted by the SMarTR™ Engine to have varying degrees of impact on RA. The disease profiles generated by the DAI complemented those generated by standard methods. The DAI is a highly reproducible and automated approach that can be used in conjunction with standard methods for detecting RA disease progression and conducting phenotypic drug screens.

  9. High Precision Determination of the β Decay QEC Value of 11C and Implications on the Tests of the Standard Model

    NASA Astrophysics Data System (ADS)

    Gulyuz, K.; Bollen, G.; Brodeur, M.; Bryce, R. A.; Cooper, K.; Eibach, M.; Izzo, C.; Kwan, E.; Manukyan, K.; Morrissey, D. J.; Naviliat-Cuncic, O.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Sumithrarachchi, C. S.; Valverde, A. A.; Villari, A. C. C.

    2016-01-01

    We report the determination of the QEC value of the mirror transition of 11C by measuring the atomic masses of 11C and 11B using Penning trap mass spectrometry. More than an order of magnitude improvement in precision is achieved as compared to the 2012 Atomic Mass Evaluation (Ame2012) [Chin. Phys. C 36, 1603 (2012)]. This leads to a factor of 3 improvement in the calculated Ft value. Using the new value, QEC = 1981.690(61) keV, the uncertainty on Ft is no longer dominated by the uncertainty on the QEC value. Based on this measurement, we provide an updated estimate of the Gamow-Teller to Fermi mixing ratio and standard model values of the correlation coefficients.

  10. High Precision Determination of the β Decay Q(EC) Value of (11)C and Implications on the Tests of the Standard Model.

    PubMed

    Gulyuz, K; Bollen, G; Brodeur, M; Bryce, R A; Cooper, K; Eibach, M; Izzo, C; Kwan, E; Manukyan, K; Morrissey, D J; Naviliat-Cuncic, O; Redshaw, M; Ringle, R; Sandler, R; Schwarz, S; Sumithrarachchi, C S; Valverde, A A; Villari, A C C

    2016-01-08

    We report the determination of the Q(EC) value of the mirror transition of (11)C by measuring the atomic masses of (11)C and (11)B using Penning trap mass spectrometry. More than an order of magnitude improvement in precision is achieved as compared to the 2012 Atomic Mass Evaluation (Ame2012) [Chin. Phys. C 36, 1603 (2012)]. This leads to a factor of 3 improvement in the calculated Ft value. Using the new value, Q(EC)=1981.690(61) keV, the uncertainty on Ft is no longer dominated by the uncertainty on the Q(EC) value. Based on this measurement, we provide an updated estimate of the Gamow-Teller to Fermi mixing ratio and standard model values of the correlation coefficients.

  11. IMGMD: A platform for the integration and standardisation of In silico Microbial Genome-scale Metabolic Models.

    PubMed

    Ye, Chao; Xu, Nan; Dong, Chuan; Ye, Yuannong; Zou, Xuan; Chen, Xiulai; Guo, Fengbiao; Liu, Liming

    2017-04-07

    Genome-scale metabolic models (GSMMs) constitute a platform that combines genome sequences and detailed biochemical information to quantify microbial physiology at the system level. To improve the unity, integrity, correctness, and format of data in published GSMMs, a consensus IMGMD database was built in the LAMP (Linux + Apache + MySQL + PHP) system by integrating and standardizing 328 GSMMs constructed for 139 microorganisms. The IMGMD database can help microbial researchers download manually curated GSMMs, rapidly reconstruct standard GSMMs, design pathways, and identify metabolic targets for strain-improvement strategies. Moreover, the IMGMD database facilitates the integration of wet-lab and in silico data to gain additional insight into microbial physiology. The IMGMD database is freely available, without any registration requirements, at http://imgmd.jiangnan.edu.cn/database.

  12. The search for the pair production of second-generation scalar leptoquarks and measurements of the differential cross sections of the W boson produced in association with jets with the CMS detector at the LHC

    NASA Astrophysics Data System (ADS)

    Baumgartel, Darin C.

    Since the formulation of the Standard Model of particle physics, numerous experiments have sought to observe the signatures of subatomic particles by examining the outcomes of charged-particle collisions. Over time, advances in detector technology and scientific computing have allowed for unprecedented precision measurements of Standard Model phenomena and particle properties. Although the Standard Model has displayed remarkable predictive power, extensions to it have been formulated to account for unexplained phenomena, and these extensions often imply the existence of additional subatomic particles. Consequently, experiments at particle colliders often search for signatures of physics beyond the Standard Model. Such searches and precision measurements are complementary pursuits, as searches are often limited by the precision with which the Standard Model backgrounds can be estimated. At the forefront of present-day collider experiments is the Large Hadron Collider at CERN, which delivers proton-proton collisions with unprecedented energy and luminosity. Collisions are recorded with detectors located at interaction points along the ring of the Large Hadron Collider. The CMS detector is one of two general-purpose detectors at the Large Hadron Collider, and its high-precision detection of particles from collision events makes it a powerful tool for both Standard Model measurements and searches for new physics. The Standard Model is characterized by three generations of quarks and leptons. This correspondence between the generations of quarks and leptons is necessary to allow for the renormalizability of the Standard Model, but it is not an inherent property of the model. Motivated by this compelling symmetry, many theories and models propose the existence of leptoquark bosons, which mediate transitions between quarks and leptons. 
Experimental constraints indicate that leptoquarks would couple to a single generation, and this thesis describes searches for leptoquarks produced in pairs and decaying to final states containing either two muons and two jets, or one muon, one muon-neutrino, and two jets. Searches are conducted with collision data at center-of-mass energies of both 7 TeV and 8 TeV. No compelling evidence for the existence of leptoquarks is found, and upper limits on the cross section, with corresponding lower limits on the leptoquark mass, are placed at the 95% confidence level. These mass limits are the most stringent to date, several times higher than those placed previously at hadron collider experiments. While the pair production of massive leptoquark bosons yields final states with strong kinematic differences from Standard Model processes, the ability to exploit these differences is limited by the ability to accurately model the backgrounds. The most notable of these backgrounds is the production of a W boson in association with one or more jets. Since the W+jets process has a very large cross section and a final state containing missing energy, its contribution to the total Standard Model background is both nominally large and harder to discriminate against than backgrounds with only visible final-state objects. Furthermore, estimates of this background are not easily improved by comparisons with data in control regions, and simulations of the background are often limited to leading-order predictions. To improve the understanding and modeling of this background for future endeavors, this thesis also presents measurements of the W+jets process differentially as a function of several variables, including the jet multiplicity, the individual jet transverse momenta and pseudorapidities, the angular separation between the jets and the muon, and the scalar sum of the transverse momenta of all jets. 
The agreement of these measurements with predictions from leading-order event generators and next-to-leading-order calculations is assessed.

  13. Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models

    PubMed Central

    Prinold, Joe A.I.; Bull, Anthony M.J.

    2014-01-01

    Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path, but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer, at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621

  14. Maximizing your Process Improvement ROI through Harmonization

    DTIC Science & Technology

    2008-03-01

    ISO 12207) provide comprehensive guidance on what system and software engineering processes are needed. The frameworks of Six Sigma provide specific...reductions. Their veloci-Q Enterprise integrated system includes ISO 9001, CMM, P-CMM, TL9000, British Standard 7799, and Six Sigma. They estimate a 30...at their discretion. And they chose to blend process maturity models and ISO standards to support their objective regarding the establishment of

  15. Federal Workforce Quality: Measurement and Improvement

    DTIC Science & Technology

    1992-08-01

    explicit standards of production and service quality. Assessment Tools: OPM should institutionalize its data collection program of longitudinal research...requirements, should set explicit standards of production and service quality...include data about various aspects of the model. That is, the...are the immediate consumers of the products and services delivered, and still others in the larger society who have no

  16. Search for the Standard Model Higgs boson produced in association with top quarks and decaying into bb̄ in pp collisions at √s = 8 TeV with the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2015-07-29

    In this study, a search for the Standard Model Higgs boson produced in association with a top-quark pair, tt¯H, is presented. The analysis uses 20.3 fb–1 of pp collision data at √s = 8 TeV, collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the H→bb¯ decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by tt¯+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve discrimination of the irreducible tt¯+bb¯ background. No significant excess of events above the background expectation is found and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured tt¯H signal cross section to the Standard Model expectation is found to be μ = 1.5 ± 1.1 assuming a Higgs boson mass of 125 GeV.

  17. Stochastic models for atomic clocks

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.; Jones, R. H.; Tryon, P. V.; Allan, D. W.

    1983-01-01

    For the atomic clocks used in the National Bureau of Standards Time Scales, an adequate model is the superposition of white FM, random-walk FM, and linear frequency drift for times longer than about one minute. The model was tested on several clocks using maximum likelihood techniques for parameter estimation, and the residuals were acceptably random. Conventional diagnostics indicate that additional model elements would contribute no significant improvement while adding model complexity.
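
    The three-component clock model described above (white FM plus random-walk FM plus linear frequency drift) is easy to sketch as a simulation; the function and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def simulate_clock_freq(n, dt, sigma_white, sigma_rw, drift, seed=0):
    """Fractional-frequency samples as a superposition of white FM,
    random-walk FM, and linear frequency drift (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    white_fm = sigma_white * rng.standard_normal(n)        # white FM noise
    rw_fm = np.cumsum(sigma_rw * rng.standard_normal(n))   # random-walk FM noise
    return white_fm + rw_fm + drift * t

# 1000 one-minute samples with plausible (made-up) noise magnitudes
y = simulate_clock_freq(1000, 60.0, 1e-12, 1e-14, 1e-16)
```

    Fitting such a model by maximum likelihood, as the paper does, amounts to estimating the two noise magnitudes and the drift rate from observed frequency data.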

  18. Development of a Two-fluid Drag Law for Clustered Particles using Direct Numerical Simulation and Validation through Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gokaltun, Seckin; Munroe, Norman; Subramaniam, Shankar

    2014-12-31

    This study presents a new drag model, based on cohesive inter-particle forces, implemented in the MFIX code. The new drag model combines an existing standard model in MFIX with a particle-based drag model through a switching principle: switches between the models occur in regions of the computational domain where strong particle-to-particle cohesion potential is detected. Three versions of the new model were obtained by using a different standard drag model in each version. The performance of each version was then compared against available experimental data for a fluidized bed, published in the literature and used extensively by other researchers for validation purposes. In our analysis of the results, we first observed that the standard models used in this research were incapable of producing closely matching results. We then showed, for a simple case, that a threshold needs to be set on the solid volume fraction; this modification avoids non-physical clustering predictions when the governing equation of the solid granular temperature is solved. Applying our hybrid technique, we observed that it significantly improved the numerical results; however, the improvement depended on the threshold of the cohesive index used in the switching procedure. Our results showed that small values of this threshold could significantly reduce the computational error for all versions of the proposed drag model. In addition, we redesigned an existing circulating fluidized bed (CFB) test facility in order to create validation cases for the clustering regime of Geldart A type particles.
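
    The switching principle in the abstract reduces to a threshold test on a cohesive index: use the particle-based drag closure where cohesion is strong, otherwise the standard drag law. This scalar sketch, with illustrative names, mirrors only the idea, not the MFIX implementation:

```python
def blended_drag(cohesive_index, threshold, standard_drag, particle_drag):
    """Select the particle-based drag value where the cohesive index
    reaches the switching threshold; fall back to the standard drag
    law elsewhere (illustrative sketch of the hybrid switching idea)."""
    return particle_drag if cohesive_index >= threshold else standard_drag
```

    The abstract's finding that small thresholds reduce error corresponds to applying the particle-based closure over a larger fraction of the domain.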

  19. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches.

    PubMed

    Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul

    2015-11-04

    Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and to propose simple Bayesian models that adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false negatives and false positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. The review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved by using the proposed Bayesian models rather than standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and identifies an important risk factor (participation in forest extractivism) that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.
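
    The core correction such models make can be written compactly: the probability of an observed positive result from an imperfect test mixes true positives (sensitivity times the latent disease probability) with false positives. A minimal NumPy sketch of that likelihood, with illustrative names (the paper itself supplies WinBUGS code, not this):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def obs_positive_prob(X, beta, se, sp):
    """P(test positive) under an imperfect test: the latent disease
    probability p follows a logistic model, and the observed result
    mixes true positives (se * p) with false positives
    ((1 - sp) * (1 - p))."""
    p = sigmoid(X @ beta)
    return se * p + (1.0 - sp) * (1.0 - p)

def neg_log_lik(beta, X, y, se, sp):
    """Negative log-likelihood that could be minimized directly or
    embedded in a Bayesian sampler."""
    q = obs_positive_prob(X, beta, se, sp)
    return -np.sum(y * np.log(q) + (1.0 - y) * np.log(1.0 - q))
```

    With a perfect test (se = sp = 1) this collapses to ordinary logistic regression, which is why ignoring misclassification silently biases the fitted odds ratios.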

  20. How to Develop and Interpret a Credibility Assessment of Numerical Models for Human Research: NASA-STD-7009 Demystified

    NASA Technical Reports Server (NTRS)

    Nelson, Emily S.; Mulugeta, Lealem; Walton, Marlei; Myers, Jerry G.

    2014-01-01

    In the wake of the Columbia accident, the NASA-STD-7009 [1] credibility assessment was developed as a unifying platform to describe model credibility and the uncertainties in modeling predictions. This standard is now being adapted by NASA's Human Research Program to cover a wide range of numerical models for human research. When used properly, the standard can improve the process of code development by encouraging the use of best practices. It can also give management more insight in making informed decisions through a better understanding of a model's capabilities and limitations. To a newcomer, the abstractions presented in NASA-STD-7009 and the sheer volume of information that must be absorbed can be overwhelming. This talk is aimed at describing the credibility assessment, which is the heart of the standard, in plain terms. It will outline how to develop a credibility assessment under the standard. It will also show how to quickly interpret the graphs and tables that result from the assessment and how to drill down from the top-level view to the foundation of the assessment. Finally, it will highlight some of the resources that are available for further study.

  1. Big A Systems Architecture - From Strategy to Design: Systems Architecting in DoD

    DTIC Science & Technology

    2013-04-01

    and modeling standards such as Systems Modeling Language (SysML). These tools have great potential to facilitate collaboration and improve the...MBSE and SysML to Big “A” systems architecting and the potential benefits to acquisition outcomes are planned to be the subject of a future article

  2. Modeling tropospheric wet delays with national GNSS reference network in China for BeiDou precise point positioning

    NASA Astrophysics Data System (ADS)

    Zheng, Fu; Lou, Yidong; Gu, Shengfeng; Gong, Xiaopeng; Shi, Chuang

    2017-10-01

    During past decades, precise point positioning (PPP) has proven to be a well-established positioning technique with centimeter- or decimeter-level accuracy. However, it needs a long convergence time to reach high-accuracy positioning, which limits the prospects of PPP, especially in real-time applications. The convergence time can be reduced by introducing high-quality external information, such as ionospheric or tropospheric corrections. In this study, several methods for modeling tropospheric wet delays over wide areas are investigated, and a new, improved model applicable to real-time applications in China is developed. Based on the GPT2w model, a modified parameter for the exponential decay of the zenith wet delay with respect to height is introduced in the modeling of the real-time tropospheric delay. The accuracy of this tropospheric model and of the GPT2w model in different seasons is evaluated with cross-validation; the root mean square of the zenith troposphere delay (ZTD) is 1.2 and 3.6 cm on average, respectively. The new model also proves better than tropospheric modeling based on the water-vapor scale height: it can accurately express tropospheric delays up to 10 km altitude, which potentially benefits many real-time applications. With the high-accuracy ZTD model, the augmented PPP convergence performance for the BeiDou navigation satellite system (BDS) and GPS is evaluated. The contribution of the high-quality ZTD model to PPP convergence depends on the constellation geometry. As the BDS constellation geometry is poorer than that of GPS, the improvement for BDS PPP is more significant than for GPS PPP. Compared with standard real-time PPP, the convergence time is reduced by 2-7% and 20-50% for the augmented BDS PPP, while GPS PPP improves only by about 6% and 18% (on average), in horizontal and vertical directions, respectively. 
When GPS and BDS are combined, the geometry is greatly improved and is already good enough to obtain a reliable PPP solution, so the augmented PPP shows only insignificant improvement over standard PPP.
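
    The height dependence at the heart of the model — a zenith wet delay that decays exponentially with height, with the decay parameter fitted rather than fixed — can be sketched as follows (names and numbers are illustrative, not the paper's):

```python
import math

def zwd_at_height(zwd_ref, h, h_ref, decay_scale):
    """Map a reference zenith wet delay (m) at height h_ref (m) to a
    target height h (m) via exponential decay; decay_scale plays the
    role of the fitted decay parameter described in the abstract."""
    return zwd_ref * math.exp(-(h - h_ref) / decay_scale)

# e.g. a 0.20 m sea-level wet delay mapped to a 2 km station
zwd_2km = zwd_at_height(0.20, 2000.0, 0.0, 2000.0)
```

    A gridded product of (zwd_ref, decay parameter) pairs would let a real-time user interpolate a ZTD correction at any station height, which is what makes such a model useful for augmenting PPP.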

  3. Electronic Information Standards to Support Obesity Prevention and Bridge Services Across Systems, 2010-2015.

    PubMed

    Wiltz, Jennifer L; Blanck, Heidi M; Lee, Brian; Kocot, S Lawrence; Seeff, Laura; McGuire, Lisa C; Collins, Janet

    2017-10-26

    Electronic information technology standards facilitate high-quality, uniform collection of data for improved delivery and measurement of health care services. Electronic information standards also aid information exchange between secure systems that link health care and public health for better coordination of patient care and better-informed population health improvement activities. We developed international data standards for healthy weight that provide common definitions for electronic information technology. The standards capture healthy weight data on the "ABCDs" of a visit to a health care provider that addresses initial obesity prevention and care: assessment, behaviors, continuity, identify resources, and set goals. The process of creating healthy weight standards consisted of identifying needs and priorities, developing and harmonizing standards, testing the exchange of data messages, and demonstrating use-cases. Healthy weight products include 2 message standards, 5 use-cases, 31 LOINC (Logical Observation Identifiers Names and Codes) question codes, 7 healthy weight value sets, 15 public-private engagements with health information technology implementers, and 2 technical guides. A logic model and action steps outline activities toward better data capture, interoperable systems, and information use. Sharing experiences and leveraging this work in the context of broader priorities can inform the development of electronic information standards for similar core conditions and guide strategic activities in electronic systems.

  4. Electronic Information Standards to Support Obesity Prevention and Bridge Services Across Systems, 2010–2015

    PubMed Central

    Blanck, Heidi M.; Lee, Brian; Kocot, S. Lawrence; Seeff, Laura; McGuire, Lisa C.; Collins, Janet

    2017-01-01

    Electronic information technology standards facilitate high-quality, uniform collection of data for improved delivery and measurement of health care services. Electronic information standards also aid information exchange between secure systems that link health care and public health for better coordination of patient care and better-informed population health improvement activities. We developed international data standards for healthy weight that provide common definitions for electronic information technology. The standards capture healthy weight data on the “ABCDs” of a visit to a health care provider that addresses initial obesity prevention and care: assessment, behaviors, continuity, identify resources, and set goals. The process of creating healthy weight standards consisted of identifying needs and priorities, developing and harmonizing standards, testing the exchange of data messages, and demonstrating use-cases. Healthy weight products include 2 message standards, 5 use-cases, 31 LOINC (Logical Observation Identifiers Names and Codes) question codes, 7 healthy weight value sets, 15 public–private engagements with health information technology implementers, and 2 technical guides. A logic model and action steps outline activities toward better data capture, interoperable systems, and information use. Sharing experiences and leveraging this work in the context of broader priorities can inform the development of electronic information standards for similar core conditions and guide strategic activities in electronic systems. PMID:29072985

  5. Status of the Local Enforcement of Energy Efficiency Standards and Labeling Program in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nan; Zheng, Nina; Fino-Chen, Cecilia

    2011-09-26

    As part of its commitment to promoting and improving the local enforcement of appliance energy efficiency standards and labeling, the China National Institute of Standardization (CNIS) launched the National and Local Enforcement of Energy Efficiency Standards and Labeling project on August 14, 2009. The project's short-term goal is to expand the effort to improve enforcement of standards and labeling requirements to the entire country within three years, with a long-term goal of perfecting overall enforcement. For this project, Jiangsu, Shandong, Sichuan and Shanghai were selected as pilot locations. This report provides information on the local enforcement project's recent background, activities and results, as well as a comparison to previous rounds of check-testing in 2006 and 2007. In addition, the report offers an evaluation of the achievements and weaknesses of the local enforcement scheme, along with recommendations. The results demonstrate both improvement and some backsliding. Enforcement schemes are in place in all target cities, and applicable national standards and regulations were followed as the basis for local check testing. Check-testing results show generally high labeling compliance across regions, with 100% compliance for five products, including full compliance for all three products tested in Jiangsu province and two out of three products tested in Shandong province. Program results also identified key weaknesses in labeling compliance in Sichuan, as well as in efficiency-standards compliance levels for small and medium three-phase asynchronous motors and self-ballasted fluorescent lamps. For example, compliance for the same product ranged from as low as 40% to 100%, with mixed results for products that had been tested in previous rounds. 
For refrigerators in particular, the efficiency-standards compliance rate exhibited a wider range of 50% to 100%, and the average rate across all tested models dropped from 96% in 2007 to 63%, possibly due to the implementation of newly strengthened efficiency standards in 2009. Areas for improvement include: greater awareness at the local level to ensure that all manufacturers register their products with the label certification project and to minimize their resistance to inspections; improvement of the product sampling methodology to include representative testing of both large and small manufacturers and greater standardization of testing tools and procedures; and continued improvement in local enforcement efforts.

  6. Bedside handover: quality improvement strategy to "transform care at the bedside".

    PubMed

    Chaboyer, Wendy; McMurray, Anne; Johnson, Joanne; Hardy, Linda; Wallis, Marianne; Sylvia Chu, Fang Ying

    2009-01-01

    This quality improvement project implemented bedside handover in nursing. Using Lewin's 3-Step Model for Change, 3 wards in an Australian hospital changed from verbal reporting in an isolated room to bedside handover. Practice guidelines and a competency standard were developed. The change was received positively by both staff and patients. Staff members reported that bedside handover improved safety, efficiency, teamwork, and the level of support from senior staff members.

  7. Cost-effectiveness and Budget Impact of Routine Use of Fractional Exhaled Nitric Oxide Monitoring for the Management of Adult Asthma Patients in Spain.

    PubMed

    Sabatelli, L; Seppälä, U; Sastre, J; Crater, G

    Fractional exhaled nitric oxide (FeNO) is a marker for type 2 airway inflammation. The objective of this study was to evaluate the cost-effectiveness and budget impact of FeNO monitoring for management of adult asthma in Spain. A cost-effectiveness analysis model was used to evaluate the effect on costs of adding FeNO monitoring to asthma management. Over a 1-year period, the model estimated the incremental cost per quality-adjusted life year and incremental number of exacerbations avoided when FeNO monitoring was added to standard guideline-driven asthma care compared with standard care alone. Univariate and multivariate sensitivity analyses were applied to explore uncertainty in the model. A budget impact model was used to examine the impact of FeNO monitoring on primary care costs across the Spanish health system. The results showed that adding FeNO to standard asthma care saved €62.53 per patient-year in the adult population and improved quality-adjusted life years by 0.026 per patient-year. The budget impact analysis revealed a potential net yearly saving of €129 million if FeNO monitoring had been used in primary care settings in Spain. The present economic model shows that adding FeNO to the treatment algorithm can considerably reduce costs and improve quality of life when used to manage asthma in combination with current treatment guidelines.
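
    The headline numbers can be checked with standard incremental cost-effectiveness arithmetic. The sketch below uses the abstract's per-patient-year deltas (€62.53 saved, 0.026 QALYs gained); a negative ratio with a QALY gain means the FeNO strategy dominates (cheaper and better). Function and variable names are illustrative:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.
    Negative cost with a positive QALY gain means the new strategy
    dominates the comparator."""
    return delta_cost / delta_qaly

# FeNO-augmented care vs standard care, per patient-year (from abstract)
ratio = icer(-62.53, 0.026)
```

    Dominance is the easy case; when both cost and QALYs increase, the ratio is instead compared against a willingness-to-pay threshold.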

  8. Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Kuczera, George

    1983-10-01

    A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
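The linearized-posterior idea above can be sketched in a few lines: for a nonlinear model fitted by least squares, approximate the parameter posterior as multivariate normal with covariance s²(JᵀJ)⁻¹, where J is the Jacobian at the optimum. The toy two-parameter model and data below are synthetic, not Kuczera's nine-parameter catchment model.

```python
# Minimal sketch of a linearized (multivariate normal) posterior for a
# nonlinear regression model. Model and data are synthetic stand-ins.
import numpy as np

def model(theta, t):
    # toy two-parameter "catchment" response: exponential recession
    a, k = theta
    return a * np.exp(-k * t)

def jacobian(theta, t):
    a, k = theta
    return np.column_stack([np.exp(-k * t), -a * t * np.exp(-k * t)])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
theta_true = np.array([2.0, 0.3])
y = model(theta_true, t) + 0.05 * rng.standard_normal(t.size)

# assume the fit landed at theta_true for illustration (optimizer omitted)
theta_hat = theta_true
resid = y - model(theta_hat, t)
s2 = resid @ resid / (t.size - theta_hat.size)   # error variance estimate
J = jacobian(theta_hat, t)
cov = s2 * np.linalg.inv(J.T @ J)                # linearized posterior covariance

post_sd = np.sqrt(np.diag(cov))                  # direct measure of uncertainty
corr = cov / np.outer(post_sd, post_sd)          # posterior correlation matrix
```

Inspecting `corr`, as the abstract suggests, shows which parameter pairs the data cannot separate and hence what additional data would sharpen the fit.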

  9. Evaluating ambulatory care training in Firoozgar hospital based on Iranian national standards of undergraduate medical education

    PubMed Central

    Sabzghabaei, Foroogh; Salajeghe, Mahla; Soltani Arabshahi, Seyed Kamran

    2017-01-01

    Background: In this study, ambulatory care training in Firoozgar hospital was evaluated against the Iranian national standards of undergraduate medical education related to ambulatory education, using the Baldrige Excellence Model. Moreover, some suggestions were offered to promote education quality, based on a gap analysis between the current condition of ambulatory education in Firoozgar hospital and the national standards. Methods: This descriptive analytic study was an evaluation research performed using the standard checklists published by the office of the undergraduate medical education council. Data were collected by surveying documents, interviewing, and observing the processes based on the Baldrige Excellence Model. After confirming the validity and reliability of the checklists, we evaluated the establishment level of the national standards of undergraduate medical education in the clinics of this hospital in the 4 following domains: educational program, evaluation, training and research resources, and faculty members. Data were analyzed according to the national standards of undergraduate medical education related to ambulatory education and the Baldrige scoring table. Finally, the quality level of the current condition was rated as very appropriate, appropriate, medium, weak, or very weak. Results: In the educational program domain, 62% of the standards were appropriate; in evaluation, 48%; in training and research resources, 46%; in faculty members, 68%; and overall, 56%. Conclusion: The most successful domains were educational program and faculty members, while the evaluation and the training and research resources domains had a medium performance. Some domains and indicators were rated weak and their quality needed to be improved, so it is suggested to provide the necessary facilities and improvements by attending to the quality level of the national standards of ambulatory education. PMID:29951400

  10. The Regionalization of National-Scale SPARROW Models for Stream Nutrients

    USGS Publications Warehouse

    Schwarz, G.E.; Alexander, R.B.; Smith, R.A.; Preston, S.D.

    2011-01-01

    This analysis modifies the parsimonious specification of recently published total nitrogen (TN) and total phosphorus (TP) national-scale SPAtially Referenced Regressions On Watershed attributes models to allow each model coefficient to vary geographically among three major river basins of the conterminous United States. Regionalization of the national models reduces the standard errors in the prediction of TN and TP loads, expressed as a percentage of the predicted load, by about 6 and 7%. We develop and apply a method for combining national-scale and regional-scale information to estimate a hybrid model that imposes cross-region constraints that limit regional variation in model coefficients, effectively reducing the number of free model parameters as compared to a collection of independent regional models. The hybrid TN and TP regional models have improved model fit relative to the respective national models, reducing the standard error in the prediction of loads, expressed as a percentage of load, by about 5 and 4%. Only 19% of the TN hybrid model coefficients and just 2% of the TP hybrid model coefficients show evidence of substantial regional specificity (more than ±100% deviation from the national model estimate). The hybrid models have much greater precision in the estimated coefficients than do the unconstrained regional models, demonstrating the efficacy of pooling information across regions to improve regional models. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
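One simple way to realize the cross-region constraint described above is ridge-style shrinkage of each regional coefficient vector toward the national estimate; this is a hedged sketch of that idea, not the published SPARROW estimator, and all data below are synthetic.

```python
# Sketch of pooled regional estimation via shrinkage toward a national model:
#   beta_r = argmin ||y_r - X_r b||^2 + lam * ||b - beta_nat||^2
# Closed form: (X'X + lam I)^{-1} (X'y + lam beta_nat). Synthetic data only.
import numpy as np

rng = np.random.default_rng(3)
p = 3
beta_nat = np.array([1.0, 0.5, -0.2])            # "national" coefficients

def hybrid_fit(X, y, beta_nat, lam):
    """Penalized least squares shrinking toward the national estimate."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ y + lam * beta_nat)

regions = []
for shift in (0.0, 0.3, -0.3):                   # three regions, mild differences
    X = rng.standard_normal((80, p))
    y = X @ (beta_nat + shift * np.array([1.0, 0.0, 0.0]))
    y += 0.1 * rng.standard_normal(80)
    regions.append((X, y))

loose = [hybrid_fit(X, y, beta_nat, lam=0.1) for X, y in regions]   # near-independent fits
tight = [hybrid_fit(X, y, beta_nat, lam=1e6) for X, y in regions]   # near-national fits
```

The penalty weight plays the role of the cross-region constraint: small values recover independent regional models, large values collapse every region onto the national model, and intermediate values trade regional fit against coefficient precision.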

  11. Project officer's perspective: quality assurance as a management tool.

    PubMed

    Heiby, J

    1993-06-01

    Advances in the management of health programs in less developed countries (LDC) have not kept pace with the progress of the technology used. The US Agency for International Development mandated the Quality Assurance Project (QAP) to provide quality improvement technical assistance to primary health care systems in LDCs while developing appropriate quality assurance (QA) strategies. In recent years, quality-of-care work in the US and Europe has focused on introducing management techniques developed for industry into health systems. The experience of the QAP and its predecessor, the PRICOR Project, shows that quality improvement techniques facilitate measurement of quality of care. A recently developed WHO model for the management of the sick child provides scientifically based standards for actual care. Since 1988, outside investigators measuring how LDC clinicians perform have revealed serious deficiencies in quality compared with the programs' own standards. This prompted the development of new QA management initiatives: 1) communicating standards clearly to program staff; 2) actively monitoring whether actual performance corresponds to these standards; and 3) taking action to improve performance. QA means that managers are expected to monitor service delivery, undertake problem solving, and set specific targets for quality improvement. Quality improvement methods strengthen supervision, as supervisors can objectively assess health worker performance. QA strengthens the management functions that support service delivery, e.g., training, records management, finance, logistics, and supervision. Attention to quality can contribute to improved health worker motivation and effective incentive programs through recognition for a job well done and opportunities for learning new skills. These standards can also address patient satisfaction. QA challenges managers to aim for the optimal level of care attainable.

  12. Explicit Convection over the Western Pacific Warm Pool in the Community Atmospheric Model.

    NASA Astrophysics Data System (ADS)

    Ziemiański, Michał Z.; Grabowski, Wojciech W.; Moncrieff, Mitchell W.

    2005-05-01

    This paper reports on the application of the cloud-resolving convection parameterization (CRCP) to the Community Atmospheric Model (CAM), the atmospheric component of the Community Climate System Model (CCSM). The cornerstone of CRCP is the use of a two-dimensional zonally oriented cloud-system-resolving model to represent processes on mesoscales at the subgrid scale of a climate model. Herein, CRCP is applied at each climate model column over the tropical western Pacific warm pool, in a domain spanning 10°S-10°N, 150°-170°E. Results from the CRCP simulation are compared with CAM in its standard configuration. The CRCP simulation shows significant improvements of the warm pool climate. The cloud condensate distribution is much improved, as is the bias of the tropopause height. More realistic structure of the intertropical convergence zone (ITCZ) during the boreal winter and better representation of the variability of convection are evident. In particular, the diurnal cycle of precipitation has phase and amplitude in good agreement with observations. Also improved is the large-scale organization of the tropical convection, especially superclusters associated with Madden-Julian oscillation (MJO)-like systems. Location and propagation characteristics, as well as lower-tropospheric cyclonic and upper-tropospheric anticyclonic gyres, are more realistic than in the standard CAM. Finally, the simulations support an analytic theory of dynamical coupling between organized convection and equatorial beta-plane vorticity dynamics associated with MJO-like systems.

  13. Quantification of amine functional groups and their influence on OM/OC in the IMPROVE network

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Mohammed; Takahama, Satoshi; Dillner, Ann M.

    2018-01-01

    Recently, we developed a method using FT-IR spectroscopy coupled with partial least squares (PLS) regression to measure the four most abundant organic functional groups, aliphatic C-H, alcohol OH, carboxylic acid OH and carbonyl C=O, in atmospheric particulate matter. These functional groups are summed to estimate organic matter (OM) while the carbon from the functional groups is summed to estimate organic carbon (OC). With this method, OM and OM/OC can be estimated for each sample rather than relying on one assumed value to convert OC measurements to OM. This study continues the development of the FT-IR and PLS method for estimating OM and OM/OC by including the amine functional group. Amines are ubiquitous in the atmosphere and come from motor vehicle exhaust, animal husbandry, biomass burning, and vegetation, among other sources. In this study, calibration standards for amines are produced by aerosolizing individual amine compounds and collecting them on PTFE filters using an IMPROVE sampler, thereby mimicking the filter media and collection geometry of ambient samples. The moles of amine functional group on each standard and a narrow range of amine-specific wavenumbers in the FT-IR spectra (1550-1500 cm⁻¹) are used to develop a PLS calibration model. The PLS model is validated using three methods: prediction of a set of laboratory standards not included in the model, a peak height analysis, and a PLS model with a broader wavenumber range. The model is then applied to the ambient samples collected throughout 2013 from 16 IMPROVE sites in the USA. Urban sites have higher amine concentrations than most rural sites, but amine functional groups account for a lower fraction of OM at urban sites. Amine concentrations, contributions to OM and seasonality vary by site and sample. Amine has a small impact on the annual average OM/OC for urban sites, but for some rural sites, including amine in the OM/OC calculation increased OM/OC by 0.1 or more.
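The calibration idea can be illustrated with a one-component PLS (NIPALS) regression mapping spectra over a narrow wavenumber window to moles of amine functional group. The spectra, band profile, and concentrations below are synthetic stand-ins, not IMPROVE data; the real method fits multi-component PLS models to laboratory standards on PTFE filters.

```python
# Hedged sketch: one-component PLS (NIPALS) calibration relating synthetic
# absorbance spectra (narrow wavenumber window) to amine amounts.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 40, 26                 # e.g. a 1550-1500 cm^-1 window
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 13) / 4.0) ** 2)  # fake amine band
amine_moles = rng.uniform(0.1, 1.0, n_samples)    # "known" laboratory standards
X = np.outer(amine_moles, band)
X += 0.01 * rng.standard_normal((n_samples, n_wavenumbers))  # instrument noise
y = amine_moles

# center, then one NIPALS component: weights w ∝ X^T y, scores t = X w
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w
b = (t @ yc) / (t @ t)                            # regress y on the score

y_pred = y.mean() + (X - X.mean(axis=0)) @ w * b
rmse = np.sqrt(np.mean((y_pred - y) ** 2))        # calibration error on standards
```

Prediction on held-out standards, as in the paper's first validation method, would use the same centered projection with the training-set means.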

  14. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models, and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via an exponentially or linearly weighted moving average in time, are then retrieved from the APC system and applied on the scanner in XY Cartesian form for subsequent lot exposure. The goal of this method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis, while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials are orthogonal on the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high-volume manufacturing (HVM) environment. The measured on-product results were compared to the POR and showed a 7% reduction in overlay variation, including a 22% reduction in term variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
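The collinearity argument can be demonstrated with a toy fit: a few low-order Zernike terms (orthogonal on the unit disk) versus their raw Cartesian counterparts, compared by design-matrix conditioning. This is a simplified illustration, not the production APC implementation; the sample points and overlay signal are synthetic.

```python
# Sketch: fit a wafer-level overlay signal with low-order Zernike terms and
# compare design-matrix conditioning against raw Cartesian monomials.
import numpy as np

rng = np.random.default_rng(2)
n = 500
r = np.sqrt(rng.uniform(0.0, 1.0, n))      # uniform sample points on the unit disk
phi = rng.uniform(0.0, 2.0 * np.pi, n)
x, y = r * np.cos(phi), r * np.sin(phi)

# low-order Zernike terms: piston, tilt x, tilt y, defocus (unit-variance forms)
Z = np.column_stack([
    np.ones(n),                            # piston
    2.0 * r * np.cos(phi),                 # tilt x
    2.0 * r * np.sin(phi),                 # tilt y
    np.sqrt(3.0) * (2.0 * r**2 - 1.0),     # defocus
])
C = np.column_stack([np.ones(n), x, y, x**2 + y**2])   # Cartesian counterpart

# synthetic wafer-level overlay error (arbitrary units)
dx = 0.5 * x - 0.2 * (x**2 + y**2) + 0.01 * rng.standard_normal(n)
coef, *_ = np.linalg.lstsq(Z, dx, rcond=None)

# orthogonality keeps the Zernike columns nearly uncorrelated, so the fit
# is better conditioned than the Cartesian one (piston vs. x^2+y^2 collide)
cond_Z = np.linalg.cond(Z)
cond_C = np.linalg.cond(C)
```

Lower conditioning means dampened, term-by-term APC feedback is less likely to let errors slosh between correlated correction terms from lot to lot.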

  15. Improved NSGA model for multi objective operation scheduling and its evaluation

    NASA Astrophysics Data System (ADS)

    Li, Weining; Wang, Fuyu

    2017-09-01

    Reasonable operation scheduling can increase the income of the hospital and improve patient satisfaction. In this paper, a multi-objective operation scheduling method based on an improved NSGA algorithm shortens operation time, reduces operation cost, and lowers operation risk. A multi-objective optimization model is established for flexible operation scheduling; the Pareto solution set is obtained through MATLAB simulation and the data are standardized. The optimal scheduling scheme is then selected using a combined entropy weight-TOPSIS method. The results show that the algorithm is feasible for the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
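The selection step above, ranking Pareto candidates by entropy-weighted TOPSIS, can be sketched as follows. The decision matrix is made up for illustration (candidate schedules scored on time, cost and risk, all to be minimized); it is not data from the paper.

```python
# Sketch of entropy weight + TOPSIS ranking of candidate (Pareto) schedules.
# Rows: candidate schedules; columns: [total time, cost, risk]; lower is better.
import numpy as np

X = np.array([
    [480.0, 12.0, 0.30],
    [520.0, 10.0, 0.25],
    [450.0, 14.0, 0.35],
    [500.0, 11.0, 0.28],
])

# entropy weights: criteria whose values spread more get more weight
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1.0 - E) / (1.0 - E).sum()

# TOPSIS on the normalized, weighted matrix (all cost criteria: ideal = min)
V = w * X / np.linalg.norm(X, axis=0)
ideal, anti = V.min(axis=0), V.max(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal point
d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal point
closeness = d_neg / (d_pos + d_neg)              # higher = closer to ideal
best = int(np.argmax(closeness))                 # index of the chosen schedule
```

Entropy weighting removes the need for subjective criterion weights, which is why it pairs naturally with an automatically generated Pareto set.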

  16. Observation of the rare $$B^0_s\\to\\mu^+\\mu^-$$ decay from the combined analysis of CMS and LHCb data

    DOE PAGES

    Khachatryan, Vardan

    2015-05-13

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B_s^0) and the B^0 meson decaying into two oppositely charged muons (μ+ and μ−) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B_s^0 → μ+μ− and B^0 → μ+μ− decays are very rare, with about four of the former occurring for every billion B_s^0 mesons produced, and one of the latter occurring for every ten billion B^0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton–proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B_s^0 → μ+μ− decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B^0 → μ+μ− decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B_s^0 and B^0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

  17. Observation of the rare $$B^0_s\\to\\mu^+\\mu^-$$ decay from the combined analysis of CMS and LHCb data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khachatryan, Vardan

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B_s^0) and the B^0 meson decaying into two oppositely charged muons (μ+ and μ−) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B_s^0 → μ+μ− and B^0 → μ+μ− decays are very rare, with about four of the former occurring for every billion B_s^0 mesons produced, and one of the latter occurring for every ten billion B^0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton–proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B_s^0 → μ+μ− decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B^0 → μ+μ− decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B_s^0 and B^0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

  18. Observation of the rare B(s)(0) →µ+µ− decay from the combined analysis of CMS and LHCb data.

    PubMed

    2015-06-04

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B(s)(0)) and the B0 meson decaying into two oppositely charged muons (μ+ and μ−) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B(s)(0) →µ+µ− and B(0) →µ+µ− decays are very rare, with about four of the former occurring for every billion mesons produced, and one of the latter occurring for every ten billion B0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton–proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B(s)(0) → µ+µ− decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B(0) → µ+µ− decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. 
The LHC experiments will resume taking data in 2015, recording proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B(s)(0) and B0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

  19. A Model for Pharmacological Research-Treatment of Cocaine Dependence

    PubMed Central

    Montoya, Ivan D.; Hess, Judith M.; Preston, Kenzie L.; Gorelick, David A.

    2008-01-01

    Major problems for research on pharmacological treatments for cocaine dependence are lack of comparability of results from different treatment research programs and poor validity and/or reliability of results. Double-blind, placebo-controlled, random assignment, experimental designs, using standard intake and assessment procedures help to reduce these problems. Cessation or reduction of drug use and/or craving, retention in treatment, and medical and psychosocial improvement are some of the outcome variables collected in treatment research programs. A model to be followed across different outpatient clinical trials for pharmacological treatment of cocaine dependence is presented here. This model represents an effort to standardize data collection to make results more valid and comparable. PMID:8749725

  20. GLIMPSE: An integrated assessment model-based tool for ...

    EPA Pesticide Factsheets

    Dan Loughlin will describe the GCAM-USA integrated assessment model and how that model is being improved and integrated into the GLIMPSE decision support system. He will also demonstrate the application of the model to evaluate the emissions and health implications of hypothetical state-level renewable electricity standards. Introduce the GLIMPSE project to state and regional environmental modelers and analysts. Presented as part of the State Energy and Air Quality Group Webinar Series, which is organized by NESCAUM.

  1. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia

    Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.

  2. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE PAGES

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia; ...

    2017-07-27

    Occupant behavior (OB) in buildings is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema), and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and quantification of their impact on building performance.

  3. Neutrino degeneracy and cosmological nucleosynthesis, revisited

    NASA Technical Reports Server (NTRS)

    Olive, K. A.; Schramm, David N.; Thomas, D.; Walker, T. P.

    1991-01-01

    A reexamination of the effects of non-zero degeneracies on Big Bang Nucleosynthesis is made. As previously noted, non-trivial alterations of the standard model conclusions can be induced only if excess lepton numbers L_i comparable to the photon number density n_γ are assumed (where n_γ ≈ 3 × 10^9 n_b). Furthermore, the required lepton number densities L_i n_γ must be different for ν_e than for ν_μ and ν_τ. It is shown that this loophole in the standard model of nucleosynthesis is robust and will not vanish as abundance and reaction rate determinations improve. However, it is also argued that theoretically L_e ≈ L_μ ≈ L_τ ≈ n_b ≪ n_γ, which would preclude this loophole in standard unified models.

  4. Status of the BL2 beam measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Hoogerheide, Shannon Fogwell; BL2 Collaboration

    2017-09-01

    Neutron beta decay is the simplest example of nuclear beta decay and a precise value of the neutron lifetime is important for consistency tests of the Standard Model and Big Bang Nucleosynthesis models. A new measurement of the neutron lifetime, utilizing the beam method, is underway at the National Institute of Standards and Technology Center for Neutron Research with a projected uncertainty of 1 s. A review of the beam method and the technical improvements in this experiment will be presented. The status of the experiment, as well as preliminary measurements, beam characteristics, and early data will be discussed.

  5. New mechanistic insights in the NH 3-SCR reactions at low temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggeri, Maria Pia; Selleri, Tommaso; Nova, Isabella

    2016-05-06

    The present study is focused on the investigation of the low temperature Standard SCR reaction mechanism over Fe- and Cu-promoted zeolites. Different techniques are employed, including in situ DRIFTS, transient reaction analysis and chemical trapping techniques. The results present strong evidence of nitrite formation in the oxidative activation of NO and of their role in SCR reactions. These elements lead to a deeper understanding of the standard SCR chemistry at low temperature and can potentially improve the consistency of mechanistic mathematical models. Furthermore, comprehension of the mechanism on a fundamental level can contribute to the development of improved SCR catalysts.

  6. Direct CP violation in K0→ππ: Standard Model Status.

    PubMed

    Gisbert, Hector; Pich, Antonio

    2018-05-01

    In 1988 the NA31 experiment presented the first evidence of direct CP violation in the K0→ππ decay amplitudes. A clear signal with a 7.2σ statistical significance was later established with the full data samples from the NA31, E731, NA48 and KTeV experiments, confirming that CP violation is associated with a ΔS=1 quark transition, as predicted by the Standard Model. However, the theoretical prediction for the measured ratio ε'/ε has been a subject of strong controversy over the years. Although the underlying physics was already clarified in 2001, the recent release of improved lattice data has revived the theoretical debate. We review the current status, discussing in detail the different ingredients that enter into the calculation of this observable and the reasons why seemingly contradictory predictions were obtained in the past by several groups. An update of the Standard Model prediction is presented and the prospects for future improvements are analysed. Taking into account all known short-distance and long-distance contributions, one obtains Re(ε'/ε) = (15 ± 7) × 10⁻⁴, in good agreement with the experimental measurement. © 2018 IOP Publishing Ltd.

  7. Quality Test of Flexible Flat Cable (FFC) With Short Open Test Using Law Ohm Approach through Embedded Fuzzy Logic Based On Open Source Arduino Data Logger

    NASA Astrophysics Data System (ADS)

    Rohmanu, Ajar; Everhard, Yan

    2017-04-01

    Technological development, especially in electronics, is very fast. One such hardware component is the Flexible Flat Cable (FFC), which serves as the connection medium between the main board and other hardware parts. FFC production includes a process of testing and measuring FFC quality. Currently, testing and measurement are done manually, with an operator observing a Light Emitting Diode (LED), which causes many problems. This study builds a computational FFC quality test using an open-source embedded system. The method is a short-open test measurement based on a 4-wire (Kelvin) Ohm's Law approach, with fuzzy logic as the decision maker for the measurement results, built on an open-source Arduino data logger. The system uses an INA219 current sensor to read the voltage value, from which the FFC resistance value is obtained. To validate the system, black-box testing was performed, along with accuracy and precision testing using the standard deviation method. Testing with three sample models gave standard deviations of 1.921 for the first model, 4.567 for the second, and 6.300 for the third, and Standard Error of the Mean (SEM) values of 0.304, 0.736, and 0.996, respectively. The average measured resistance tolerances against the resistance standard were -3.50% for the first model, 4.45% for the second, and 5.18% for the third, and productivity improved to 118.33%. These test results are expected to improve quality and productivity in the FFC testing process.
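The measurement arithmetic described above reduces to Ohm's Law plus simple statistics. This is a minimal sketch with simulated readings; the real system takes voltage and current from an INA219 sensor over I2C, and the fuzzy-logic decision stage is omitted.

```python
# Sketch: 4-wire (Kelvin) resistance from Ohm's law R = V / I, then standard
# deviation and standard error of the mean (SEM) over repeated readings.
# Readings are simulated stand-ins for INA219 sensor samples.
import math

def resistance(voltage_v, current_a):
    """Ohm's law: R = V / I (4-wire sensing excludes lead resistance)."""
    return voltage_v / current_a

# four simulated (voltage, current) samples of the same FFC conductor
samples = [(0.102, 0.100), (0.099, 0.100), (0.101, 0.100), (0.100, 0.100)]
readings = [resistance(v, i) for v, i in samples]

n = len(readings)
mean = sum(readings) / n
stdev = math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))  # sample SD
sem = stdev / math.sqrt(n)              # standard error of the mean
tolerance_pct = 100.0 * (mean - 1.0)    # deviation from a 1-ohm nominal value
```

The pass/fail decision would then compare `mean` and `tolerance_pct` against the cable's specification, which is where the fuzzy-logic stage fits in.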

  8. The NASA/Industry Design Analysis Methods for Vibrations (DAMVIBS) Program - A government overview. [of rotorcraft technology development using finite element method

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    1992-01-01

An overview is presented of government contributions to the Design Analysis Methods for Vibrations (DAMVIBS) program, which sought to develop finite-element-based analyses of rotorcraft vibrations. NASA initiated the program with a finite-element modeling effort for the CH-47D tandem-rotor helicopter. The DAMVIBS program emphasized four areas: airframe finite-element modeling, difficult-components studies, coupled rotor-airframe vibrations, and airframe structural optimization. Key accomplishments of the program include industrywide standards for modeling metal and composite airframes, improved industrial designs for vibrations, and the identification of critical structural contributors to airframe vibratory responses. The program also demonstrated the value of incorporating secondary modeling details to improve correlation, and its findings provide the basis for an improved finite-element-based dynamics design-analysis capability.

  9. AGARD standard aeroelastic configurations for dynamic response. 1: Wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1988-01-01

    This report contains experimental flutter data for the AGARD 3-D swept tapered standard configuration Wing 445.6, along with related descriptive data of the model properties required for comparative flutter calculations. As part of a cooperative AGARD-SMP program, guided by the Sub-Committee on Aeroelasticity, this standard configuration may serve as a common basis for comparison of calculated and measured aeroelastic behavior. These comparisons will promote a better understanding of the assumptions, approximations and limitations underlying the various aerodynamic methods applied, thus pointing the way to further improvements.

  10. A multiple-time-scale turbulence model based on variable partitioning of turbulent kinetic energy spectrum

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.; Chen, C.-P.

    1988-01-01

    The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.

  11. Specialization in general practice *

    PubMed Central

    Hart, Julian Tudor

    1980-01-01

    Ideas about general practitioner specialism may have been hampered in the past because of the three models of general practitioner specialism — in the hospital service, the fee-earning specialoid or the general practitioner obstetrician — none of which is satisfactory. However, general practitioner specialism can be justified in guaranteeing standards by concentrating groups of patients, accepting responsibility, and planning care. Medico-political changes may be needed to achieve improvement in clinical standards. PMID:7411511

  12. Big bang nucleosynthesis: The standard model and alternatives

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

Big bang nucleosynthesis provides (with the microwave background radiation) one of the two quantitative experimental tests of the big bang cosmological model. This paper reviews the standard homogeneous-isotropic calculation and shows how it fits the light element abundances ranging from He-4 at 24% by mass through H-2 and He-3 at parts in 10(exp 5) down to Li-7 at parts in 10(exp 10). Furthermore, the recent Large Electron-Positron collider (LEP) (and the Stanford Linear Collider (SLC)) results on the number of neutrinos are discussed as a positive laboratory test of the standard scenario. Discussion is presented on the improved observational data as well as the improved neutron lifetime data. Alternate scenarios of decaying matter or of quark-hadron induced inhomogeneities are discussed. It is shown that when these scenarios are made to fit the observed abundances accurately, the resulting conclusions on the baryonic density relative to the critical density, omega(sub b), remain approximately the same as in the standard homogeneous case, thus adding to the robustness of the conclusion that omega(sub b) approximately equals 0.06. This latter point is the driving force behind the need for non-baryonic dark matter (assuming omega(sub total) = 1) and the need for dark baryonic matter, since omega(sub visible) is less than omega(sub b).

  13. Extending the Utility of the Parabolic Approximation in Medical Ultrasound Using Wide-Angle Diffraction Modeling.

    PubMed

    Soneson, Joshua E

    2017-04-01

    Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppich, Gary R.; Williams, Ross W.; Gaffney, Amy M.

Here, age dating of nuclear material can provide insight into source and suspected use in nuclear forensic investigations. We report here a method for the determination of the date of most recent chemical purification for uranium materials using the 235U-231Pa chronometer. Protactinium is separated from uranium and neptunium matrices using anion exchange resin, followed by sorption of Pa to an SiO2 medium. The concentration of 231Pa is measured by isotope dilution mass spectrometry using 233Pa spikes prepared from an aliquot of 237Np and calibrated in-house using the rock standard Table Mountain Latite and the uranium isotopic standard U100. Combined uncertainties of age dates using this method are 1.5 to 3.5%, an improvement over alpha spectrometry measurement methods. Model ages of five uranium standard reference materials are presented; all standards have concordant 235U-231Pa and 234U-230Th model ages.
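The model age in such a chronometer follows from daughter ingrowth after purification. A minimal sketch under the standard simplifying assumptions (complete removal of Pa at time zero, the 235U parent effectively constant on this timescale); the half-life is an approximate literature value, not a figure from the abstract:

```python
import math

PA231_HALF_LIFE_YR = 32_760  # approximate 231Pa half-life in years (literature value)
LAMBDA_PA = math.log(2) / PA231_HALF_LIFE_YR

def model_age_years(activity_ratio):
    """Model age from the measured 231Pa/235U activity ratio.

    Ingrowth from a purified (Pa-free) parent gives
    A(231Pa)/A(235U) = 1 - exp(-lambda_Pa * t), solved here for t."""
    return -math.log(1.0 - activity_ratio) / LAMBDA_PA

# Hypothetical measured activity ratio for an aged uranium material
age = model_age_years(0.0015)
```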

  15. Knowledge management for efficient quantitative analyses during regulatory reviews.

    PubMed

    Krudys, Kevin; Li, Fang; Florian, Jeffry; Tornoe, Christoffer; Chen, Ying; Bhattaram, Atul; Jadhav, Pravin; Neal, Lauren; Wang, Yaning; Gobburu, Joga; Lee, Peter I D

    2011-11-01

    Knowledge management comprises the strategies and methods employed to generate and leverage knowledge within an organization. This report outlines the activities within the Division of Pharmacometrics at the US FDA to effectively manage knowledge with the ultimate goal of improving drug development and advancing public health. The infrastructure required for pharmacometric knowledge management includes provisions for data standards, queryable databases, libraries of modeling tools, archiving of analysis results and reporting templates for effective communication. Two examples of knowledge management systems developed within the Division of Pharmacometrics are used to illustrate these principles. The benefits of sound knowledge management include increased productivity, allowing reviewers to focus on research questions spanning new drug applications, such as improved trial design and biomarker development. The future of knowledge management depends on the collaboration between the FDA and industry to implement data and model standards to enhance sharing and dissemination of knowledge.

  16. A Flipped Pedagogy for Expert Problem Solving

    NASA Astrophysics Data System (ADS)

    Pritchard, David

The internet provides free learning opportunities for declarative (Wikipedia, YouTube) and procedural (Khan Academy, MOOCs) knowledge, challenging colleges to provide learning at a higher cognitive level. Our ``Modeling Applied to Problem Solving'' pedagogy for Newtonian Mechanics imparts strategic knowledge - how to systematically determine which concepts to apply and why. Declarative and procedural knowledge is learned online before class via an e-text, checkpoint questions, and homework on edX.org (see http://relate.mit.edu/physicscourse); it is organized into five Core Models. Instructors then coach students on simple ``touchstone problems'', novel exercises, and multi-concept problems - meanwhile exercising three of the four C's: communication, collaboration, critical thinking and problem solving. Students showed 1.2 standard deviations improvement on the MIT final exam after three weeks of instruction, a significant positive shift in 7 of the 9 categories of the CLASS, and their grades improved by 0.5 standard deviation in their following physics course (Electricity and Magnetism).

  17. Towards an In-Beam Measurement of the Neutron Lifetime to 1 Second

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan

    2014-03-01

    A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different than those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The dependence of the primordial mass fraction on the neutron lifetime, technical improvements of the in-beam technique, and the path toward improving the precision of the new measurement will be discussed.

  18. [Implementation results of emission standards of air pollutants for thermal power plants: a numerical simulation].

    PubMed

    Wang, Zhan-Shan; Pan, Li-Bo

    2014-03-01

An emission inventory of air pollutants from thermal power plants in 2010 was compiled. Based on the inventory, air quality under prediction scenarios implementing the 2003-version emission standard and the new emission standard was simulated using Models-3/CMAQ. The concentrations of NO2, SO2, and PM2.5, and the deposition of nitrogen and sulfur in 2015 and 2020 were predicted to investigate the regional air quality improvement under the new emission standard. The results showed that the new emission standard could effectively improve air quality in China. Compared with the implementation results of the 2003-version emission standard, by 2015 and 2020 the area with NO2 concentrations exceeding the standard would be reduced by 53.9% and 55.2%, the area with SO2 concentrations exceeding the standard by 40.0%, the area with nitrogen deposition above 1.0 t·km⁻² by 75.4% and 77.9%, and the area with sulfur deposition above 1.6 t·km⁻² by 37.1% and 34.3%, respectively.

  19. The Discrepancy Evaluation Model: A Systematic Approach for the Evaluation of Career Planning and Placement Programs.

    ERIC Educational Resources Information Center

    Buttram, Joan L.; Covert, Robert W.

    The Discrepancy Evaluation Model (DEM), developed in 1966 by Malcolm Provus, provides information for program assessment and program improvement. Under the DEM, evaluation is defined as the comparison of an actual performance to a desired standard. The DEM embodies five stages of evaluation based upon a program's natural development: program…

  20. What Did the Teachers Think? Teachers' Responses to the Use of Value-Added Modeling as a Tool for Evaluating Teacher Effectiveness

    ERIC Educational Resources Information Center

    Lee, Linda

    2011-01-01

The policy discourse on improving student achievement has shifted from student outcomes to evaluating teacher effectiveness using standardized test scores. A major urban newspaper released a public database that ranked teachers' effectiveness using Value-Added Modeling. Teachers, who are generally marginalized, were given the…

  1. Two Models of Raters in a Structured Oral Examination: Does It Make a Difference?

    ERIC Educational Resources Information Center

    Touchie, Claire; Humphrey-Murto, Susan; Ainslie, Martha; Myers, Kathryn; Wood, Timothy J.

    2010-01-01

    Oral examinations have become more standardized over recent years. Traditionally a small number of raters were used for this type of examination. Past studies suggested that more raters should improve reliability. We compared the results of a multi-station structured oral examination using two different rater models, those based in a station,…

  2. Derivation and Validation of a Risk Standardization Model for Benchmarking Hospital Performance for Health-Related Quality of Life Outcomes after Acute Myocardial Infarction

    PubMed Central

    Arnold, Suzanne V.; Masoudi, Frederick A.; Rumsfeld, John S.; Li, Yan; Jones, Philip G.; Spertus, John A.

    2014-01-01

Background Before outcomes-based measures of quality can be used to compare and improve care, they must be risk-standardized to account for variations in patient characteristics. Despite the importance of health-related quality of life (HRQL) outcomes among patients with acute myocardial infarction (AMI), no risk-standardized models have been developed. Methods and Results We assessed disease-specific HRQL using the Seattle Angina Questionnaire at baseline and 1 year later in 2693 unselected AMI patients from 24 hospitals enrolled in the TRIUMPH registry. Using 57 candidate sociodemographic, economic, and clinical variables present on admission, we developed a parsimonious, hierarchical linear regression model to predict HRQL. Eleven variables were independently associated with poor HRQL after AMI, including younger age, prior CABG, depressive symptoms, and financial difficulties (R2=20%). The model demonstrated excellent internal calibration and reasonable calibration in an independent sample of 1890 AMI patients in a separate registry, although it slightly over-predicted HRQL scores in the higher deciles. Among the 24 TRIUMPH hospitals, 1-year unadjusted HRQL scores ranged from 67–89. After risk-standardization, the variability of HRQL scores narrowed substantially (range=79–83), and the hospital performance group (bottom 20%/middle 60%/top 20%) changed for 14 of the 24 hospitals (58% reclassification with risk-standardization). Conclusions In this predictive model for HRQL after AMI, we identified risk factors, including economic and psychological characteristics, associated with HRQL outcomes. Adjusting for these factors substantially altered the rankings of hospitals compared with unadjusted comparisons. Using this model to compare risk-standardized HRQL outcomes across hospitals may identify processes of care that maximize this important patient-centered outcome. PMID:24163068
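Risk standardization of this kind is often summarized as shifting each hospital's observed mean outcome by its model-expected (case-mix) mean relative to the overall mean. This is an illustrative sketch of that idea only, not the authors' hierarchical regression model, and the numbers are hypothetical:

```python
def risk_standardized_score(observed_mean, expected_mean, overall_mean):
    """Illustrative risk standardization: a hospital whose patients were
    expected (by case mix) to score below the overall mean is credited
    for the difference, and vice versa."""
    return overall_mean + (observed_mean - expected_mean)

# Hypothetical hospital: observed HRQL 80, case-mix expectation 78, overall mean 81
standardized = risk_standardized_score(80.0, 78.0, 81.0)
```

This is why unadjusted rankings (67–89 in the abstract) can compress and reorder substantially after standardization.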

  3. Economic Impact of Gene Expression Profiling in Patients with Early-Stage Breast Cancer in France.

    PubMed

    Katz, Gregory; Romano, Olivier; Foa, Cyril; Vataire, Anne-Lise; Chantelard, Jean-Victor; Hervé, Robert; Barletta, Hugues; Durieux, Axel; Martin, Jean-Pierre; Salmon, Rémy

    2015-01-01

    The heterogeneous nature of breast cancer can make decisions on adjuvant chemotherapy following surgical resection challenging. Oncotype DX is a validated gene expression profiling test that predicts the likelihood of adjuvant chemotherapy benefit in early-stage breast cancer. The aim of this study is to determine the costs of chemotherapy in private hospitals in France, and evaluate the cost-effectiveness of Oncotype DX from national insurance and societal perspectives. A multicenter study was conducted in seven French private hospitals, capturing retrospective data from 106 patient files. Cost estimates were used in conjunction with a published Markov model to assess the cost-effectiveness of using Oncotype DX to inform chemotherapy decision making versus standard care. Sensitivity analyses were performed. The cost of adjuvant chemotherapy in private hospitals was estimated at EUR 8,218 per patient from a national insurance perspective and EUR 10,305 from a societal perspective. Cost-effectiveness analysis indicated that introducing Oncotype DX improved life expectancy (+0.18 years) and quality-adjusted life expectancy (+0.17 QALYs) versus standard care. Oncotype DX was found cost-effective from a national insurance perspective (EUR 2,134 per QALY gained) and cost saving from a societal perspective versus standard care. Inclusion of lost productivity costs in the modeling analysis meant that costs for eligible patients undergoing Oncotype DX testing were on average EUR 602 lower than costs for those receiving standard care. As Oncotype DX was found both cost and life-saving from a societal perspective, the test was considered to be dominant to standard care. However, the delay in coverage has the potential to erode the quality of the French healthcare system, thus depriving patients of technologies that could improve clinical outcomes and allow healthcare professionals to better allocate hospital resources to improve the standard of care for all patients.

  4. Significant and Sustained Reduction in Chemotherapy Errors Through Improvement Science.

    PubMed

    Weiss, Brian D; Scott, Melissa; Demmel, Kathleen; Kotagal, Uma R; Perentesis, John P; Walsh, Kathleen E

    2017-04-01

A majority of children with cancer are now cured with highly complex chemotherapy regimens incorporating multiple drugs and demanding monitoring schedules. The risk for error is high, and errors can occur at any stage in the process, from order generation to pharmacy formulation to bedside drug administration. Our objective was to describe a program to eliminate errors in chemotherapy use among children. To increase reporting of chemotherapy errors, we supplemented the hospital reporting system with a new chemotherapy near-miss reporting system. Following the Model for Improvement, we then implemented several interventions, including a daily chemotherapy huddle, improvements to the preparation and delivery of intravenous therapy, headphones for clinicians ordering chemotherapy, and standards for chemotherapy administration throughout the hospital. Twenty-two months into the project, we saw a centerline shift in our U chart of chemotherapy errors that reached the patient, from a baseline rate of 3.8 to 1.9 per 1,000 doses. This shift has been sustained for > 4 years. In Poisson regression analyses, we found an initial increase in error rates, followed by a significant decline in errors after 16 months of improvement work (P < .001). Guided by the Model for Improvement, our efforts were associated with significant reductions in chemotherapy errors that reached the patient. Key drivers for our success included error vigilance through a huddle, standardization, and minimization of interruptions during ordering.

  5. Generic worklist handler for workflow-enabled products

    NASA Astrophysics Data System (ADS)

    Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas

    1999-07-01

Workflow management (WfM) is an emerging field of medical information technology. It appears to be a promising key technology for modeling, optimizing, and automating processes, for the sake of improved efficiency, reduced costs, and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic worklist handler: a standardized interface between a workflow enactment service and application systems. Application systems with embedded worklist handlers will be called 'Workflow Enabled Application Systems'. In this paper we discuss the functional requirements of worklist handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management, and - later in the paper - the available standards as defined by the Workflow Management Coalition are briefly reviewed.

  6. Improving Collaboration by Standardization Efforts in Systems Biology

    PubMed Central

    Dräger, Andreas; Palsson, Bernhard Ø.

    2014-01-01

Collaborative genome-scale reconstruction endeavors of metabolic networks would not be possible without a common, standardized formal representation of these systems. The ability to precisely define biological building blocks together with their dynamic behavior has even been considered a prerequisite for upcoming synthetic biology approaches. Driven by the requirements of such ambitious research goals, standardization itself has become an active field of research on nearly all levels of granularity in biology. In addition to the originally envisaged exchange of computational models and tool interoperability, new standards have been suggested for an unambiguous graphical display of biological phenomena, to annotate, archive, as well as to rank models, and to describe execution and the outcomes of simulation experiments. The spectrum now even covers the interaction of entire neurons in the brain, three-dimensional motions, and the description of pharmacometric studies. Thereby, the mathematical description of systems and approaches for their (repeated) simulation are clearly separated from each other and also from their graphical representation. Minimum information definitions constitute guidelines and common operation protocols in order to ensure reproducibility of findings and a unified knowledge representation. Central database infrastructures have been established that provide the scientific community with persistent links from model annotations to online resources. A rich variety of open-source software tools thrives for all data formats, often supporting a multitude of programming languages. Regular meetings and workshops of developers and users lead to continuous improvement and ongoing development of these standardization efforts. This article gives a brief overview of the current state of the growing number of operation protocols, mark-up languages, graphical descriptions, and fundamental software support with relevance to systems biology. PMID:25538939

  7. Research on the management and endorsement of nuclear safety standards in the United States and its revelation for China

    NASA Astrophysics Data System (ADS)

    Liu, Ting; Tian, Yu; Yang, Lili; Gao, Siyi; Song, Dahu

    2018-01-01

This paper introduces the American standards system, the responsibilities of the Nuclear Regulatory Commission (NRC), and the NRC's nuclear safety regulations and standards framework; it studies the NRC's standards management and endorsement mode, analyzes the characteristics of NRC standards endorsement management, and points out its disadvantages. Drawing lessons from the NRC's standards management and endorsement model, the paper offers suggestions for the management of China's nuclear and radiation safety standards. The issuance of the “Nuclear Safety Law” plays an important role in China's nuclear and radiation safety supervision, and nuclear and radiation safety regulations and standards are key instruments for implementing the “Nuclear Safety Law”. By drawing on the experience of internationally advanced countries, this paper aims to promote the improvement of the endorsement-based management of China's nuclear and radiation safety standards.

  8. Reducing death on the road: the effects of minimum safety standards, publicized crash tests, seat belts, and alcohol.

    PubMed Central

    Robertson, L S

    1996-01-01

OBJECTIVES. Two phases of attempts to improve passenger car crashworthiness have occurred: minimum safety standards and publicized crash tests. This study evaluated these attempts, as well as changes in seat belt and alcohol use, in terms of their effect on occupant death and fatal crash rates. METHODS. Data on passenger car occupant fatalities and total involvement in fatal crashes, for 1975 through 1991, were obtained from the Fatal Accident Reporting System. Rates per mile were calculated through published sources on vehicle use by vehicle age. Regression estimates of effects of regulation, publicized crash tests, seat belt use and alcohol involvement were obtained. RESULTS. Substantial reductions in fatalities occurred in the vehicle model years from the late 1960s through most of the 1970s, when federal standards were applied. Some additional increments in reduced death rates, attributable to additional improved vehicle crashworthiness, occurred during the period of publicized crash tests. Increased seat belt use and reduced alcohol use also contributed significantly to reduced deaths. CONCLUSIONS. Minimum safety standards, crashworthiness improvements, seat belt use laws, and reduced alcohol use each contributed to a large reduction in passenger car occupant deaths. PMID:8561238

  9. The Application of Satellite-Derived, High-Resolution Land Use/Land Cover Data to Improve Urban Air Quality Model Forecasts

    NASA Technical Reports Server (NTRS)

    Quattrochi, D. A.; Lapenta, W. M.; Crosson, W. L.; Estes, M. G., Jr.; Limaye, A.; Kahn, M.

    2006-01-01

Local and state agencies are responsible for developing state implementation plans to meet National Ambient Air Quality Standards. Numerical models used for this purpose simulate the transport and transformation of criteria pollutants and their precursors. The specification of land use/land cover (LULC) plays an important role in controlling modeled surface meteorology and emissions. NASA researchers have worked with partners and Atlanta stakeholders to incorporate an improved high-resolution LULC dataset for the Atlanta area within their modeling system and to assess meteorological and air quality impacts of Urban Heat Island (UHI) mitigation strategies. The new LULC dataset provides a more accurate representation of land use, has the potential to improve model accuracy, and facilitates prediction of LULC changes. Use of the new LULC dataset for two summertime episodes improved meteorological forecasts, with an existing daytime cold bias of approximately 3 °C reduced by 30%. Model performance for ozone prediction did not show improvement. In addition, LULC changes due to Atlanta area urbanization were predicted through 2030, for which model simulations predict higher urban air temperatures. The incorporation of UHI mitigation strategies partially offset this warming trend. The data and modeling methods used are generally applicable to other U.S. cities.

  10. Reviewing innovative Earth observation solutions for filling science-policy gaps in hydrology

    NASA Astrophysics Data System (ADS)

    Lehmann, Anthony; Giuliani, Gregory; Ray, Nicolas; Rahman, Kazi; Abbaspour, Karim C.; Nativi, Stefano; Craglia, Massimo; Cripe, Douglas; Quevauviller, Philippe; Beniston, Martin

    2014-10-01

Improved data sharing is needed for hydrological modeling and water management that require better integration of data, information and models. Technological advances in Earth observation and Web technologies have allowed the development of Spatial Data Infrastructures (SDIs) for improved data sharing at various scales. International initiatives catalyze data sharing by promoting interoperability standards to maximize the use of data and by supporting easy access to and utilization of geospatial data. A series of recent European projects are contributing to the promotion of innovative Earth observation solutions and the uptake of scientific outcomes in policy. Several success stories involving different hydrologists' communities can be reported around the World. Gaps still exist in hydrological, agricultural, meteorological and climatological data access because of various issues. While many sources of data exists at all scales it remains difficult and time-consuming to assemble hydrological information for most projects. Furthermore, data and sharing formats remain very heterogeneous. Improvements require implementing/endorsing some commonly agreed standards and documenting data with adequate metadata. The brokering approach allows binding heterogeneous resources published by different data providers and adapting them to tools and interfaces commonly used by consumers of these resources. The challenge is to provide decision-makers with reliable information, based on integrated data and tools derived from both Earth observations and scientific models. Successful SDIs rely therefore on various aspects: a shared vision between all participants, necessity to solve a common problem, adequate data policies, incentives, and sufficient resources. New data streams from remote sensing or crowd sourcing are also producing valuable information to improve our understanding of the water cycle, while field sensors are developing rapidly and becoming less costly. More recent data standards are enhancing interoperability between hydrology and other scientific disciplines, while solutions exist to communicate uncertainty of data and models, which is an essential pre-requisite for decision-making. Distributed computing infrastructures can handle complex and large hydrological data and models, while Web Processing Services bring the flexibility to develop and execute simple to complex workflows over the Internet. The need for capacity building at human, infrastructure and institutional levels is also a major driver for reinforcing the commitment to SDI concepts.

  11. Modeling the irradiance and temperature dependence of photovoltaic modules in PVsyst

    DOE PAGES

    Sauer, Kenneth J.; Roessler, Thomas; Hansen, Clifford W.

    2014-11-10

In order to reliably simulate the energy yield of photovoltaic (PV) systems, it is necessary to have an accurate model of how PV modules perform with respect to irradiance and cell temperature. Building on previous work that addresses the irradiance dependence, two approaches to fit the temperature dependence of module power in PVsyst have been developed and are applied here to recent multi-irradiance and multi-temperature data for a standard Yingli Solar PV module type. The results demonstrate that it is possible to match the measured irradiance and temperature dependence of PV modules in PVsyst. As a result, improvements in energy yield prediction using the optimized models relative to the PVsyst standard model are considered significant for decisions about project financing.
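A common single-point form of such an irradiance/temperature power model is sketched below. This is illustrative only: PVsyst's internal one-diode model and the paper's fitted parameters are more detailed, and the nameplate power and temperature coefficient used here are assumed values, not figures from the paper:

```python
def module_power(g, t_cell, p_stc=250.0, gamma=-0.0042):
    """Illustrative PV module power model.

    g      -- in-plane irradiance in W/m^2
    t_cell -- cell temperature in degrees C
    p_stc  -- assumed nameplate power at STC (1000 W/m^2, 25 C)
    gamma  -- assumed power temperature coefficient in 1/C (negative: power
              drops as the cell heats up)
    """
    return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# Hypothetical operating point: 800 W/m^2 at a 45 C cell temperature
p = module_power(800.0, 45.0)
```

Fitting gamma (and the low-light efficiency behavior) to multi-irradiance, multi-temperature flash data is the kind of optimization the abstract refers to.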

  12. Standard Model and New Physics for ε'K/εK

    NASA Astrophysics Data System (ADS)

    Kitahara, Teppei

    2018-05-01

The first result of the lattice simulation and improved perturbative calculations have pointed to a discrepancy between data on ε'K/εK and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πνν̄) from the SM predictions, which can be probed precisely in the near future by the NA62 and KOTO experiments. We present correlations between ε'K/εK and ℬ(K → πνν̄) in two types of NP scenarios: a box-dominated scenario and a Z-penguin-dominated one. It is shown that different correlations are predicted and that future precision measurements of K → πνν̄ can distinguish the two scenarios.

  13. Randomized controlled trial of video self-modeling following speech restructuring treatment for stuttering.

    PubMed

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-08-01

    In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech restructuring treatment. Post treatment, participants were randomly assigned to 2 trial arms: standard maintenance and standard maintenance plus VSM. Participants in the latter arm viewed stutter-free videos of themselves each day for 1 month. The addition of VSM did not improve speech outcomes, as measured by percent syllables stuttered, at either 1 or 6 months postrandomization. However, at the latter assessment, self-rating of worst stuttering severity by the VSM group was 10% better than that of the control group, and satisfaction with speech fluency was 20% better. Quality of life was also better for the VSM group, which was mildly to moderately impaired compared with moderate impairment in the control group. VSM intervention after treatment was associated with improvements in self-reported outcomes. The clinical implications of this finding are discussed.

  14. Data reconstruction can improve abundance index estimation: An example using Taiwanese longline data for Pacific bluefin tuna

    PubMed Central

    Fukuda, Hiromu; Maunder, Mark N.

    2017-01-01

    Catch-per-unit-effort (CPUE) is often the main piece of information used in fisheries stock assessment; however, the catch and effort data that are traditionally compiled from commercial logbooks can be incomplete or unreliable due to many reasons. Pacific bluefin tuna (PBF) is a seasonal target species in the Taiwanese longline fishery. Since 2010, detailed catch information for each PBF has been made available through a catch documentation scheme. However, previously, only market landing data with a low coverage of logbooks were available. Therefore, several nontraditional procedures were performed to reconstruct catch and effort data from many alternative data sources not directly obtained from fishers for 2001–2015: (1) Estimating the catch number from the landing weight for 2001–2003, for which the catch number information was incomplete, based on Monte Carlo simulation; (2) deriving fishing days for 2007–2009 from voyage data recorder data, based on a newly developed algorithm; and (3) deriving fishing days for 2001–2006 from vessel trip information, based on linear relationships between fishing and at-sea days. Subsequently, generalized linear mixed models were developed with the delta-lognormal assumption for standardizing the CPUE calculated from the reconstructed data, and three-stage model evaluation was performed using (1) Akaike and Bayesian information criteria to determine the most favorable variable composition of standardization models, (2) overall R2 via cross-validation to compare fitting performance between area-separated and area-combined standardizations, and (3) system-based testing to explore the consistency of the standardized CPUEs with auxiliary data in the PBF stock assessment model. The last stage of evaluation revealed high consistency among the data, thus demonstrating improvements in data reconstruction for estimating the abundance index, and consequently the stock assessment. PMID:28968434
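The delta-lognormal assumption behind the CPUE standardization can be sketched in miniature: a binomial (delta) part for the probability of a positive catch, times a lognormal part for the magnitude of positive catches. The records below are invented toy values, not Taiwanese longline data:

```python
import numpy as np

# Toy delta-lognormal index for a single year stratum; CPUE records
# are illustrative values, with zeros handled by the delta part.
cpue = np.array([0.0, 0.0, 1.2, 0.8, 0.0, 2.5, 1.7, 0.0, 0.9, 3.1])

p_pos = np.mean(cpue > 0)           # delta (binomial) component
log_pos = np.log(cpue[cpue > 0])    # lognormal component on positives
mu, s2 = log_pos.mean(), log_pos.var(ddof=1)

# Back-transformed index: P(positive) times the lognormal mean exp(mu + s2/2)
index = p_pos * np.exp(mu + s2 / 2.0)
print(round(float(index), 4))
```

In the actual standardization the two components are modeled with covariates (year, area, vessel effects) in generalized linear mixed models; the back-transformation step is the same.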

  15. Development of a Hospital Outcome Measure Intended for Use With Electronic Health Records: 30-Day Risk-standardized Mortality After Acute Myocardial Infarction.

    PubMed

    McNamara, Robert L; Wang, Yongfei; Partovian, Chohreh; Montague, Julia; Mody, Purav; Eddy, Elizabeth; Krumholz, Harlan M; Bernheim, Susannah M

    2015-09-01

    Electronic health records (EHRs) offer the opportunity to transform quality improvement by using clinical data for comparing hospital performance without the burden of chart abstraction. However, current performance measures using EHRs are lacking. With support from the Centers for Medicare & Medicaid Services (CMS), we developed an outcome measure of hospital risk-standardized 30-day mortality rates for patients with acute myocardial infarction for use with EHR data. As no appropriate source of EHR data is currently available, we merged clinical registry data from the Action Registry-Get With The Guidelines with claims data from CMS to develop the risk model (2009 data for development, 2010 data for validation). We selected candidate variables that could be feasibly extracted from current EHRs and do not require changes to standard clinical practice or data collection. We used logistic regression with stepwise selection and bootstrapping simulation for model development. The final risk model included 5 variables available on presentation: age, heart rate, systolic blood pressure, troponin ratio, and creatinine level. The area under the receiver operating characteristic curve was 0.78. Hospital risk-standardized mortality rates ranged from 9.6% to 13.1%, with a median of 10.7%. The odds of mortality for a high-mortality hospital (+1 SD) were 1.37 times those for a low-mortality hospital (-1 SD). This measure represents the first outcome measure endorsed by the National Quality Forum for public reporting of hospital quality based on clinical data in the EHR. By being compatible with current clinical practice and existing EHR systems, this measure is a model for future quality improvement measures.
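The reported ±1 SD hospital comparison is simple arithmetic under an assumed random-intercept logistic model: the odds ratio between hospitals one standard deviation above and below the mean is exp(2τ). The value of τ below is inferred here for illustration, not taken from the paper:

```python
import math

# Sketch of the +/-1 SD hospital-effect comparison for risk-standardized
# mortality, assuming a random-intercept logistic model.
tau = 0.1574                      # assumed SD of hospital log-odds intercepts
odds_ratio = math.exp(2 * tau)    # odds at +1 SD vs odds at -1 SD
print(round(odds_ratio, 2))       # -> 1.37, matching the reported comparison
```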

  16. Contribution of the International Reference Ionosphere to the progress of the ionospheric representation

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter

    2017-04-01

    The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere, and since 2014 it is also recognized as the ISO (International Organization for Standardization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters, combining ground and space measurements. This presentation reviews the steady progress toward an increasingly accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where we are in need of more data to obtain a more solid and continuous data foundation in space and time. We will also take a look at still-existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion, we will provide an outlook on development activities that may result in significant future improvements of the accurate representation of the ionosphere in the IRI model.

  17. Improving rational thermal comfort prediction by using subpopulation characteristics: A case study at Hermitage Amsterdam

    PubMed Central

    Kramer, Rick; Schellen, Lisje; Schellen, Henk; Kingma, Boris

    2017-01-01

    This study aims to improve the prediction accuracy of the rational standard thermal comfort model, known as the Predicted Mean Vote (PMV) model, by (1) calibrating one of its input variables “metabolic rate,” and (2) extending it by explicitly incorporating the variable running mean outdoor temperature (RMOT) that relates to adaptive thermal comfort. The analysis was performed with survey data (n = 1121) and climate measurements of the indoor and outdoor environment from a one year-long case study undertaken at Hermitage Amsterdam museum in the Netherlands. The PMVs were calculated for 35 survey days using (1) an a priori assumed metabolic rate, (2) a calibrated metabolic rate found by fitting the PMVs to the thermal sensation votes (TSVs) of each respondent using an optimization routine, and (3) extending the PMV model by including the RMOT. The results show that the calibrated metabolic rate is estimated to be 1.5 Met for this case study that was predominantly visited by elderly females. However, significant differences in metabolic rates have been revealed between adults and elderly showing the importance of differentiating between subpopulations. Hence, the standard tabular values, which only differentiate between various activities, may be oversimplified for many cases. Moreover, extending the PMV model with the RMOT substantially improves the thermal sensation prediction, but thermal sensation toward extreme cool and warm sensations remains partly underestimated. PMID:28680934

  18. Improving rational thermal comfort prediction by using subpopulation characteristics: A case study at Hermitage Amsterdam.

    PubMed

    Kramer, Rick; Schellen, Lisje; Schellen, Henk; Kingma, Boris

    2017-01-01

    This study aims to improve the prediction accuracy of the rational standard thermal comfort model, known as the Predicted Mean Vote (PMV) model, by (1) calibrating one of its input variables "metabolic rate," and (2) extending it by explicitly incorporating the variable running mean outdoor temperature (RMOT) that relates to adaptive thermal comfort. The analysis was performed with survey data (n = 1121) and climate measurements of the indoor and outdoor environment from a one year-long case study undertaken at Hermitage Amsterdam museum in the Netherlands. The PMVs were calculated for 35 survey days using (1) an a priori assumed metabolic rate, (2) a calibrated metabolic rate found by fitting the PMVs to the thermal sensation votes (TSVs) of each respondent using an optimization routine, and (3) extending the PMV model by including the RMOT. The results show that the calibrated metabolic rate is estimated to be 1.5 Met for this case study that was predominantly visited by elderly females. However, significant differences in metabolic rates have been revealed between adults and elderly showing the importance of differentiating between subpopulations. Hence, the standard tabular values, which only differentiate between various activities, may be oversimplified for many cases. Moreover, extending the PMV model with the RMOT substantially improves the thermal sensation prediction, but thermal sensation toward extreme cool and warm sensations remains partly underestimated.
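The running mean outdoor temperature used to extend the PMV model is conventionally an exponentially weighted mean of past daily outdoor temperatures (the EN 15251-style form with α = 0.8). A sketch with made-up daily means; the paper's exact weighting scheme is assumed, not quoted:

```python
# Running mean outdoor temperature (RMOT) as used in adaptive comfort
# standards (EN 15251-style exponentially weighted mean, alpha = 0.8).
def running_mean_outdoor(daily_means, alpha=0.8):
    """Exponentially weighted mean of past daily outdoor temperatures.
    daily_means[0] is yesterday, daily_means[1] the day before, etc."""
    weights = [(1 - alpha) * alpha ** i for i in range(len(daily_means))]
    return sum(w * t for w, t in zip(weights, daily_means))

# Toy sequence of past daily mean temperatures (deg C), most recent first
rmot = running_mean_outdoor([12.0, 11.0, 10.0, 9.0, 8.0, 8.0, 7.0])
print(round(rmot, 2))
```

Truncating the series after about a week leaves the weights slightly under-normalized, which is usually acceptable because the weights decay geometrically.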

  19. Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus.

    PubMed

    Cohen, Mark E; Ko, Clifford Y; Bilimoria, Karl Y; Zhou, Lynn; Huffman, Kristopher; Wang, Xue; Liu, Yaoming; Kraemer, Kari; Meng, Xiangju; Merkow, Ryan; Chow, Warren; Matel, Brian; Richards, Karen; Hart, Amy J; Dimick, Justin B; Hall, Bruce L

    2013-08-01

    The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) collects detailed clinical data from participating hospitals using standardized data definitions, analyzes these data, and provides participating hospitals with reports that permit risk-adjusted comparisons with a surgical quality standard. Since its inception, the ACS NSQIP has worked to refine surgical outcomes measurements and enhance statistical methods to improve the reliability and validity of this hospital profiling. From an original focus on controlling for between-hospital differences in patient risk factors with logistic regression, ACS NSQIP has added a variable to better adjust for the complexity and risk profile of surgical procedures (procedure mix adjustment) and stabilized estimates derived from small samples by using a hierarchical model with shrinkage adjustment. New models have been developed focusing on specific surgical procedures (eg, "Procedure Targeted" models), which provide opportunities to incorporate indication and other procedure-specific variables and outcomes to improve risk adjustment. In addition, comparative benchmark reports given to participating hospitals have been expanded considerably to allow more detailed evaluations of performance. Finally, procedures have been developed to estimate surgical risk for individual patients. This article describes the development of, and justification for, these new statistical methods and reporting strategies in ACS NSQIP. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
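The shrinkage adjustment ACS NSQIP added can be illustrated with a reliability-weighted estimate: a low-volume hospital's observed rate is pulled toward the overall mean in proportion to how little information it carries. The constant k and the rates below are illustrative assumptions, not NSQIP values:

```python
# Empirical-Bayes-style "shrinkage adjustment" sketch: small samples are
# shrunk toward the grand mean; large samples keep their observed rate.
def shrunk_rate(obs_rate, n_cases, grand_mean, k=50):
    """Reliability-weighted estimate; k is an assumed prior-strength constant."""
    w = n_cases / (n_cases + k)
    return w * obs_rate + (1 - w) * grand_mean

small = shrunk_rate(0.20, 10, 0.10)    # 10 cases: heavy shrinkage
large = shrunk_rate(0.20, 1000, 0.10)  # 1000 cases: stays near 0.20
print(round(small, 4), round(large, 4))
```

NSQIP's actual adjustment comes from a hierarchical logistic model rather than this closed-form weighting, but the stabilizing effect on small-sample estimates is the same in spirit.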

  20. New strategy to improve quality control of Montenegro skin test at the production level.

    PubMed

    Guedes, Deborah Carbonera; Minozzo, João Carlos; Pasquali, Aline Kuhn Sbruzzi; Faulds, Craig; Soccol, Carlos Ricardo; Thomaz-Soccol, Vanete

    2017-01-01

    The production of the Montenegro antigen for skin test poses difficulties regarding quality control. Here, we propose that certain animal models reproducing a similar immune response to humans may be used in the quality control of Montenegro antigen production. Fifteen Cavia porcellus (guinea pigs) were immunized with Leishmania amazonensis or Leishmania braziliensis, and, after 30 days, they were skin tested with standard Montenegro antigen. To validate C. porcellus as an animal model for skin tests, eighteen Mesocricetus auratus (hamsters) were infected with L. amazonensis or L. braziliensis, and, after 45 days, they were skin tested with standard Montenegro antigen. Cavia porcellus immunized with L. amazonensis or L. braziliensis, and hamsters infected with the same species, presented induration reactions when skin tested with standard Montenegro antigen 48-72 h after the test. The comparison between immunization methods and immune response from the two animal species validated C. porcellus as a good model for Montenegro skin test, and the model showed strong potential as an in vivo model in the quality control of the production of Montenegro antigen.

  1. The new g-2 experiment at Fermilab

    NASA Astrophysics Data System (ADS)

    Anastasi, A.

    2017-04-01

    There is a long standing discrepancy between the Standard Model prediction for the muon g-2 and the value measured by the Brookhaven E821 Experiment. At present the discrepancy stands at about three standard deviations, with an uncertainty dominated by the theoretical error. Two new proposals - at Fermilab and J-PARC - plan to improve the experimental uncertainty by a factor of 4, and it is expected that there will be a significant reduction in the uncertainty of the Standard Model prediction. I will review the status of the planned experiment at Fermilab, E989, which will analyse 21 times more muons than the BNL experiment and discuss how the systematic uncertainty will be reduced by a factor of 3 such that a precision of 0.14 ppm can be achieved.

  2. Standard deviation analysis of the mastoid fossa temperature differential reading: a potential model for objective chiropractic assessment.

    PubMed

    Hart, John

    2011-03-01

    This study describes a model for statistically analyzing follow-up numeric-based chiropractic spinal assessments for an individual patient based on his or her own baseline. Ten mastoid fossa temperature differential readings (MFTD) obtained from a chiropractic patient were used in the study. The first eight readings served as baseline and were compared to post-adjustment readings. One of the two post-adjustment MFTD readings fell outside two standard deviations of the baseline mean and therefore theoretically represents improvement according to pattern analysis theory. This study showed how standard deviation analysis may be used to identify future outliers for an individual patient based on his or her own baseline data. Copyright © 2011 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
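The pattern-analysis computation described above is a two-standard-deviation check of follow-up readings against the patient's own baseline. A sketch with illustrative MFTD readings (not the study's data):

```python
import statistics

# Flag follow-up readings falling outside two standard deviations of the
# patient's own baseline mean (illustrative numbers, arbitrary units).
baseline = [1.2, 1.4, 1.3, 1.5, 1.1, 1.3, 1.4, 1.2]   # eight baseline readings
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
lo, hi = mean - 2 * sd, mean + 2 * sd

post = [1.35, 0.70]   # two post-adjustment readings (illustrative)
outliers = [x for x in post if not (lo <= x <= hi)]
print(outliers)       # readings outside the patient's baseline band
```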

  3. A comparative survey of current and proposed tropospheric refraction-delay models for DSN radio metric data calibration

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Sovers, O. J.

    1994-01-01

    The standard tropospheric calibration model implemented in the operational Orbit Determination Program is the seasonal model developed by C. C. Chao in the early 1970s. The seasonal model has seen only slight modification since its release, particularly in the format and content of the zenith delay calibrations. Chao's most recent standard mapping tables, which are used to project the zenith delay calibrations along the station-to-spacecraft line of sight, have not been modified since they were first published in late 1972. This report focuses principally on proposed upgrades to the zenith delay mapping process, although modeling improvements to the zenith delay calibration process are also discussed. A number of candidate approximation models for the tropospheric mapping are evaluated, including the semi-analytic mapping function of Lanyi, and the semi-empirical mapping functions of Davis et al. ('CfA-2.2'), of Ifadis (global solution model), of Herring ('MTT'), and of Niell ('NMF'). All of the candidate mapping functions are superior to the Chao standard mapping tables and approximation formulas when evaluated against the current Deep Space Network Mark 3 intercontinental very long baseline interferometry database.
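Mapping functions of the kind compared in this survey project a zenith delay onto the station-to-spacecraft line of sight. A sketch using the continued-fraction form associated with Chao; the dry-component constants below are assumed for illustration:

```python
import math

# Chao-style continued-fraction mapping function. A and B are assumed
# dry-component constants; newer functions (Lanyi, CfA-2.2, MTT, NMF)
# refine this same elevation-dependent form.
def chao_dry_map(elev_deg, A=0.00143, B=0.0445):
    """Factor mapping a zenith delay to the slant path at elevation elev_deg."""
    e = math.radians(elev_deg)
    return 1.0 / (math.sin(e) + A / (math.tan(e) + B))

# A 2 m dry zenith delay mapped to a 20-degree elevation line of sight:
slant = 2.0 * chao_dry_map(20.0)
print(round(slant, 3))
```

At zenith the mapping factor approaches 1, so the slant delay reduces to the zenith delay, as it should.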

  4. Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads

    NASA Astrophysics Data System (ADS)

    Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank

    2017-09-01

    In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling the general strategy to characterize the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads that are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. It is the main result that in this way full-wave simulation results can be reproduced. Furthermore it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between HPEM field excitation and nonlinearly loaded loop antenna.
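The nonlinear circuit part can be sketched by solving for the operating point of a Schottky diode behind a source resistance, which is what a SPICE simulator does at each time step. The ideal Shockley model and all element values below are assumptions for illustration:

```python
import math

# Toy nonlinear-load operating point: a Schottky diode (ideal Shockley
# model) driven through a source resistance, solved by bisection.
IS, N, VT = 1e-8, 1.05, 0.02585   # assumed saturation current, ideality, kT/q
VS, RS = 1.0, 100.0               # assumed source voltage (V) and resistance (ohm)

def f(v):  # KCL residual at the diode node: resistor current minus diode current
    return (VS - v) / RS - IS * (math.exp(v / (N * VT)) - 1.0)

lo, hi = 0.0, VS                  # residual is monotone decreasing on [0, VS]
for _ in range(60):               # bisection to machine-level precision
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
v_d = 0.5 * (lo + hi)
print(round(v_d, 3))              # diode operating-point voltage (V)
```

For transient HPEM excitation, the same nodal equation is solved repeatedly with the source voltage replaced by the induced waveform from the linear antenna part.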

  5. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component for these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.
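The core of the two-step linear regression technique, estimating parameters and their standard errors from oscillatory data, can be sketched on synthetic data. The roll-moment model and signal below are assumptions for illustration, not the study's aerodynamic model:

```python
import numpy as np

# Least-squares parameter estimates and standard errors for an assumed
# linear roll-moment model C = C0 + C_p * p(t), on synthetic data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
p = np.sin(2.0 * np.pi * 0.5 * t)                    # roll-rate-like signal
y = 0.02 + 0.15 * p + rng.normal(0.0, 0.01, t.size)  # noisy "measurements"

X = np.column_stack([np.ones_like(t), p])            # regressor matrix
theta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = t.size - X.shape[1]
s2 = res[0] / dof                                    # residual variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))   # parameter standard errors
print(np.round(theta, 3), np.round(se, 4))
```

Comparing estimates together with their standard errors, as the study does, shows whether wind tunnel and CFD-derived parameters differ by more than their estimation uncertainty.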

  6. Ferrets as Models for Influenza Virus Transmission Studies and Pandemic Risk Assessments

    PubMed Central

    Barclay, Wendy; Barr, Ian; Fouchier, Ron A.M.; Matsuyama, Ryota; Nishiura, Hiroshi; Peiris, Malik; Russell, Charles J.; Subbarao, Kanta; Zhu, Huachen

    2018-01-01

    The ferret transmission model is extensively used to assess the pandemic potential of emerging influenza viruses, yet experimental conditions and reported results vary among laboratories. Such variation can be a critical consideration when contextualizing results from independent risk-assessment studies of novel and emerging influenza viruses. To streamline interpretation of data generated in different laboratories, we provide a consensus on experimental parameters that define risk-assessment experiments of influenza virus transmissibility, including disclosure of variables known or suspected to contribute to experimental variability in this model, and advocate adoption of more standardized practices. We also discuss current limitations of the ferret transmission model and highlight continued refinements and advances to this model ongoing in laboratories. Understanding, disclosing, and standardizing the critical parameters of ferret transmission studies will improve the comparability and reproducibility of pandemic influenza risk assessment and increase the statistical power and, perhaps, accuracy of this model. PMID:29774862

  7. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on the linear mixed-effect model theory and methods, the models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, the correlation structures including compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) improved the fitting precision of the branch diameter and length mixed-effect models significantly, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe the heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. CF1 function improved the fitting effect of the branch angle mixed model significantly, whereas CF2 function improved the fitting effect of the branch diameter and length mixed models significantly. Model validation confirmed that the mixed-effect model could improve the precision of prediction, as compared to the traditional regression model, for the branch size prediction of Pinus koraiensis plantation.
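The AR(1) structure that improved the branch diameter and length models assumes within-tree errors whose correlation decays geometrically with separation: corr(e_i, e_j) = ρ^|i−j|. A minimal sketch of the implied correlation matrix:

```python
import numpy as np

# AR(1) within-subject error-correlation matrix: corr(e_i, e_j) = rho**|i-j|.
def ar1_corr(n, rho):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R = ar1_corr(4, 0.5)   # rho = 0.5 is illustrative, not a fitted value
print(R)
```

Software such as the SAS MIXED module builds exactly this kind of matrix into the residual covariance when an AR(1) `REPEATED` structure is specified.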

  8. Consumer Vehicle Choice Model Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Changzheng; Greene, David L

    In response to the Fuel Economy and Greenhouse Gas (GHG) emissions standards, automobile manufacturers will need to adopt new technologies to improve the fuel economy of their vehicles and to reduce the overall GHG emissions of their fleets. The U.S. Environmental Protection Agency (EPA) has developed the Optimization Model for reducing GHGs from Automobiles (OMEGA) to estimate the costs and benefits of meeting GHG emission standards through different technology packages. However, the model does not simulate the impact that increased technology costs will have on vehicle sales or on consumer surplus. As the model documentation states, “While OMEGA incorporates functions which generally minimize the cost of meeting a specified carbon dioxide (CO2) target, it is not an economic simulation model which adjusts vehicle sales in response to the cost of the technology added to each vehicle.” Changes in the mix of vehicles sold, caused by the costs and benefits of added fuel economy technologies, could make it easier or more difficult for manufacturers to meet fuel economy and emissions standards, and impacts on consumer surplus could raise the costs or augment the benefits of the standards. Because the OMEGA model does not presently estimate such impacts, the EPA is investigating the feasibility of developing an adjunct to the OMEGA model to make such estimates. This project is an effort to develop and test a candidate model. The project statement of work spells out the key functional requirements for the new model.

  9. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    NASA Astrophysics Data System (ADS)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.

  10. Statistical Evaluation of Time Series Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.

    1973-01-01

    The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
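A minimal periodogram of the kind such a spectrum-analysis program produces, shown here on a synthetic tone (illustrative, unrelated to the NASA program's data):

```python
import numpy as np

# FFT-based spectrum estimate of a synthetic 5 Hz tone sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10.0, 1.0 / fs)          # 10 s record, 1000 samples
x = np.sin(2.0 * np.pi * 5.0 * t)

X = np.fft.rfft(x)                        # one-sided spectrum
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
peak_hz = freqs[np.argmax(np.abs(X))]     # dominant spectral line
print(peak_hz)
```

Monte Carlo evaluation of such an estimator, as in the report, repeats this on many noisy realizations and compares the spread of the estimates across methods.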

  11. Head restraints can protect our necks

    DOT National Transportation Integrated Search

    2001-10-06

    The Insurance Institute for Highway Safety rated the head restraints of more than 200 passenger vehicle models and concluded that industry standards are improving with more than half of the vehicles rated good or acceptable in the prevention of whipl...

  12. CERT Resilience Management Model, Version 1.0

    DTIC Science & Technology

    2010-05-01

    practice such as ISO 27000, COBIT, or ITIL. If you are a member of an established process improvement community, particularly one centered on CMMI... Systems Audit and Control Association; ISO, International Organization for Standardization; ISSA, Information Systems Security Association; IT

  13. Doctors or technicians: assessing quality of medical education

    PubMed Central

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products. PMID:23745059

  14. Doctors or technicians: assessing quality of medical education.

    PubMed

    Hasan, Tayyab

    2010-01-01

    Medical education institutions usually adapt industrial quality management models that measure the quality of the process of a program but not the quality of the product. The purpose of this paper is to analyze the impact of industrial quality management models on medical education and students, and to highlight the importance of introducing a proper educational quality management model. Industrial quality management models can measure the training component in terms of competencies, but they lack the educational component measurement. These models use performance indicators to assess their process improvement efforts. Researchers suggest that the performance indicators used in educational institutions may only measure their fiscal efficiency without measuring the quality of the educational experience of the students. In most of the institutions, where industrial models are used for quality assurance, students are considered as customers and are provided with the maximum services and facilities possible. Institutions are required to fulfill a list of recommendations from the quality control agencies in order to enhance student satisfaction and to guarantee standard services. Quality of medical education should be assessed by measuring the impact of the educational program and quality improvement procedures in terms of knowledge base development, behavioral change, and patient care. Industrial quality models may focus on academic support services and processes, but educational quality models should be introduced in parallel to focus on educational standards and products.

  15. Review of TRMM/GPM Rainfall Algorithm Validation

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2004-01-01

    A review is presented concerning current progress on evaluation and validation of standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged / monthly scales with precisions of approximately 25% for full resolution / instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.

  16. A Search for the Standard Model Higgs Boson Produced in Association with a $W$ Boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Martin Johannes

    2011-05-01

    We present a search for a standard model Higgs boson produced in association with a W boson using data collected with the CDF II detector from pp̄ collisions at √s = 1.96 TeV. The search is performed in the WH → ℓνbb̄ channel. The two quarks usually fragment into two jets, but sometimes a third jet can be produced via gluon radiation, so we have increased the standard two-jet sample by including events that contain three jets. We reconstruct the Higgs boson using two or three jets depending on the kinematics of the event. We find an improvement in our search sensitivity using the larger sample together with this multijet reconstruction technique. Our data show no evidence of a Higgs boson, so we set 95% confidence level upper limits on the WH production rate. We set limits between 3.36 and 28.7 times the standard model prediction for Higgs boson masses ranging from 100 to 150 GeV/c².

  17. Elastic and inelastic scattering of neutrons on 238U nucleus

    NASA Astrophysics Data System (ADS)

    Capote, R.; Trkov, A.; Sin, M.; Herman, M. W.; Soukhovitskiĩ, E. Sh.

    2014-04-01

    Advanced modelling of neutron induced reactions on the 238U nucleus is aimed at improving our knowledge of neutron scattering. Capture and fission channels are well constrained by available experimental data and the neutron standard evaluation. The focus of this contribution is on elastic and inelastic scattering cross sections. The employed nuclear reaction model includes: (i) a new rotational-vibrational dispersive optical model potential coupling the low-lying collective bands of vibrational character observed in even-even actinides; (ii) the Engelbrecht-Weidenmüller transformation, allowing for inclusion of compound-direct interference effects; and (iii) a multi-humped fission barrier with absorption in the secondary well, described within the optical model for fission. The impact of the advanced modelling on elastic and inelastic scattering cross sections, including angular distributions and emission spectra, is assessed both by comparison with selected microscopic experimental data and by integral criticality benchmarks including measured reaction rates (e.g. JEMIMA, FLAPTOP and BIG TEN). Benchmark calculations provided feedback to improve the reaction modelling. Improvement of existing libraries will be discussed.

  18. Be All That We Can Be: Lessons from the Military for Improving Our Nation's Child Care System.

    ERIC Educational Resources Information Center

    Campbell, Nancy Duff; Appelbaum, Judith C.; Martinson, Karin; Martin, Emily

    In response to increasing demands for military child care and lack of comprehensive care standards, the Military Child Care Act of 1989 (MCCA) mandated improvements in military child care. Today, the Department of Defense runs a model child care system serving over 200,000 children daily at over 300 locations worldwide. Noting that most of the…

  19. Accurate diode behavioral model with reverse recovery

    NASA Astrophysics Data System (ADS)

    Banáš, Stanislav; Divín, Jan; Dobeš, Josef; Paňko, Václav

    2018-01-01

    This paper deals with a comprehensive behavioral model of the p-n junction diode containing the reverse recovery effect, applicable to all standard SPICE simulators supporting the Verilog-A language. The model has been successfully used in several production designs, which require its full complexity, robustness, and a set of tuning parameters comparable with the standard compact SPICE diode model. Like the standard compact model, it is scalable with area and temperature and can be used as a stand-alone diode or as a part of a more complex device macro-model, e.g. LDMOS, JFET, bipolar transistor. The paper briefly presents the state of the art, followed by a description of the model development and the adopted solutions. During precise model verification, some solutions were found to be non-robust or poorly converging and were replaced by more robust ones, as demonstrated in the paper. As validation of the model, measurement results from different technologies and different devices are compared with simulations using the new behavioral model. The comparison of model validation in the time and frequency domains demonstrates that the implemented reverse recovery effect, with correctly extracted parameters, improves the model simulation results not only in switching from the ON to the OFF state, which is often published, but also in the impedance/admittance frequency dependence in the GHz range. Finally, the model parameter extraction and a comparison with SPICE compact models containing the reverse recovery effect are presented.
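The reverse recovery effect at the heart of the model can be illustrated with the textbook charge-control picture (a simplified sketch, not the paper's Verilog-A implementation): the stored charge Q relaxes with a carrier lifetime tau while the external circuit forces the diode current.

```python
def reverse_recovery_time(i_forward, i_reverse, tau, dt=1e-9, t_max=1e-4):
    """Euler-integrate the charge-control equation dQ/dt = i(t) - Q/tau
    after the forced diode current steps from +i_forward to -i_reverse at
    t = 0, and return the time at which the stored charge Q is depleted."""
    q = tau * i_forward              # steady-state stored charge before the step
    t = 0.0
    while q > 0.0 and t < t_max:
        q += dt * (-i_reverse - q / tau)
        t += dt
    return t

# Charge control predicts a storage time of tau * ln(1 + IF / IR);
# for IF = IR this is tau * ln 2, roughly 0.69 * tau.
t_s = reverse_recovery_time(i_forward=1.0, i_reverse=1.0, tau=1e-6)
```

Increasing the reverse current depletes the stored charge faster, which is why reverse recovery depends on the switching conditions as well as on the extracted lifetime.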

  20. Analysis of the influence of passenger vehicles front-end design on pedestrian lower extremity injuries by means of the LLMS model.

    PubMed

    Scattina, Alessandro; Mo, Fuhao; Masson, Catherine; Avalle, Massimiliano; Arnoux, Pierre Jean

    2018-01-30

    This work aims at investigating the influence of some front-end design parameters of a passenger vehicle on the behavior of, and damage occurring in, the human lower limbs when impacted in an accident. The analysis is carried out by means of finite element analysis using a generic car model for the vehicle and the lower limbs model for safety (LLMS) for the purpose of pedestrian safety. Considering the pedestrian standardized impact procedure (as in the 2003/102/EC Directive), a parametric analysis, through a design of experiments plan, was performed. Various material properties, the bumper thickness, the positions of the higher and lower bumper beams, and the position of the pedestrian were varied in order to identify how they influence injury occurrence. The injury prediction was evaluated from the knee lateral flexion, ligament elongation, and state of stress in the bone structure. The results highlighted that the offset between the higher and lower bumper beams is the most influential parameter affecting the knee ligament response. The influence is smaller or absent for the other responses and the other considered parameters. The stiffness characteristics of the bumper are, instead, more notable on the tibia. Even if an optimal value of the variables could not be identified, trends were detected, with the potential of indicating strategies for improvement. The behavior of a vehicle front end in an impact against a pedestrian can thus be improved by optimizing its design. In this work, each parameter was changed independently one at a time; in future works, the interaction between the design parameters could also be investigated. Moreover, a similar parametric analysis can be carried out using a standard mechanical legform model in order to understand potential diversities or correlations between standard tools and human models.
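The one-at-a-time parametric plan described above can be sketched as follows (the factor names and levels are invented placeholders, not the study's actual design variables):

```python
def one_at_a_time(baseline, levels):
    """One-factor-at-a-time plan: start from the baseline configuration
    and vary a single factor per run, keeping all others at baseline."""
    runs = [dict(baseline)]
    for name, values in levels.items():
        for value in values:
            if value != baseline[name]:
                run = dict(baseline)
                run[name] = value
                runs.append(run)
    return runs

# Hypothetical factors for illustration only:
baseline = {"bumper_thickness_mm": 3.0, "beam_offset_mm": 0.0, "bumper_material": "PP"}
levels = {
    "bumper_thickness_mm": [2.0, 3.0, 4.0],
    "beam_offset_mm": [0.0, 50.0, 100.0],
    "bumper_material": ["PP", "foam"],
}
plan = one_at_a_time(baseline, levels)   # baseline + 5 single-factor variations
```

Such a plan isolates each factor's main effect but, as the authors note, cannot reveal interactions between factors; a factorial design would be needed for that.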

  1. Whole Atmosphere Community Climate Model With Lower Ionospheric Chemistry: Improved Modeling of Nitric Acid and Active Chlorine During Energetic Particle Precipitation

    NASA Astrophysics Data System (ADS)

    Verronen, P. T.; Andersson, M. E.; Marsh, D. R.; Kovacs, T.; Plane, J. M. C.; Päivärinta, S. M.

    2016-12-01

    Energetic particle precipitation (EPP) and ion chemistry affect the neutral composition of the polar middle atmosphere. For example, production of odd nitrogen and odd hydrogen during EPP events can decrease ozone by tens of percent. However, the standard ion chemistry parameterizations used in atmospheric models neglect the effects on some important species, such as nitric acid. We present WACCM-D, a variant of the Whole Atmosphere Community Climate Model, which includes a set of lower ionosphere (D-region) chemistry: 307 reactions of 20 positive ions and 21 negative ions. Compared to the Sodankylä Ion and Neutral Chemistry (SIC), a state-of-the-art 1-D model of the D-region chemistry, WACCM-D represents the lower ionosphere well. Comparison of ion concentrations between the models shows that the WACCM-D bias is typically within ±10% or less below 70 km. At 70-90 km, when strong altitude gradients in ionization rates and/or ion concentrations exist, the bias can be larger for some ions but is still within tens of percent. We also compare WACCM-D results for the January 2005 solar proton event (SPE) to those from the standard WACCM and observations from the Aura/MLS and SCISAT/ACE-FTS instruments. The results indicate that WACCM-D improves the modeling of HNO3, HCl, ClO, OH, and NOx during the SPE. For example, Northern Hemispheric HNO3 from WACCM-D shows an increase by two orders of magnitude at 40-70 km compared to WACCM, reaching 2.6 ppbv, in agreement with the observations. Based on our results, WACCM-D provides a state-of-the-art global representation of D-region ion chemistry and improves modeling of EPP atmospheric effects considerably.

  2. SafeCare: An Innovative Approach for Improving Quality Through Standards, Benchmarking, and Improvement in Low- and Middle- Income Countries.

    PubMed

    Johnson, Michael C; Schellekens, Onno; Stewart, Jacqui; van Ostenberg, Paul; de Wit, Tobias Rinke; Spieker, Nicole

    2016-08-01

    In low- and middle-income countries (LMICs), patients often have limited access to high-quality care because of a shortage of facilities and human resources, inefficiency of resource allocation, and limited health insurance. SafeCare was developed to provide innovative health care standards; surveyor training; a grading system for quality of care; a quality improvement process that is broken down into achievable, measurable steps to facilitate incremental improvement; and a private sector-supported health financing model. Three organizations-PharmAccess Foundation, Joint Commission International, and the Council for Health Service Accreditation of Southern Africa-launched SafeCare in 2011 as a formal partnership. Five SafeCare levels of improvement are allocated on the basis of an algorithm that incorporates both the overall score and weighted criteria, so that certain high-risk criteria need to be in place before a facility can move to the next SafeCare certification level. A customized quality improvement plan based on the SafeCare assessment results lists the specific, measurable activities that should be undertaken to address gaps in quality found during the initial assessment and to meet the next-level SafeCare certificate. The standards have been implemented in more than 800 primary and secondary facilities by qualified local surveyors, in partnership with various local public and private partner organizations, in six sub-Saharan African countries (Ghana, Kenya, Nigeria, Namibia, Tanzania, and Zambia). Expanding access to care and improving health care quality in LMICs will require a coordinated effort between institutions and other stakeholders. SafeCare's standards and assessment methodology can help build trust between stakeholders and lay the foundation for country-led quality monitoring systems.

  3. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099

    2015-03-15

    A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scales structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second-invariant of the rate of deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate of deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study. 
The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard ones.

  4. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
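The finite-rate response described above is commonly modeled as a first-order relaxation of each Reynolds stress component toward its equilibrium, baseline-model value; a generic lag equation of this type (an illustrative form, not necessarily the paper's exact formulation) is:

```latex
\frac{D R_{ij}}{D t} = a\,\omega \left( R_{ij}^{\mathrm{eq}} - R_{ij} \right)
```

where R_ij^eq is supplied by the baseline 2-equation model, ω is its turbulence frequency scale, and a is a lag constant controlling how quickly the modeled stresses respond to changes in the mean flow.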

  5. Preservation of kinematics with posterior cruciate-, bicruciate- and patient-specific bicruciate-retaining prostheses in total knee arthroplasty by using computational simulation with normal knee model

    PubMed Central

    Koh, Y-G.; Son, J.; Kwon, S-K.; Kim, H-J.; Kang, K-T.

    2017-01-01

    Objectives Preservation of both anterior and posterior cruciate ligaments in total knee arthroplasty (TKA) can lead to near-normal post-operative joint mechanics and improved knee function. We hypothesised that a patient-specific bicruciate-retaining prosthesis preserves near-normal kinematics better than standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining prostheses in TKA. Methods We developed the validated models to evaluate the post-operative kinematics in patient-specific bicruciate-retaining, standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under gait and deep knee bend loading conditions using numerical simulation. Results Tibial posterior translation and internal rotation in patient-specific bicruciate-retaining prostheses preserved near-normal kinematics better than other standard off-the-shelf prostheses under gait loading conditions. Differences from normal kinematics were minimised for femoral rollback and internal-external rotation in patient-specific bicruciate-retaining, followed by standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under deep knee bend loading conditions. Moreover, the standard off-the-shelf posterior cruciate-retaining TKA in this study showed the most abnormal performance in kinematics under gait and deep knee bend loading conditions, whereas patient-specific bicruciate-retaining TKA led to near-normal kinematics. Conclusion This study showed that restoration of the normal geometry of the knee joint in patient-specific bicruciate-retaining TKA and preservation of the anterior cruciate ligament can lead to improvement in kinematics compared with the standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining TKA. Cite this article: Y-G. Koh, J. Son, S-K. Kwon, H-J. Kim, O-R. Kwon, K-T. Kang. 
Preservation of kinematics with posterior cruciate-, bicruciate- and patient-specific bicruciate-retaining prostheses in total knee arthroplasty by using computational simulation with normal knee model. Bone Joint Res 2017;6:557–565. DOI: 10.1302/2046-3758.69.BJR-2016-0250.R1. PMID:28947604

  6. Standardizing data exchange for clinical research protocols and case report forms: An assessment of the suitability of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM).

    PubMed

    Huser, Vojtech; Sastry, Chandan; Breymaier, Matthew; Idriss, Asma; Cimino, James J

    2015-10-01

    Efficient communication of a clinical study protocol and case report forms during all stages of a human clinical study is important for many stakeholders. An electronic and structured study representation format that can be used throughout the whole study life-span can improve such communication and potentially lower total study costs. The most relevant standard for representing clinical study data, applicable to unregulated as well as regulated studies, is the Operational Data Model (ODM), in development since 1999 by the Clinical Data Interchange Standards Consortium (CDISC). ODM's initial objective was the exchange of case report form data, but it is increasingly utilized in other contexts. An ODM extension called Study Design Model, introduced in 2011, provides additional protocol representation elements. Using a case study approach, we evaluated ODM's ability to capture all necessary protocol elements during a complete clinical study lifecycle in the Intramural Research Program of the National Institutes of Health. ODM offers the advantage of a single format for institutions that deal with hundreds or thousands of concurrent clinical studies and maintain a data warehouse for these studies. For each study stage, we present a list of gaps in the ODM standard and identify necessary vendor or institutional extensions that can compensate for such gaps. The current version of ODM (1.3.2) has only partial support for study protocol and study registration data, mainly because these are outside the original development goal. ODM provides comprehensive support for the representation of case report forms (both in the design stage and with patient-level data). Inclusion of the requirements of observational, non-regulated or investigator-initiated studies (outside Food and Drug Administration (FDA) regulation) can further improve future revisions of the standard. Published by Elsevier Inc.
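The ODM hierarchy for captured form data can be illustrated with a minimal sketch (the element nesting follows ODM's ClinicalData section, but the OIDs and values here are invented, and real files carry the CDISC namespace and far more metadata):

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative ODM-like snippet (simplified for demonstration).
ODM_SNIPPET = """\
<ODM FileOID="Example.1" ODMVersion="1.3.2">
  <ClinicalData StudyOID="ST.1" MetaDataVersionOID="MDV.1">
    <SubjectData SubjectKey="001">
      <StudyEventData StudyEventOID="SE.SCREENING">
        <FormData FormOID="F.VITALS">
          <ItemGroupData ItemGroupOID="IG.VS">
            <ItemData ItemOID="IT.HEIGHT" Value="172"/>
            <ItemData ItemOID="IT.WEIGHT" Value="68"/>
          </ItemGroupData>
        </FormData>
      </StudyEventData>
    </SubjectData>
  </ClinicalData>
</ODM>
"""

def item_values(odm_xml):
    """Collect {ItemOID: Value} pairs from every ItemData element."""
    root = ET.fromstring(odm_xml)
    return {e.get("ItemOID"): e.get("Value") for e in root.iter("ItemData")}
```

The Subject/StudyEvent/Form/ItemGroup/Item nesting is what lets a single warehouse schema hold data from many concurrent studies, as the abstract points out.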

  7. Improved particle position accuracy from off-axis holograms using a Chebyshev model.

    PubMed

    Öhman, Johan; Sjödahl, Mikael

    2018-01-01

    Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
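A minimal sketch of the curvature-sign idea (assuming a Chebyshev least-squares fit along the axial coordinate; the authors' actual fitting procedure may differ in detail): fit the phase profile, differentiate twice, and locate the sign change of the curvature.

```python
import numpy as np

def axial_position_from_phase(z, phase, deg=8):
    """Fit a Chebyshev polynomial to the reconstructed phase anomaly along
    the optical axis and locate the plane where the wavefront curvature
    (second derivative) changes sign -- a proxy for the particle plane."""
    fit = np.polynomial.Chebyshev.fit(z, phase, deg)  # well-conditioned basis
    curvature = fit.deriv(2)
    zz = np.linspace(z.min(), z.max(), 4001)
    c = curvature(zz)
    flips = np.nonzero(np.diff(np.sign(c)))[0]
    if flips.size == 0:
        return None                                    # no curvature sign change
    i = flips[0]
    return 0.5 * (zz[i] + zz[i + 1])                   # midpoint of flip interval
```

Because the fit smooths the phase before differentiation, this kind of parametric estimate is less sensitive to noise than finite differences, which is consistent with the improvement the abstract reports.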

  8. User-Centered Design Practices to Redesign a Nursing e-Chart in Line with the Nursing Process.

    PubMed

    Schachner, María B; Recondo, Francisco J; González, Zulma A; Sommer, Janine A; Stanziola, Enrique; Gassino, Fernando D; Simón, Mariana; López, Gastón E; Benítez, Sonia E

    2016-01-01

    Following the user-centered design (UCD) practices carried out at Hospital Italiano of Buenos Aires, the nursing e-chart user interface was redesigned in order to improve the quality of nursing process records, based on an adapted Virginia Henderson theoretical model and patient safety standards, to fulfil Joint Commission accreditation requirements. UCD practices were applied as standardized and recommended for electronic medical record usability evaluation. Implementation of these practices yielded a series of prototypes over 5 iterative cycles of incremental improvements to achieve the usability goals; the resulting interface was used and perceived as satisfactory by general care nurses. Nurses' involvement allowed a balance between their needs and institutional requirements.

  9. BCDForest: a boosting cascade deep forest model towards the classification of cancer subtypes based on gene expression data.

    PubMed

    Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn

    2018-04-11

    The classification of cancer subtypes is of great importance to cancer disease diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks that learns hyper-representations by using cascades of ensemble decision trees. The deep forest model has been shown to achieve performance competitive with, or even better than, deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble diversity challenges when dealing with small sample sizes and high-dimensional biology data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biology datasets, which can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions: First, a multi-class-grained scanning method is proposed to train multiple binary classifiers to encourage diversity in the ensemble. Meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in the cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve the classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms state-of-the-art methods in the application of cancer subtype classification. The multi-class-grained scanning and boosting strategies in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model working on small-scale data. 
Our model provides a useful approach to the classification of cancer subtypes by using deep learning on high-dimensional and small-scale biology data.

  10. Optimising the Active Muon Shield for the SHiP Experiment at CERN

    NASA Astrophysics Data System (ADS)

    Baranov, A.; Burnaev, E.; Derkach, D.; Filatov, A.; Klyuchnikov, N.; Lantwin, O.; Ratnikov, F.; Ustyuzhanin, A.; Zaitsev, A.

    2017-12-01

    The SHiP experiment is designed to search for very weakly interacting particles beyond the Standard Model which are produced in a 400 GeV/c proton beam dump at the CERN SPS. The critical challenge for this experiment is to keep the Standard Model background level negligible. In the beam dump, around 10¹¹ muons will be produced per second. The muon rate in the spectrometer has to be reduced by at least four orders of magnitude to avoid muon-induced backgrounds. It is demonstrated that a new, improved active muon shield may be used to magnetically deflect the muons out of the acceptance of the spectrometer.

  11. AN IMPROVED SOCKING TECHNIQUE FOR MASTER SLAVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, T.C.; Deckard, L.E.; Howe, P.W.

    1962-10-29

    A technique for socking a pair of standard Model 8 master-slave manipulators is described. The technique is primarily concerned with the fabrication of the bellows section, which provides for Z motion as well as wrist movement and rotation. (N.W.R.)

  12. The ethical dimension in published animal research in critical care: the dark side of our moon.

    PubMed

    Huet, Olivier; de Haan, Judy B

    2014-03-13

    The replacement, refinement, and reduction (3Rs) guidelines are the cornerstone of animal welfare practice for medical research. Nowadays, no animal research can be performed without being approved by an animal ethics committee. Therefore, we should expect that any published article would respect and promote the highest standard of animal welfare. However, in the previous issue of Critical Care, Bara and Joffe reported an unexpected finding: animal welfare is extremely poorly reported in critical care research publications involving animal models. This may have a significant negative impact on the reliability of the results and on future funding for our research. The ability of septic shock animal models to translate into clinical studies has been a challenge. Therefore, every means to improve the quality of these models should be pursued. Animal welfare issues should be seen as an additional benefit to achieve this goal. It is therefore critical to draw conclusions from this study to improve the standard of animal welfare in critical care research. This has already been achieved in other fields of research, and we should follow their example.

  13. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
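The paper's novel sensitivity analysis is not detailed in the abstract; as a generic illustration of perturbation-based sensitivity analysis for numeric inputs (an assumed stand-in, not the authors' method), one can nudge one input variable at a time and measure the resulting change in the predicted survival probability:

```python
import numpy as np

def sensitivity_ranking(predict, X, eps=0.1):
    """Rank input variables by how much perturbing each one, in isolation,
    changes the model's predictions, averaged over the dataset.
    `predict` maps an (n, d) array to n probabilities."""
    base = predict(X)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps * X[:, j].std()      # perturb a single variable
        scores[j] = np.mean(np.abs(predict(Xp) - base))
    return np.argsort(scores)[::-1], scores  # most influential first
```

With any fitted survival model plugged in as `predict`, variables with near-zero scores are candidates for removal, which is the kind of input-selection question the paper addresses.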

  14. An object model and database for functional genomics.

    PubMed

    Jones, Andrew; Hunt, Ela; Wastling, Jonathan M; Pizarro, Angel; Stoeckert, Christian J

    2004-07-10

    Large-scale functional genomics analysis is now feasible and presents significant challenges in data analysis, storage and querying. Data standards are required to enable the development of public data repositories and to improve data sharing. There is an established data format for microarrays (microarray gene expression markup language, MAGE-ML) and a draft standard for proteomics (PEDRo). We believe that all types of functional genomics experiments should be annotated in a consistent manner, and we hope to open up new ways of comparing multiple datasets used in functional genomics. We have created a functional genomics experiment object model (FGE-OM), developed from the microarray model, MAGE-OM and two models for proteomics, PEDRo and our own model (Gla-PSI-Glasgow Proposal for the Proteomics Standards Initiative). FGE-OM comprises three namespaces representing (i) the parts of the model common to all functional genomics experiments; (ii) microarray-specific components; and (iii) proteomics-specific components. We believe that FGE-OM should initiate discussion about the contents and structure of the next version of MAGE and the future of proteomics standards. A prototype database called RNA And Protein Abundance Database (RAPAD), based on FGE-OM, has been implemented and populated with data from microbial pathogenesis. FGE-OM and the RAPAD schema are available from http://www.gusdb.org/fge.html, along with a set of more detailed diagrams. RAPAD can be accessed by registration at the site.

  15. Thermal dynamics on the lattice with exponentially improved accuracy

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies ωn = 2πnT. In particular it resolves their frequency dependence between ω = 0 and ω1 = 2πT, where the thermal physics ω ∼ T of e.g. transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so that their unfolding from the novel simulation data is exponentially improved compared to standard Euclidean simulations. We demonstrate this improvement within a non-trivial 0+1-dimensional quantum mechanical toy model and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.
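For reference, the Matsubara frequencies and one common form of the rational-kernel relation between the Euclidean correlator and the bosonic spectral function ρ(ω) (quoted here in a standard convention for illustration; the paper's exact convention may differ) read:

```latex
\omega_n = 2\pi n T, \qquad
G(\omega_n) = \int_0^{\infty} \frac{\mathrm{d}\omega}{\pi}\,
\frac{\omega\,\rho(\omega)}{\omega^{2} + \omega_n^{2}}
```

Because the kernel is a smooth rational function of ω_n, sampling G between the discrete Matsubara points sharply constrains ρ(ω), which is what makes the unfolding exponentially better conditioned than from the usual Euclidean data.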

  16. Design of a randomized, controlled, comparative-effectiveness trial testing a Family Model of Diabetes Self-Management Education (DSME) vs. Standard DSME for Marshallese in the United States.

    PubMed

    Kim Yeary, Karen Hye-Cheon; Long, Christopher R; Bursac, Zoran; McElfish, Pearl Anna

    2017-06-01

    Type 2 diabetes (T2D) is a significant public health problem, with U.S. Pacific Islander communities, such as the Marshallese, bearing a disproportionate burden. Using a community-based participatory research (CBPR) approach that engages the strong family-based social infrastructure characteristic of Marshallese communities is a promising way to manage T2D. Led by a collaborative community-academic partnership, the Family Model of Diabetes Self-Management Education (DSME) aimed to change diabetes management behaviors to improve glycemic control in Marshallese adults with T2D by engaging the entire family. To test the Family Model of DSME, a randomized, controlled, comparative effectiveness trial with 240 primary participants was implemented. Half of the primary participants were randomly assigned to the Standard DSME and half were randomly assigned to the Family Model DSME. Both arms received ten hours of content comprising 6-8 sessions delivered over a 6-8 week period. The Family Model DSME was a cultural adaptation of DSME, whereby the intervention focused on engaging family support for the primary participant with T2D. The Standard DSME was delivered to the primary participant in a community-based group format. Primary participants and participating family members were assessed at baseline and immediately post-intervention, and will also be assessed at 6 and 12 months. The Family Model of DSME aimed to improve glycemic control in Marshallese adults with T2D. The utilization of a CBPR approach that involves the local stakeholders and the engagement of the family-based social infrastructure of Marshallese communities increase the potential for the intervention's success and sustainability.

  17. The CAFE Experiment: A Joint Seismic and MT Investigation of the Cascadia Subduction System

    DTIC Science & Technology

    2013-02-01

    In this thesis we present results from inversion of data using dense arrays of collocated seismic and magnetotelluric stations located in the Cascadia... implicit in the standard MT inversion provides tools that enable us to generate a more accurate MT model. This final MT model clearly demonstrates... references within, Hacker, 2008) have given us the tools to better interpret geophysical evidence. Improvements in the thermal modeling of subduction zones

  18. Effects of recent energy system changes on CO2 projections for the United States.

    PubMed

    Lenox, Carol S; Loughlin, Daniel H

    2017-09-21

    Recent projections of future United States carbon dioxide (CO2) emissions are considerably lower than projections made just a decade ago. A myriad of factors have contributed to lower forecasts, including reductions in end-use energy service demands, improvements in energy efficiency, and technological innovations. Policies that have encouraged these changes include renewable portfolio standards, corporate vehicle efficiency standards, smart growth initiatives, revisions to building codes, and air and climate regulations. Understanding the effects of these and other factors can be advantageous as society evaluates opportunities for achieving additional CO2 reductions. Energy system models provide a means to develop such insights. In this analysis, the MARKet ALlocation (MARKAL) model was applied to estimate the relative effects of various energy system changes that have happened since the year 2005 on CO2 projections for the year 2025. The results indicate that transformations in the transportation and buildings sectors have played major roles in lowering projections. Particularly influential changes include improved vehicle efficiencies, reductions in projected travel demand, reductions in miscellaneous commercial electricity loads, and higher efficiency lighting. Electric sector changes have also contributed significantly to the lowered forecasts, driven by demand reductions, renewable portfolio standards, and air quality regulations.

  19. Feasibility of shutter-speed DCE-MRI for improved prostate cancer detection.

    PubMed

    Li, Xin; Priest, Ryan A; Woodward, William J; Tagge, Ian J; Siddiqui, Faisal; Huang, Wei; Rooney, William D; Beer, Tomasz M; Garzotto, Mark G; Springer, Charles S

    2013-01-01

    The feasibility of shutter-speed model dynamic-contrast-enhanced MRI pharmacokinetic analyses for prostate cancer detection was investigated in a prebiopsy patient cohort. Differences of results from the fast-exchange-regime-allowed (FXR-a) shutter-speed model version and the fast-exchange-limit-constrained (FXL-c) standard model are demonstrated. Although the spatial information is more limited, postdynamic-contrast-enhanced MRI biopsy specimens were also examined. The MRI results were correlated with the biopsy pathology findings. Of all the model parameters, region-of-interest-averaged K(trans) difference [ΔK(trans) ≡ K(trans)(FXR-a) - K(trans)(FXL-c)] or two-dimensional K(trans)(FXR-a) vs. k(ep)(FXR-a) values were found to provide the most useful biomarkers for malignant/benign prostate tissue discrimination (at 100% sensitivity for a population of 13, the specificity is 88%) and disease burden determination. (The best specificity for the fast-exchange-limit-constrained analysis is 63%, with the two-dimensional plot.) K(trans) and k(ep) are each measures of passive transcapillary contrast reagent transfer rate constants. Parameter value increases with shutter-speed model (relative to standard model) analysis are larger in malignant foci than in normal-appearing glandular tissue. Pathology analyses verify the shutter-speed model (FXR-a) promise for prostate cancer detection. Parametric mapping may further improve pharmacokinetic biomarker performance. Copyright © 2012 Wiley Periodicals, Inc.

  20. Reducing RANS Model Error Using Random Forest

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work, we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via the RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
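As an illustration of the core mechanic only (the paper trains full random forests on physical mean-flow invariants, not this toy), here is a self-contained sketch: an ensemble of depth-1 regression trees ("stumps"), each fit on a bootstrap resample with a random feature subset, whose averaged output predicts a synthetic "Reynolds stress discrepancy" from invented flow features. All data and names below are hypothetical.

```python
import random

def best_split(X, y, feat_ids):
    """Exhaustively find the (feature, threshold) split minimizing squared error."""
    best = None
    for f in feat_ids:
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    return best

def fit_forest(X, y, n_trees=25, seed=0):
    """Forest of depth-1 trees: bootstrap resampling plus random feature subsets."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    trees = []
    for _ in range(n_trees):
        rows = [rng.randrange(n) for _ in range(n)]       # bootstrap sample
        feats = rng.sample(range(d), max(1, d // 2))      # random feature subset
        split = best_split([X[i] for i in rows], [y[i] for i in rows], feats)
        if split is not None:
            trees.append(split[1:])  # (feature, threshold, left_mean, right_mean)
    return trees

def predict(trees, row):
    """Ensemble prediction: average the individual stump outputs."""
    return sum(ml if row[f] <= t else mr for f, t, ml, mr in trees) / len(trees)

# Hypothetical training set: feature 0 drives the "discrepancy", feature 1 is noise.
X = [[i / 9, (i * 7 % 10) / 9] for i in range(10)]
y = [1.0 if row[0] < 0.5 else 0.0 for row in X]
forest = fit_forest(X, y)
low = predict(forest, [0.1, 0.5])    # region where the discrepancy is large
high = predict(forest, [0.9, 0.5])   # region where the discrepancy vanishes
```

In the paper's setting the inputs would be Galilean-invariant mean-flow features from an offline database and the targets the Reynolds stress discrepancies, with the forest's averaged prediction fed back into the RANS equations.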

  1. User-generated quality standards for youth mental health in primary care: a participatory research design using mixed methods

    PubMed Central

    Graham, Tanya; Rose, Diana; Murray, Joanna; Ashworth, Mark; Tylee, André

    2014-01-01

    Objectives To develop user-generated quality standards for young people with mental health problems in primary care using a participatory research model. Methods 50 young people aged 16–25 from community settings and primary care participated in focus groups and interviews about their views and experiences of seeking help for mental health problems in primary care, cofacilitated by young service users and repeated to ensure respondent validation. A second group of young people, also aged 16–25, who had sought help for any mental health problem from primary or secondary care within the last 5 years were trained as focus group cofacilitators (n=12); they developed the quality standards from the qualitative data and participated in four nominal groups (n=28). Results 46 quality standards were developed and ranked by young service users. Agreement was defined as 100% of scores within a two-point region. Group consensus existed for 16 quality standards representing the following aspects of primary care: better advertising and information (three); improved competence through mental health training and skill mix within the practice (two); alternatives to medication (three); improved referral protocol (three); and specific questions and reassurances (five). Alternatives to medication and specific questions and reassurances are aspects of quality which have not been previously reported. Conclusions We have demonstrated the feasibility of using participatory research methods to develop user-generated quality standards. The development of patient-generated quality standards may offer a more formal method of incorporating the views of service users into quality improvement initiatives. This method can be adapted for generating quality standards applicable to other patient groups. PMID:24920648

  2. Beauty and the beast: Aligning national curriculum standards with state (high school) graduation requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linder-Scholer, B.

    1994-12-31

    An overview of SCI/MATH/MN - Minnesota's standards-based, systemic approach to the reform and improvement of the K-12 science and mathematics education delivery system - is offered as an illustration of the challenges of aligning state educational practices with the national curriculum standards, and as a model for business involvement in state educational policy issues that will enable fundamental, across-the-system reform. SCI/MATH/MN illustrates the major challenges involved in developing a statewide vision for math and science education reform, articulating frameworks aligned with the national standards, building capacity for system-oriented change at the local level, and involving business in systemic reform.

  3. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    PubMed Central

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495

  4. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  5. Global analysis of fermion mixing with exotics

    NASA Technical Reports Server (NTRS)

    Nardi, Enrico; Roulet, Esteban; Tommasini, Daniele

    1991-01-01

    Limits on deviations of the lepton and quark weak couplings from their standard-model values are analyzed in a general class of models where the known fermions are allowed to mix with new heavy particles with exotic SU(2) x U(1) quantum number assignments (left-handed singlets or right-handed doublets). These mixings appear in many extensions of the electroweak theory, such as models with mirror fermions, E(sub 6) models, etc. The results update previous analyses and considerably improve the existing bounds.

  6. Intrasystem Analysis Program (IAP) Model Improvement.

    DTIC Science & Technology

    1982-02-01

    of Loop Antennas 2-117 2.11 Transmission Loss of Yagi-Uda Beam Antennas 2-120 2.12 Impedance Matching Factor of Frequency-Independent Antennas 2-121...2.16.5 Directive Gain Model for a Loop Antenna 2-181 2.16.6 Directive Gain Model for a Planar Log-Spiral Antenna 2-182 2.16.7 Directive Gain Model for...The published specifications for the antennas which meet certain standard requirements are based on measurements of the terminal impedance of the total

  7. Guidelines for standard preclinical experiments in the mouse model of myasthenia gravis induced by acetylcholine receptor immunization.

    PubMed

    Tuzun, Erdem; Berrih-Aknin, Sonia; Brenner, Talma; Kusner, Linda L; Le Panse, Rozen; Yang, Huan; Tzartos, Socrates; Christadoss, Premkumar

    2015-08-01

    Myasthenia gravis (MG) is an autoimmune disorder characterized by generalized muscle weakness due to neuromuscular junction (NMJ) dysfunction brought about by acetylcholine receptor (AChR) antibodies in most cases. Although steroids and other immunosuppressants are effectively used for treatment of MG, these medications often cause severe side effects and a complete remission cannot be obtained in many cases. For pre-clinical evaluation of more effective and less toxic treatment methods for MG, experimental autoimmune myasthenia gravis (EAMG) induced by Torpedo AChR immunization has become one of the standard animal models. Although numerous compounds have been recently proposed for MG, mostly by using the active immunization EAMG model, only a few have been proven to be effective in MG patients. The variability in the experimental design, immunization methods and outcome measurements of pre-clinical EAMG studies makes it difficult to interpret the published reports and assess the potential for application to MG patients. In an effort to standardize the active immunization EAMG model, we propose standard procedures for animal care conditions, sampling and randomization of mice, experimental design and outcome measures. Utilization of these standard procedures might improve the power of pre-clinical EAMG experiments and increase the chances of identifying promising novel treatment methods that can be effectively translated into clinical trials for MG. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Weighted least squares techniques for improved received signal strength based localization.

    PubMed

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model can be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
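To make the circular (range-based) weighted least squares idea concrete, here is a minimal sketch, not the paper's implementation: the circle equations are linearized against a reference anchor and the 2x2 weighted normal equations are solved in closed form. Anchor positions, ranges, and weights below are invented for illustration; in the RSS setting the ranges would come from a propagation model and the weights from each measurement's estimated accuracy.

```python
import math

def wls_position(anchors, dists, weights):
    """Weighted least-squares multilateration (circular positioning).

    Linearizes the circle equations |p - a_i|^2 = d_i^2 by subtracting the
    last anchor's equation, then solves the 2x2 weighted normal equations
    (A^T W A) p = A^T W b for p = (x, y).
    """
    xn, yn = anchors[-1]
    dn = dists[-1]
    A, b, w = [], [], []
    for (xi, yi), di, wi in zip(anchors[:-1], dists[:-1], weights[:-1]):
        A.append((2.0 * (xi - xn), 2.0 * (yi - yn)))
        b.append(xi ** 2 - xn ** 2 + yi ** 2 - yn ** 2 + dn ** 2 - di ** 2)
        w.append(wi)
    # Accumulate A^T W A (s11, s12, s22) and A^T W b (t1, t2), then solve 2x2.
    s11 = sum(wi * a[0] * a[0] for a, wi in zip(A, w))
    s12 = sum(wi * a[0] * a[1] for a, wi in zip(A, w))
    s22 = sum(wi * a[1] * a[1] for a, wi in zip(A, w))
    t1 = sum(wi * a[0] * bi for a, bi, wi in zip(A, b, w))
    t2 = sum(wi * a[1] * bi for a, bi, wi in zip(A, b, w))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Illustrative setup: four anchors, exact ranges from a true position (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
ranges = [math.dist((3.0, 4.0), a) for a in anchors]
x, y = wls_position(anchors, ranges, [1.0, 1.0, 1.0, 1.0])
```

With noisy ranges, down-weighting the least reliable measurements (for example, the longest links, whose RSS-derived distances have the largest variance) is what distinguishes this from ordinary least squares.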

  9. An improved Pearson's correlation proximity-based hierarchical clustering for mining biological association between genes.

    PubMed

    Booma, P M; Prabhakaran, S; Dhanalakshmi, R

    2014-01-01

    Microarray gene expression datasets have attracted great interest among molecular biologists, statisticians, and computer scientists. Data mining that extracts the hidden and useful information from datasets often fails to identify the most significant biological associations between genes. A heuristic search for a standard biological process measures only the gene expression level, threshold, and response time. Heuristic search identifies and mines the best biological solution, but the association process was not efficiently addressed. To monitor higher rates of expression levels between genes, a hierarchical clustering model was proposed, where the biological association between genes is measured simultaneously using a proximity measure of improved Pearson's correlation (PCPHC). Additionally, the Seed Augment algorithm adopts average linkage methods on rows and columns in order to expand a seed PCPHC model into a maximal global PCPHC (GL-PCPHC) model and to identify associations between the clusters. Moreover, GL-PCPHC applies a pattern-growing method to mine the PCPHC patterns. Compared to existing gene expression analysis, the PCPHC model achieves better performance. Experimental evaluations are conducted for the GL-PCPHC model with standard benchmark gene expression datasets extracted from the UCI repository and GenBank database in terms of execution time, size of pattern, significance level, biological association efficiency, and pattern quality.
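A minimal sketch of the two generic ingredients named above, Pearson-correlation proximity and average linkage, on an invented four-gene toy dataset. This is the textbook agglomerative algorithm, not the paper's Seed Augment / GL-PCPHC method.

```python
import math

def pearson(u, v):
    """Pearson correlation between two expression profiles."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def average_linkage_cluster(profiles, n_clusters):
    """Agglomerative clustering with 1 - Pearson as the proximity measure
    and average linkage between clusters."""
    dist = lambda i, j: 1.0 - pearson(profiles[i], profiles[j])
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = sum(dist(i, j) for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]   # merge the closest pair
        del clusters[b]
    return [sorted(c) for c in clusters]

# Toy profiles: genes 0 and 1 rise together, genes 2 and 3 fall together.
genes = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [9, 6, 4, 1]]
groups = average_linkage_cluster(genes, 2)
```

Because Pearson correlation is scale- and offset-invariant, the co-rising pair is merged first (proximity 0) even though the raw magnitudes differ, which is exactly why correlation proximity is preferred over Euclidean distance for expression data.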

  10. An Improved Pearson's Correlation Proximity-Based Hierarchical Clustering for Mining Biological Association between Genes

    PubMed Central

    Booma, P. M.; Prabhakaran, S.; Dhanalakshmi, R.

    2014-01-01

    Microarray gene expression datasets have attracted great interest among molecular biologists, statisticians, and computer scientists. Data mining that extracts the hidden and useful information from datasets often fails to identify the most significant biological associations between genes. A heuristic search for a standard biological process measures only the gene expression level, threshold, and response time. Heuristic search identifies and mines the best biological solution, but the association process was not efficiently addressed. To monitor higher rates of expression levels between genes, a hierarchical clustering model was proposed, where the biological association between genes is measured simultaneously using a proximity measure of improved Pearson's correlation (PCPHC). Additionally, the Seed Augment algorithm adopts average linkage methods on rows and columns in order to expand a seed PCPHC model into a maximal global PCPHC (GL-PCPHC) model and to identify associations between the clusters. Moreover, GL-PCPHC applies a pattern-growing method to mine the PCPHC patterns. Compared to existing gene expression analysis, the PCPHC model achieves better performance. Experimental evaluations are conducted for the GL-PCPHC model with standard benchmark gene expression datasets extracted from the UCI repository and GenBank database in terms of execution time, size of pattern, significance level, biological association efficiency, and pattern quality. PMID:25136661

  11. Improved compliance by BPM-driven workflow automation.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Fleischer, Heidi; Thurow, Kerstin

    2014-12-01

    Using methods and technologies of business process management (BPM) for laboratory automation has important benefits (i.e., the agility of high-level automation processes, rapid interdisciplinary prototyping and implementation of laboratory tasks and procedures, and efficient real-time process documentation). A principal goal of model-driven development is the improved transparency of processes and the alignment of process diagrams and technical code. First experiences of using the business process model and notation (BPMN) show that easy-to-read graphical process models can achieve and provide standardization of laboratory workflows. Model-based development allows processes to be changed quickly and adapted easily to changing requirements. The process models are able to host work procedures and their scheduling in compliance with predefined guidelines and policies. Finally, the process-controlled documentation of complex workflow results addresses modern laboratory needs of quality assurance. BPMN 2.0, as an automation language able to control every kind of activity or subprocess, is directed at complete workflows in end-to-end relationships. BPMN is applicable as a system-independent and cross-disciplinary graphical language to document all methods in laboratories (i.e., screening procedures or analytical processes). This means that, with the BPM standard, a communication method for sharing process knowledge between laboratories is also available. © 2014 Society for Laboratory Automation and Screening.

  12. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model can be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling. PMID:22164092

  13. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
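The associative-memory regime described above can be sketched in a few lines; this is a generic, minimal SDM with invented parameters, not Kanerva's exact formulation or the statistical-predictor extension:

```python
import random

class SDM:
    """Minimal sketch of a sparse distributed memory.

    Hard locations are random binary addresses; a write updates bit counters
    at every location within Hamming radius r of the write address, and a
    read sums the counters of the active locations and thresholds at zero.
    """
    def __init__(self, n_bits=64, n_locations=200, radius=30, seed=1):
        rng = random.Random(seed)
        self.radius = radius
        self.addresses = [[rng.randint(0, 1) for _ in range(n_bits)]
                          for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]

    def _active(self, addr):
        # locations whose hard address lies within the Hamming radius
        return [k for k, loc in enumerate(self.addresses)
                if sum(a != b for a, b in zip(addr, loc)) <= self.radius]

    def write(self, addr, data):
        for k in self._active(addr):
            for i, bit in enumerate(data):
                self.counters[k][i] += 1 if bit else -1

    def read(self, addr):
        sums = [0] * len(addr)
        for k in self._active(addr):
            for i, c in enumerate(self.counters[k]):
                sums[i] += c
        return [1 if s > 0 else 0 for s in sums]

rng = random.Random(2)
mem = SDM()
pattern = [rng.randint(0, 1) for _ in range(64)]
mem.write(pattern, pattern)          # autoassociative store
noisy = pattern[:]                   # corrupt the address by flipping 5 bits
for i in rng.sample(range(64), 5):
    noisy[i] ^= 1
recalled = mem.read(noisy)           # clean recall from the noisy address
```

Reading back a stored pattern from a corrupted address is exactly the regime that breaks down near capacity, where the summed counters of many stored patterns interfere; the statistical-predictor viewpoint reinterprets those interfering sums as predictions rather than failures.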

  14. Using aircraft and satellite observations to improve regulatory air quality models

    NASA Astrophysics Data System (ADS)

    Canty, T. P.; Vinciguerra, T.; Anderson, D. C.; Carpenter, S. F.; Goldberg, D. L.; Hembeck, L.; Montgomery, L.; Liu, X.; Salawitch, R. J.; Dickerson, R. R.

    2014-12-01

    Federal and state agencies rely on EPA approved models to develop attainment strategies that will bring states into compliance with the National Ambient Air Quality Standards (NAAQS). We will describe modifications to the Community Multi-Scale Air Quality (CMAQ) model and Comprehensive Air Quality Model with Extensions (CAMx) frameworks motivated by analysis of NASA satellite and aircraft measurements. Observations of tropospheric column NO2 from OMI have already led to the identification of an important deficiency in the chemical mechanisms used by models; data collected during the DISCOVER-AQ field campaign has been instrumental in devising an improved representation of the chemistry of nitrogen species. Our recent work has focused on the use of: OMI observations of tropospheric O3 to assess and improve the representation of boundary conditions used by AQ models, OMI NO2 to derive a top-down NOx emission inventory from commercial shipping vessels that affect air quality in the Eastern U.S., and OMI HCHO to assess the C5H8 emission inventories provided by biogenic emissions models. We will describe how these OMI-driven model improvements are being incorporated into the State Implementation Plans (SIPs) being prepared for submission to EPA in summer 2015 and how future modeling efforts may be impacted by our findings.

  15. The New Muon g-2 experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venanzoni, Graziano

    2016-06-02

    There is a long-standing discrepancy between the Standard Model prediction for the muon g-2 and the value measured by the Brookhaven E821 Experiment. At present the discrepancy stands at about three standard deviations, with comparable accuracy between experiment and theory. Two new proposals -- at Fermilab and J-PARC -- plan to improve the experimental uncertainty by a factor of 4, and it is expected that there will be a significant reduction in the uncertainty of the Standard Model prediction. I will review the status of the planned experiment at Fermilab, E989, which will analyse 21 times more muons than the BNL experiment, and discuss how the systematic uncertainty will be reduced by a factor of 3 such that a precision of 0.14 ppm can be achieved.

  16. Echocardiography and risk prediction in advanced heart failure: incremental value over clinical markers.

    PubMed

    Agha, Syed A; Kalogeropoulos, Andreas P; Shih, Jeffrey; Georgiopoulou, Vasiliki V; Giamouzis, Grigorios; Anarado, Perry; Mangalat, Deepa; Hussain, Imad; Book, Wendy; Laskar, Sonjoy; Smith, Andrew L; Martin, Randolph; Butler, Javed

    2009-09-01

    Incremental value of echocardiography over clinical parameters for outcome prediction in advanced heart failure (HF) is not well established. We evaluated 223 patients with advanced HF receiving optimal therapy (91.9% angiotensin-converting enzyme inhibitor/angiotensin receptor blocker, 92.8% beta-blockers, 71.8% biventricular pacemaker, and/or defibrillator use). The Seattle Heart Failure Model (SHFM) was used as the reference clinical risk prediction scheme. The incremental value of echocardiographic parameters for event prediction (death or urgent heart transplantation) was measured by the improvement in fit and discrimination achieved by addition of standard echocardiographic parameters to the SHFM. After a median follow-up of 2.4 years, there were 38 (17.0%) events (35 deaths; 3 urgent transplants). The SHFM had likelihood ratio (LR) chi(2) 32.0 and C statistic 0.756 for event prediction. Left ventricular end-systolic volume, stroke volume, and severe tricuspid regurgitation were independent echocardiographic predictors of events. The addition of these parameters to SHFM improved LR chi(2) to 72.0 and C statistic to 0.866 (P < .001 and P=.019, respectively). Reclassifying the SHFM-predicted risk with use of the echocardiography-added model resulted in improved prognostic separation. Addition of standard echocardiographic variables to the SHFM results in significant improvement in risk prediction for patients with advanced HF.

  17. Tidal Models In A New Era of Satellite Gravimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Rowlands, David D.; Egbert, G. D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    The high precision gravity measurements to be made by recently launched (and recently approved) satellites place new demands on models of Earth, atmospheric, and oceanic tides. The latter is the most problematic. The ocean tides induce variations in the Earth's geoid by amounts that far exceed the new satellite sensitivities, and tidal models must be used to correct for this. Two methods are used here to determine the standard errors in current ocean tide models. At long wavelengths these errors exceed the sensitivity of the GRACE mission. Tidal errors will not prevent the new satellite missions from improving our knowledge of the geopotential by orders of magnitude, but the errors may well contaminate GRACE estimates of temporal variations in gravity. Solar tides are especially problematic because of their long alias periods. The satellite data may be used to improve tidal models once a sufficiently long time series is obtained. Improvements in the long-wavelength components of lunar tides are especially promising.

  18. The Use of Regulatory Air Quality Models to Develop Successful Ozone Attainment Strategies

    NASA Astrophysics Data System (ADS)

    Canty, T. P.; Salawitch, R. J.; Dickerson, R. R.; Ring, A.; Goldberg, D. L.; He, H.; Anderson, D. C.; Vinciguerra, T.

    2015-12-01

    The Environmental Protection Agency (EPA) recently proposed lowering the 8-hr ozone standard to between 65-70 ppb. Not all regions of the U.S. are in attainment of the current 75 ppb standard and it is expected that many regions currently in attainment will not meet the future, lower surface ozone standard. Ozone production is a nonlinear function of emissions, biological processes, and weather. Federal and state agencies rely on regulatory air quality models such as the Community Multi-Scale Air Quality (CMAQ) model and Comprehensive Air Quality Model with Extensions (CAMx) to test ozone precursor emission reduction strategies that will bring states into compliance with the National Ambient Air Quality Standards (NAAQS). We will describe various model scenarios that simulate how future limits on emission of ozone precursors (i.e. NOx and VOCs) from sources such as power plants and vehicles will affect air quality. These scenarios are currently being developed by states required to submit a State Implementation Plan to the EPA. Projections from these future case scenarios suggest that strategies intended to control local ozone may also bring upwind states into attainment of the new NAAQS. Ground based, aircraft, and satellite observations are used to ensure that air quality models accurately represent photochemical processes within the troposphere. We will highlight some of the improvements made to the CMAQ and CAMx model framework based on our analysis of NASA observations obtained by the OMI instrument on the Aura satellite and by the DISCOVER-AQ field campaign.

  19. Does availability of AIR insulin increase insulin use and improve glycemic control in patients with type 2 diabetes?

    PubMed

    Bergenstal, Richard M; Freemantle, Nick; Leyk, Malgorzata; Cutler, Gordon B; Hayes, Risa P; Muchmore, Douglas B

    2009-09-01

    In the concordance model, physician and patient discuss treatment options, explore the impact of treatment decisions from the patient's perspective, and make treatment choices together. We tested, in a concordance setting, whether the availability of AIR inhaled insulin (developed by Alkermes, Inc. [Cambridge, MA] and Eli Lilly and Company [Indianapolis, IN]; AIR is a registered trademark of Alkermes, Inc.), as compared with existing treatment options alone, leads to greater initiation and maintenance of insulin therapy and improves glycemic control in patients with type 2 diabetes. This was a 9-month, multicenter, parallel, open-label study in adult, nonsmoking patients with diabetes not optimally controlled by two or more oral antihyperglycemic medications. Patients were randomized to the Standard Options group (n = 516), in which patients chose a regimen from drugs in each major treatment class excluding inhaled insulin, or the Standard Options + AIR insulin group (n = 505), in which patients had the same choices plus AIR insulin. The primary end points were the proportion of patients in each group using insulin at end point and change in hemoglobin A1C (A1C) from baseline to end point. At end point, 53% of patients in the Standard Options group and 59% in the Standard Options + AIR insulin group were using insulin (P = 0.07). Both groups reduced A1C by about 1.2% and reported increased well-being and treatment satisfaction. The most common adverse event with AIR insulin was transient cough. The opportunity to choose AIR insulin did not affect overall use of insulin at end point or A1C outcomes. Regardless of group assignment, utilizing a shared decision-making approach to treatment choices (the concordance model) resulted in improved treatment satisfaction and A1C values at end point. Therefore, increasing patient involvement in treatment decisions may improve outcomes.

  20. Characterization, modeling, and validation of the radiative transfer of non-standard atmospheres; impact on atmospheric corrections of remote sensing images

    NASA Astrophysics Data System (ADS)

    Zidane, Shems

    This study is based on data acquired with an airborne multi-altitude sensor in July 2004 during a non-standard atmospheric event in the region of Saint-Jean-sur-Richelieu, Quebec. By non-standard atmospheric event we mean an aerosol atmosphere that does not obey the typical monotonic, scale height variation employed in virtually all atmospheric correction codes. The surfaces imaged during this field campaign included a diverse variety of targets: agricultural land, water bodies, urban areas, and forests. The multi-altitude approach employed in this campaign allowed us to better understand the altitude-dependent influence of the atmosphere over the array of ground targets and thus to better characterize the perturbation induced by a non-standard (smoke) plume. The transformation of the apparent radiance at 3 different altitudes into apparent reflectance and the insertion of the plume optics into an atmospheric correction model permitted an atmospheric correction of the apparent reflectance at the two higher altitudes. The results showed consistency with the apparent validation reflectances derived from the lowest altitude radiances. This approach effectively confirmed the accuracy of our non-standard atmospheric correction approach. This test was particularly relevant at the highest altitude of 3.17 km: the apparent reflectances at this altitude were above most of the plume and therefore represented a good test of our ability to adequately correct for the influence of the perturbation. Standard atmospheric disturbances are obviously taken into account in most atmospheric correction models, but these are based on monotonically decreasing aerosol variations with increasing altitude. When the atmospheric radiation is affected by a plume or a local, non-standard pollution event, one must adapt the existing models to the radiative transfer constraints of the local perturbation and to the reality of the measurable parameters available for ingestion into the model.
The main inputs of this study were those normally used in an atmospheric correction: apparent at-sensor radiance and the aerosol optical depth (AOD) acquired using ground-based sunphotometry. The procedure we employed made use of a standard atmospheric correction code (CAM5S, for Canadian Modified 5S, which comes from the 5S radiative transfer model in the visible and near infrared); however, we also used other parameters and data to adapt and correctly model the special atmospheric situation which affected the multi-altitude images acquired during the St. Jean field campaign. We then developed a modeling protocol for these atmospheric perturbations in which auxiliary data were employed to complement our main data set. This allowed for the development of a robust and simple methodology adapted to this atmospheric situation. The auxiliary data, i.e., meteorological data, LIDAR profiles, various satellite images, and sun photometer retrievals of the scattering phase function, were sufficient to accurately model the observed plume in terms of an unusual vertical distribution. This distribution was transformed into an aerosol optical depth profile that replaced the standard aerosol optical depth profile employed in the CAM5S atmospheric correction model. Based on this model, a comparison between the apparent ground reflectances obtained after atmospheric corrections and validation values of R*(0) obtained from the lowest altitude data showed that the error between the two was less than 0.01 rms. This correction was shown to be a significantly better estimation of the surface reflectance than that obtained using the standard, unmodified atmospheric correction model.
Significant differences were nevertheless observed in the non-standard solution: these were mainly caused by the difficulties brought about by the acquisition conditions, significant disparities attributable to inconsistencies in the co-sampling/co-registration of targets across the three altitudes, and possibly modeling and/or calibration errors. There is accordingly room for improvement in our approach to dealing with such conditions. The modeling and forecasting of such a disturbance are explicitly described in this document: our goal in so doing is to permit the establishment of a better protocol for the acquisition of more suitable supporting data. The originality of this study stems from a new approach for incorporating a plume structure into an operational atmospheric correction model and then demonstrating that the approach was a significant improvement over an approach that ignored the perturbations in the vertical profile while employing the correct overall AOD. The profile model we employed was simple and robust but captured sufficient plume detail to achieve significant improvements in atmospheric correction accuracy. The overall process of addressing all the problems encountered in the analysis of our aerosol perturbation helped us to build an appropriate methodology for characterizing such events based on freely available data accessible to the scientific community. This makes our study adaptable and exportable to other types of non-standard atmospheric events. Keywords : non-standard atmospheric perturbation, multi-altitude apparent radiances, smoke plume, Gaussian plume modelization, radiance fit, AOD, CASI
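The profile-substitution step described above can be illustrated with a minimal sketch (all names and values are hypothetical, not taken from the thesis): a Gaussian plume layer is used as the relative extinction shape, normalized so the column integrates to the sunphotometer-measured total AOD, giving a vertical AOD profile a correction code could ingest in place of the standard monotonic profile.

```python
import numpy as np

def plume_aod_profile(total_aod, z_center, z_sigma, z_max=5.0, n=500):
    """Build a vertical extinction profile with a Gaussian plume layer,
    normalized so the column sums to the measured total AOD."""
    z = np.linspace(0.0, z_max, n)                        # altitude grid (km)
    ext = np.exp(-0.5 * ((z - z_center) / z_sigma) ** 2)  # relative extinction
    dz = z[1] - z[0]
    ext *= total_aod / (ext.sum() * dz)                   # column = total AOD
    # optical depth remaining above each altitude, as seen by a sensor there
    aod_above = total_aod - np.cumsum(ext) * dz
    return z, ext, aod_above
```

For a sensor flying at 3.17 km, interpolating `aod_above` at that altitude would give the plume optical depth still above the platform, which is the quantity the correction of the higher-altitude images needs.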

  1. Modeling of lead air pollution. [Baton Rouge, Louisiana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monteith, C.S.; Henry, J.M.

    1982-05-01

    A study was performed to determine whether vehicular emissions should be included with industrial emissions when demonstrating attainment of the ambient air quality standard for lead. The impact on ambient lead concentrations of the phaseout of leaded gasoline and improved automobile fuel economy was examined by modeling vehicular emissions for 1972 and 1978. Results show that while automobiles in the Baton Rouge area were a significant source of lead in 1972, the phaseout of leaded gasoline and the increase in fuel economy have resulted in a lower contribution (0.20 µg/m³) by automobiles to the ambient lead concentration in 1978. The areas having the greatest potential for exceeding the ambient air quality standard can be identified using CDM (EPA's Climatological Dispersion Model). This information can be used to determine the optimal location for an ambient air monitor to demonstrate compliance with the ambient air quality standard.

  2. Spatial and Activities Models of Airport Based on GIS and Dynamic Model

    NASA Astrophysics Data System (ADS)

    Masri, R. M.; Purwaamijaya, I. M.

    2017-02-01

    The purposes of the research were to (1) design and implement a conceptual and functional spatial model of the airport, (2) construct causal diagrams, flow diagrams, and mathematical equations for airport activity, (3) obtain information on the conditions of space and activities at the airport, (4) evaluate the airport's space and activities against national and international airport service standards, and (5) provide options for improving spatial and operational performance to the level of an international-standard airport. A descriptive method was used. The study site was Husein Sastranegara Airport in Bandung, West Java, Indonesia, and the research was conducted from September 2015 to April 2016. Spatial analysis was used to obtain geometric information on the runway, taxiway, and airport buildings. System analysis was used to obtain the relationships between airport components and to run dynamic simulations of airport activity, with results presented as tables and graphs of the dynamic model. The existing spatial conditions and activities of Husein Sastranegara cannot meet national and international airport standards; a relocation program is therefore proposed, involving construction of a new airport able to serve international air transportation.

  3. Improving homology modeling of G-protein coupled receptors through multiple-template derived conserved inter-residue interactions

    NASA Astrophysics Data System (ADS)

    Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun

    2015-05-01

    As evidenced by the three rounds of the G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling of helical transmembrane proteins, including the GPCRs, from templates of low sequence identity remains a major challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast complement to current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
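The conversion step can be sketched minimally (hypothetical data layout, not the authors' code): each conserved residue pair's Cα-Cα distance measured in a template becomes a lower/upper distance bound that a homology modeling engine could consume as a restraint.

```python
import math

def conserved_pair_restraints(template_ca_coords, conserved_pairs, tolerance=1.0):
    """Convert each conserved residue pair's C-alpha distance in the
    template into a (lower, upper) distance restraint in angstroms.
    template_ca_coords: dict mapping residue index -> (x, y, z)."""
    restraints = {}
    for i, j in conserved_pairs:
        d = math.dist(template_ca_coords[i], template_ca_coords[j])
        restraints[(i, j)] = (d - tolerance, d + tolerance)
    return restraints
```

In practice the pairs would come from the conservation analysis of each template-target alignment, and the resulting bounds would be supplied to the modeling engine alongside the usual single-template restraints.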

  4. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and turning to them can bring significant benefits to such processes, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to UML/SysML models, aiming to answer the demands of validation. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. The two approaches do not overlap, and when combined they provide a wider-reaching model/design validation ability than either alone, thus offering improved safety assurance. Results are encouraging, even though model checking fell short of the desired outcome and robustness test case extraction has not yet fully matured. In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expand in scope, once tool vendors (inevitably) help to improve the XMI standard interoperability situation. For the robustness test case extraction methodology, the early approach produced interesting results but needs further systematisation and consolidation in order to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvements and innovation research projects were immediately apparent for both approaches: circumventing current limitations in XMI interoperability on one hand, and, on the other, bringing test case specification onto the same graphical level as the models themselves and then automating the generation of executable test cases from standard UML notation.

  5. Evaluation of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) Semantic Structure as a Terminology Model for Standardized Assessment Measures

    PubMed Central

    Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.

    2000-01-01

    Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect into the elements of Clinical LOINC all the standardized assessment items in the sample instruments. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226

  6. Understanding Standards and Assessment Policy in Science Education: Relating and Exploring Variations in Policy Implementation by Districts and Teachers in Wisconsin

    NASA Astrophysics Data System (ADS)

    Anderson, Kevin John Boyett

    Current literature shows that many science teachers view policies of standards-based and test-based accountability as conflicting with research-based instruction in science education. With societal goals of improving scientific literacy and using science to spur economic growth, improving science education policy becomes especially important. To understand perceived influences of science education policy, this study looked at three questions: 1) How do teachers perceive state science standards and assessment and their influence on curriculum and instruction? 2) How do these policy perspectives vary by district and teacher level demographic and contextual differences? 3) How do district leaders' interpretations of and efforts within these policy realms relate to teachers' perceptions of the policies? To answer these questions, this study used a stratified sample of 53 districts across Wisconsin, with 343 middle school science teachers responding to an online survey; science instructional leaders from each district were also interviewed. Survey results were analyzed using multiple regression modeling, with models generally predicting 8-14% of variance in teacher perceptions. Open-ended survey and interview responses were analyzed using a constant comparative approach. Results suggested that many teachers saw state testing as limiting use of hands-on pedagogy, while standards were seen more positively. Teachers generally held similar views of the degree of influence of standards and testing regardless of their experience, background in science, credentials, or grade level taught. District SES, size and past WKCE scores had some limited correlations to teachers' views of policy, but teachers' perceptions of district policies and leadership consistently had the largest correlation to their views. District leadership views of these state policies correlated with teachers' views. Implications and future research directions are provided. 
Keywords: science education, policy, accountability, standards, assessment, district leadership

  7. Improving Measurement in Health Education and Health Behavior Research Using Item Response Modeling: Comparison with the Classical Test Theory Approach

    ERIC Educational Resources Information Center

    Wilson, Mark; Allen, Diane D.; Li, Jun Corser

    2006-01-01

    This paper compares the approach and resultant outcomes of item response models (IRMs) and classical test theory (CTT). First, it reviews basic ideas of CTT, and compares them to the ideas about using IRMs introduced in an earlier paper. It then applies a comparison scheme based on the AERA/APA/NCME "Standards for Educational and…

  8. SEP ENCKE-87 and Halley rendezvous studies and improved S/C model implementation in HILTOP

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.; Mann, F. I.

    1978-01-01

    Studies were conducted to determine the performance requirements for projected state-of-the-art SEP spacecraft boosted by the Shuttle/IUS to perform a rendezvous with the comet Halley and a rendezvous with the comet Encke during its 1977 apparition. The spacecraft model of the standard HILTOP computer program was assumed. Numerical and graphical results summarizing the studies are presented.

  9. Global and Regional 3D Tomography for Improved Seismic Event Location and Uncertainty in Explosion Monitoring

    NASA Astrophysics Data System (ADS)

    Downey, N.; Begnaud, M. L.; Hipp, J. R.; Ballard, S.; Young, C. S.; Encarnacao, A. V.

    2017-12-01

    The SALSA3D global 3D velocity model of the Earth was developed to improve the accuracy and precision of seismic travel time predictions for a wide suite of regional and teleseismic phases. Recently, the global SALSA3D model was updated to include additional body wave phases, including mantle phases, core phases, reflections off the core-mantle boundary, and underside reflections off the surface of the Earth. We show that this update improves travel time predictions and leads directly to significant improvements in the accuracy and precision of seismic event locations as compared to locations computed using standard 1D velocity models like ak135, or 2½D models like RSTT. A key feature of our inversions is that the path-specific model uncertainty of travel time predictions is calculated using the full 3D model covariance matrix computed during tomography, which results in more realistic uncertainty ellipses that directly reflect tomographic data coverage. Application of this method can also be done at a regional scale: we present a velocity model with uncertainty obtained using data from the University of Utah Seismograph Stations. These results show a reduction in travel-time residuals for re-located events compared with those obtained using previously published models.
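For a single ray, propagating the model covariance into a travel-time uncertainty reduces to a quadratic form; a minimal sketch (variable names are illustrative, assuming NumPy):

```python
import numpy as np

def path_travel_time_sigma(g, C):
    """Standard deviation of a predicted travel time: sigma^2 = g^T C g,
    where g holds the ray path's sensitivity to each model parameter
    (e.g., slowness in each cell it crosses) and C is the tomographic
    model covariance matrix."""
    return float(np.sqrt(g @ C @ g))
```

Rays that sample well-covered regions (small covariance entries) receive small sigmas, which is how the resulting location error ellipses come to reflect tomographic data coverage directly.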

  10. NASA-modified precipitation products to improve USEPA nonpoint source water quality modeling for the Chesapeake Bay.

    PubMed

    Nigro, Joseph; Toll, David; Partington, Ed; Ni-Meister, Wenge; Lee, Shihyan; Gutierrez-Magness, Angelica; Engman, Ted; Arsenault, Kristi

    2010-01-01

    The USEPA has estimated that over 20,000 water bodies within the United States do not meet water quality standards. One of the regulations in the Clean Water Act of 1972 requires states to monitor the total maximum daily load, or the amount of pollution that can be carried by a water body before it is determined to be "polluted," for any watershed in the United States (Copeland, 2005). In response to this mandate, the USEPA developed Better Assessment Science Integrating Nonpoint Sources (BASINS) as a decision support tool for assessing pollution and to guide the decision-making process for improving water quality. One of the models in BASINS, the Hydrological Simulation Program-Fortran (HSPF), computes continuous streamflow rates and pollutant concentration at each basin outlet. By design, precipitation and other meteorological data from weather stations serve as standard model input. In practice, these stations may be unable to capture the spatial heterogeneity of precipitation events, especially if they are few and far between. An attempt was made to resolve this issue by substituting station data with NASA-modified/NOAA precipitation data. Using these data within HSPF, streamflow was calculated for seven watersheds in the Chesapeake Bay Basin during low flow periods, convective storm periods, and annual flows. In almost every case, the modeling performance of HSPF increased when using the NASA-modified precipitation data, resulting in better streamflow statistics and, potentially, in improved water quality assessment.

  11. Lognormal Kalman filter for assimilating phase space density data in the radiation belts

    NASA Astrophysics Data System (ADS)

    Kondrashov, D.; Ghil, M.; Shprits, Y.

    2011-11-01

    Data assimilation combines a physical model with sparse observations and has become an increasingly important tool for scientists and engineers in the design, operation, and use of satellites and other high-technology systems in the near-Earth space environment. Of particular importance is predicting fluxes of high-energy particles in the Van Allen radiation belts, since these fluxes can damage spaceborne platforms and instruments during strong geomagnetic storms. In transiting from a research setting to operational prediction of these fluxes, improved data assimilation is of the essence. The present study is motivated by the fact that phase space densities (PSDs) of high-energy electrons in the outer radiation belt—both simulated and observed—are subject to spatiotemporal variations that span several orders of magnitude. Standard data assimilation methods that are based on least squares minimization of normally distributed errors may not be adequate for handling the range of these variations. We propose herein a modification of Kalman filtering that uses a log-transformed, one-dimensional radial diffusion model for the PSDs and includes parameterized losses. The proposed methodology is first verified on model-simulated, synthetic data and then applied to actual satellite measurements. When the model errors are sufficiently smaller than the observational errors, our methodology can significantly improve analysis and prediction skill for the PSDs compared to those of the standard Kalman filter formulation. This improvement is documented by monitoring the variance of the innovation sequence.
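The log-transform idea can be shown with a toy scalar filter (an illustration only; the paper's forward model is a radial diffusion equation with parameterized losses, not the persistence model used here): the state is log-PSD, and each observation is log-transformed before the standard Kalman update.

```python
import numpy as np

def log_kalman_step(x_log, P, psd_obs, Q, R):
    """One scalar Kalman step on log-transformed phase space density.
    x_log, P: previous analysis mean/variance in log space;
    psd_obs: raw (untransformed) PSD observation;
    Q, R: model and (log-space) observation error variances."""
    x_f, P_f = x_log, P + Q          # forecast: persistence + model error
    z = np.log(psd_obs)              # assimilate in log space
    K = P_f / (P_f + R)              # Kalman gain
    x_a = x_f + K * (z - x_f)        # analysis mean
    P_a = (1.0 - K) * P_f            # analysis variance
    return x_a, P_a
```

Because the update is linear in log space, multiplicative errors spanning orders of magnitude in PSD become additive, normally distributed errors, which is what the standard least-squares machinery assumes.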

  12. Nutrition Standards for Away-from-home Foods in the United States

    PubMed Central

    Cohen, Deborah A.; Bhatia, Rajiv

    2012-01-01

    Away-from-home foods are regulated with respect to the prevention of food-borne diseases and potential contaminants, but not for their contribution to dietary-related chronic diseases. Away-from-home foods have more calories, salt, sugar, and fat and provide fewer fruits and vegetables than recommended by national nutrition guidelines; thus, frequent consumption of away-from-home foods contributes to obesity, hypertension, diabetes, heart disease, and cancer. In light of this, many localities are already adopting regulations or sponsoring programs to improve the quality of away-from-home foods. We review the rationale for developing nutritional performance standards for away-from-home foods in light of limited human capacity to regulate intake or physiologically compensate for a poor diet. We offer a set of model performance standards to be considered as a new area of environmental regulation. Models for voluntary implementation of consumer standards exist in the environmental domain and may be useful templates for implementation. Implementing such standards, whether voluntarily or via regulations, will require addressing a number of practical and ideological challenges. Politically, regulatory standards contradict the belief that adults should be able to navigate dietary risks in away-from-home settings unaided. PMID:22329431

  13. Clinical utility and development of biomarkers in invasive aspergillosis.

    PubMed

    Patterson, Thomas F

    2011-01-01

    The diagnosis of invasive aspergillosis remains very difficult, and there are limited treatment options for the disease. Pre-clinical models have been used to evaluate the diagnosis and treatment of Aspergillus infection and to assess the pathogenicity and virulence of the organism. Extensive efforts in Aspergillus research have significantly expanded the genomic information about this microorganism. The standardization of animal models of invasive aspergillosis can be used to enhance the evaluation of genomic information about the organism to improve the diagnosis and treatment of invasive aspergillosis. One approach to this process has been the award of a contract by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health to establish and standardize animal models of invasive aspergillosis for the development of new diagnostic technologies for both pulmonary and disseminated Aspergillus infection. This work utilizes molecular approaches for the genetic manipulation of Aspergillus strains that can be tested in animal-model systems to establish new diagnostic targets and tools. Studies have evaluated the performance characteristics of assays for cell-wall antigens of Aspergillus including galactomannan and beta-D-glucan, as well as for DNA targets in the organism, through PCR. New targets, such as proteomic and genomic approaches, and novel detection methods, such as point-of-care lateral-flow devices, have also been evaluated. The goal of this paper is to provide a framework for evaluating genomic targets in animal models to improve the diagnosis and treatment of invasive aspergillosis toward ultimately improving the outcomes for patients with this frequently fatal infection.

  14. Lower limb estimation from sparse landmarks using an articulated shape model.

    PubMed

    Zhang, Ju; Fernandez, Justin; Hislop-Jambrich, Jacqui; Besier, Thor F

    2016-12-08

    Rapid generation of lower limb musculoskeletal models is essential for clinically applicable patient-specific gait modeling. Estimation of muscle and joint contact forces requires accurate representation of bone geometry and pose, as well as their muscle attachment sites, which define muscle moment arms. Motion-capture is a routine part of gait assessment but contains relatively sparse geometric information. Standard methods for creating customized models from motion-capture data scale a reference model without considering natural shape variations. We present an articulated statistical shape model of the left lower limb with embedded anatomical landmarks and muscle attachment regions. This model is used in an automatic workflow, implemented in an easy-to-use software application, that robustly and accurately estimates realistic lower limb bone geometry, pose, and muscle attachment regions from seven commonly used motion-capture landmarks. Estimated bone models were validated on noise-free marker positions to have a lower (p=0.001) surface-to-surface root-mean-squared error of 4.28 mm, compared to 5.22 mm using standard isotropic scaling. Errors at a variety of anatomical landmarks were also lower (8.6 mm versus 10.8 mm, p=0.001). We improve upon standard lower limb model scaling methods with shape model-constrained realistic bone geometries, regional muscle attachment sites, and higher accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    NASA Astrophysics Data System (ADS)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
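The telescoping-sum structure of MLMC can be sketched for a toy SDE (geometric Brownian motion standing in for the turbulent-velocity model; all names and parameters are illustrative): on each level, a fine path and a coarse path are driven by the same Brownian increments, so the level corrections have small variance and need few samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
    """Coupled fine/coarse Euler-Maruyama paths of dX = mu X dt + sigma X dW.
    The coarse path takes half as many steps, reusing summed fine increments."""
    n_fine = 2 ** level
    dt = T / n_fine
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(n_fine // 2):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        xf += mu * xf * dt + sigma * xf * dw1                 # two fine steps
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * (2 * dt) + sigma * xc * (dw1 + dw2)   # one coarse step
    return xf, xc

def mlmc_estimate(max_level, n_paths):
    """Estimate E[X_T] as a telescoping sum over discretization levels."""
    est = np.mean(euler_pair(1, n_paths)[0])   # coarsest level alone
    for level in range(2, max_level + 1):
        xf, xc = euler_pair(level, n_paths)
        est += np.mean(xf - xc)                # level correction
    return est
```

In a production code the number of samples per level would be chosen adaptively from the estimated level variances, which is where the asymptotic cost advantage over standard Monte Carlo comes from; the fixed `n_paths` here keeps the sketch short.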

  16. Developing and Validating Path-Dependent Uncertainty Estimates for use with the Regional Seismic Travel Time (RSTT) Model

    NASA Astrophysics Data System (ADS)

    Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.

    2016-12-01

The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but that account for neither path-dependent variations in error nor the structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately represent areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded in a statistical random effects model that captures distance, path and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are that it be both readily usable by the standard RSTT user and able to improve travel time uncertainty estimates for location. We have successfully tested using the new error model for Pn phases and will demonstrate the method and validation of the error model for Sn, Pg, and Lg phases.
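For concreteness, a 1D, distance-dependent error model of the kind the new approach improves upon amounts to a fast lookup of travel-time uncertainty versus epicentral distance. The sketch below uses an invented distance/sigma table purely for illustration; the real RSTT uncertainty curves are phase-specific and derived from data:

```python
import numpy as np

# Hypothetical 1D uncertainty table: sigma grows with epicentral distance.
dist_deg  = np.array([0.0, 5.0, 10.0, 15.0])   # epicentral distance (deg)
sigma_sec = np.array([0.9, 1.2,  1.6,  2.1])   # travel-time sigma (s)

def travel_time_sigma(distance_deg):
    """Fast 1D lookup: linear interpolation, clamped at the table ends."""
    return float(np.interp(distance_deg, dist_deg, sigma_sec))

sigma = travel_time_sigma(7.5)   # ≈ 1.4 s for this made-up table
```

The speed of such a lookup is why 1D models are attractive inside location algorithms; the trade-off, as the abstract notes, is that two paths of the same length through very different structure receive the same uncertainty.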

  17. Improving the Calibration of the SN Ia Anchor Datasets with a Bayesian Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Rubin, David

    2018-01-01

    Inter-survey calibration remains one of the largest systematic uncertainties in SN Ia cosmology today. Ideally, each survey would measure their system throughputs and observe well characterized spectrophotometric standard stars, but many important surveys have not done so. For these surveys, we calibrate using tertiary survey stars tied to SDSS and Pan-STARRS. We improve on previous efforts by taking the spatially variable response of each telescope/camera into account, and using improved color transformations in the surveys’ natural instrumental photometric system. We use a global hierarchical model of the data, automatically providing a covariance matrix of magnitude offsets and bandpass shifts which reduces the systematic uncertainty in inter-survey calibration, thereby providing better cosmological constraints.

  18. Revision of the design of a standard for the dimensions of school furniture.

    PubMed

    Molenbroek, J F M; Kroon-Ramaekers, Y M T; Snijders, C J

    2003-06-10

In this study an anthropometric design process was followed. The aim was to improve the fit of school furniture sizes for European children. It was demonstrated statistically that the draft of a European standard does not cover the target population. No literature on design criteria for sizes exists, and in practice it is common to calculate the fit for only the mean values (P50). The calculations reported here used body dimensions of Dutch children, measured by the authors' department, together with data from German and British national standards. A design process was followed that contains several steps, including: target group, anthropometric model and percentage exclusion. The criteria developed in this study are (1) a fit on the basis of 1% exclusion (P1 or P99), and (2) a prescription based on popliteal height. Based on this new approach it was concluded that prescription of a set size should be based on popliteal height rather than body height. The draft standard, prEN 1729, can be improved with this approach. A European standard for school furniture should include the exception that for Dutch children an extra large size is required.
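The 1% exclusion criterion can be made concrete with a short percentile calculation. The popliteal-height mean and standard deviation below are invented for illustration, not the authors' Dutch measurement data:

```python
from statistics import NormalDist

# Hypothetical popliteal-height distribution for one age group (cm).
popliteal = NormalDist(mu=40.0, sigma=2.5)

# A seat sized for the P1..P99 range excludes only 1% at each end,
# rather than fitting just the mean (P50) as is common in practice.
p1  = popliteal.inv_cdf(0.01)
p99 = popliteal.inv_cdf(0.99)
```

Anchoring furniture sizes to P1/P99 of popliteal height, rather than to mean body height, is exactly the design criterion the study proposes.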

  19. [Satisfaction survey of CatSalut-PLAENSA(©). Strategies to incorporate citizens' perception of the quality of the service in health policies].

    PubMed

    Aguado-Blázquez, Hortensia; Cerdà-Calafat, Ismael; Argimon-Pallàs, Josep Maria; Murillo-Fort, Carles; Canela-Soler, Jaume

    2011-12-01

The aim of this work is to present the strategies, activities and results of the CatSalut-PLAENSA(©) satisfaction survey plan 2003-2010, which is making progress in improving the quality of health services. Since 2003, CatSalut has had at its disposal the PLAENSA(©) satisfaction survey plan, a tool for assessment and improvement proposals addressed to the insurance services provided by contracted public entities. The plan follows 3 key strategies: systematic and objective measurement of policyholders' satisfaction with the services received; release of improvement proposals according to a standardized model, including standardized monitoring; and promotion of equity through dissemination among health centres and territories. Current assessments provided by the insured about most health services have already been collected, leading to more than 2,500 improvement projects which are being developed by the providers of the 7 health regions of Catalonia. Copyright © 2011 Elsevier España S.L. All rights reserved.

  20. Risk assessment of manual material handling activities (case study: PT BRS Standard Industry)

    NASA Astrophysics Data System (ADS)

    Deviani; Triyanti, V.

    2017-12-01

Moving material manually has the potential to injure workers, and the risk of injury increases when working conditions are neglected. The purpose of this study is to assess and analyze the injury risk level of manual material handling activities, as well as to improve the working conditions. The observed manual material handling activities are pole lifting and goods loading. These activities were analyzed using the Job Strain Index (JSI) method, the Rapid Entire Body Assessment (REBA), and Chaffin’s 2D Planar Static Model. The results show that most workers performing these activities face a high risk level, with JSI and REBA scores exceeding 9 points. For some activities, the estimated compression forces in the lumbar area also exceed the standard limit of 3400 N. In response, several suggestions for improvement were made: improving the composition of packing, improving body posture, and creating guideline posters.
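The screening thresholds cited in the abstract (JSI and REBA scores above 9, and the 3400 N lumbar-compression limit) can be combined into a simple flagging helper. This is a hypothetical reader's sketch, not the authors' tooling:

```python
COMPRESSION_LIMIT_N = 3400  # lumbar-compression standard limit from the abstract
SCORE_THRESHOLD = 9         # JSI/REBA level above which risk is rated high

def risk_flags(jsi, reba, compression_n):
    """Flag a manual-handling task against the three cited thresholds."""
    return {
        "jsi_high": jsi > SCORE_THRESHOLD,
        "reba_high": reba > SCORE_THRESHOLD,
        "compression_exceeded": compression_n > COMPRESSION_LIMIT_N,
    }

# A task like the observed pole lifting would trip all three flags.
flags = risk_flags(jsi=10.5, reba=11, compression_n=3600)
```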

  1. Voluntary wheel running improves recovery from a moderate spinal cord injury.

    PubMed

    Engesser-Cesar, Christie; Anderson, Aileen J; Basso, D Michele; Edgerton, V R; Cotman, Carl W

    2005-01-01

Recently, locomotor training has been shown to improve overground locomotion in patients with spinal cord injury (SCI). This has triggered renewed interest in the role of exercise in rehabilitation after SCI. However, there are no mouse models for voluntary exercise and recovery of function following SCI. Here, we report that voluntary wheel running improves recovery from SCI in mice. C57Bl/10 female mice received a 60-kdyne T9 contusion injury with an IH impactor after 3 weeks of voluntary wheel running or 3 weeks of standard single housing conditions. Following a 7-day recovery period, running mice were returned to their running wheels. Weekly open-field testing measured locomotor recovery using the Basso, Beattie and Bresnahan (BBB) locomotor rating scale and the Basso Mouse Scale (BMS), a scale recently developed specifically for mice. Initial experiments using standard rung wheels showed that wheel running impaired recovery, but subsequent experiments using a modified flat-surface wheel showed improved recovery with exercise. By 14 days post SCI, the modified flat-surface running group had significantly higher BBB and BMS scores than the sedentary group. A repeated measures ANOVA showed that locomotor recovery of modified flat-surface running mice was significantly improved compared to sedentary animals (p < 0.05). Locomotor assessment using a ladder beam task also showed a significant improvement in the modified flat-surface runners (p < 0.05). Finally, fibronectin staining showed no significant difference in lesion size between the two groups. These data represent the first mouse model showing that voluntary exercise improves recovery after SCI.

  2. Using a discrete-event simulation to balance ambulance availability and demand in static deployment systems.

    PubMed

    Wu, Ching-Han; Hwang, Kevin P

    2009-12-01

To improve ambulance response time, matching ambulance availability with emergency demand is crucial. To maintain the standard of 90% of response times within 9 minutes, the authors introduce a discrete-event simulation method to estimate the threshold for expanding the ambulance fleet when demand increases and to find the optimal dispatching strategies when provisional events create temporary decreases in ambulance availability. The simulation model was developed with information from the literature. Although the development was theoretical, the model was validated on the emergency medical services (EMS) system of Tainan City. The data are divided: one part is for model development, and the other for validation. For increasing demand, the effect on response time was modeled as call arrival rates increased. For temporary availability decreases, the authors simulated all possible alternatives of ambulance deployment in accordance with the number of out-of-routine-duty ambulances and the durations of three types of mass gatherings: marathon races (06:00-10:00 hr), rock concerts (18:00-22:00 hr), and New Year's Eve parties (20:00-01:00 hr). Statistical analysis confirmed that the model reasonably represented the actual Tainan EMS system. The response-time standard could not be reached when the incremental ratio of call arrivals exceeded 56%, which is the threshold for the Tainan EMS system to expand its ambulance fleet. When provisional events created temporary availability decreases, the Tainan EMS system could spare at most two ambulances from the standard configuration, except between 20:00 and 01:00, when it could spare three. The model also demonstrated that the current Tainan EMS has two excess ambulances that could be dropped. The authors suggest dispatching strategies to minimize response times in routine daily emergencies. Strategies of capacity management based on this model improved response times. The more ambulances that are out of routine duty, the better the performance of the optimal strategies based on this model.
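A discrete-event model of this kind can be prototyped compactly. The sketch below is a generic single-station queueing illustration with invented parameters (Poisson call arrivals, exponential service times, a fixed travel time), not the authors' validated Tainan model, which accounted for deployment configurations and time-of-day effects:

```python
import heapq
import random

def simulate(n_ambulances, arrival_rate, service_rate, travel_time,
             n_calls=20000, seed=1):
    """Fraction of calls answered within the 9-minute response standard."""
    rng = random.Random(seed)
    free_at = [0.0] * n_ambulances        # earliest free time per ambulance
    heapq.heapify(free_at)
    t, within = 0.0, 0
    for _ in range(n_calls):
        t += rng.expovariate(arrival_rate)    # next call (Poisson arrivals)
        start = max(t, free_at[0])            # wait for first free ambulance
        if (start - t) + travel_time <= 9.0:  # queueing delay + travel
            within += 1
        # ambulance stays busy for the trip plus an exponential service time
        heapq.heapreplace(free_at, start + travel_time +
                          rng.expovariate(service_rate))
    return within / n_calls

# e.g. 5 ambulances, one call per 10 min, 30-min mean service, 5-min travel
coverage = simulate(5, arrival_rate=0.1, service_rate=1 / 30, travel_time=5.0)
```

Sweeping `arrival_rate` upward until `coverage` drops below 0.9 is the same kind of threshold experiment the abstract describes for fleet expansion.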

  3. A new simplified volume-loaded heterotopic rabbit heart transplant model with improved techniques and a standard operating procedure.

    PubMed

    Lu, Wei; Zheng, Jun; Pan, Xu-Dong; Li, Bing; Zhang, Jin-Wei; Wang, Long-Fei; Sun, Li-Zhong

    2015-04-01

The classic non-working (NW) heterotopic heart transplant (HTX) model in rodents has been widely used for research related to immunology, graft rejection, evaluation of immunosuppressive therapies and organ preservation. However, unloaded models are considered unsuitable for some studies. Accordingly, we have constructed a volume-loaded (VL) model with a new and simple technique. Thirty male New Zealand White rabbits were randomly divided into two groups, group NW with 14 rabbits and group VL with 16 rabbits, which served as donors and recipients. We created a large and nonrestrictive shunt to provide the left heart with sufficient preload. The donor superior vena cava and ascending aorta (AO) were anastomosed to the recipient abdominal aorta (AAO) and inferior vena cava (IVC), respectively. No animals suffered paralysis, pneumonia or lethal bleeding. Recipients' mortality and morbidity were 6.7% (1/15) and 13.3% (2/15), respectively. The cold ischemia time in group VL was slightly longer than that in group NW. The maximal aortic velocity (MAV) of the donor heart was approximately equivalent to half that of the native heart in group VL. Moreover, a similar result was obtained for the late diastolic mitral inflow velocity between the donor heart and the native heart in group VL. Echocardiography (ECHO) showed a bidirectional flow in the donor SVC of the VL model: inflow during diastole and outflow during systole. PET-CT imaging showed that the standard uptake value (SUV) of the allograft was equal to that of the native heart in both groups on postoperative day 3. We have developed a new VL model in rabbits, which imitates a native heart hemodynamically while requiring only a minor additional procedure. The surgical technique is simple compared with currently used HTX models. We also developed a standard operating procedure that significantly improved graft and recipient survival rates. This study may be useful for investigations of transplantation in which a working model is required.

  4. CURVILINEAR FINITE ELEMENT MODEL FOR SIMULATING TWO-WELL TRACER TESTS AND TRANSPORT IN STRATIFIED AQUIFERS

    EPA Science Inventory

    The problem of solute transport in steady nonuniform flow created by a recharging and discharging well pair is investigated. Numerical difficulties encountered with the standard Galerkin formulations in Cartesian coordinates are illustrated. An improved finite element solution st...

  5. "Imitatio" Revived: A Curriculum Based upon Mimesis.

    ERIC Educational Resources Information Center

    Hunt, Maurice

    1988-01-01

    Argues that reintroducing the classical principle of imitation based upon single, model sentences can be highly beneficial by allowing the student to practice handling the sentence, directing attention to grammatical constructions, enlarging vocabulary, improving spelling, and filling the mind with mature standards of prose. (RS)

  6. 75 FR 12753 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-17

    ... effective at improving health care quality. While evidence-based approaches for decisionmaking have become standard in healthcare, this has been limited in laboratory medicine. No single- evidence-based model for... (LMBP) initiative to develop new systematic evidence reviews methods for making evidence-based...

  7. Integrating Dynamic Data and Sensors with Semantic 3D City Models in the Context of Smart Cities

    NASA Astrophysics Data System (ADS)

    Chaturvedi, K.; Kolbe, T. H.

    2016-10-01

Smart cities provide effective integration of human, physical and digital systems operating in the built environment. Advancements in city and landscape models, sensor web technologies, and simulation methods play a significant role in city analyses, in improving the quality of life of citizens, and in the governance of cities. Semantic 3D city models can provide substantial benefits and can become a central information backbone for smart city infrastructures. However, current-generation semantic 3D city models are static in nature and do not support dynamic properties and sensor observations. In this paper, we propose a new concept called Dynamizer that allows representing highly dynamic data and provides a method for injecting dynamic variations of city object properties into the static representation. The approach also provides the capability to model complex patterns based on statistics and general rules, as well as real-time sensor observations. The concept is implemented as an Application Domain Extension for the CityGML standard. However, it could also be applied to other GML-based application schemas, including the European INSPIRE data themes and national standards for topography and cadastres like the British Ordnance Survey MasterMap or the German cadastre standard ALKIS.

  8. STAR -Space Time Asymmetry Research

    NASA Astrophysics Data System (ADS)

    van Zoest, Tim; Braxmaier, Claus; Schuldt, Thilo; Allab, Mohammed; Theil, Stephan; Pelivan, Ivanka; Herrmann, Sven; Lämmerzahl, Claus; Peters, Achim; Mühle, Katharina; Wicht, Andreas; Nagel, Moritz; Kovalchuk, Evgeny; Döringshoff, Klaus; Dittus, Hansjörg

    STAR is a proposed satellite mission that aims for significantly improved tests of fundamental space-time symmetry and the foundations of special and general relativity. In total STAR comprises a series of five subsequent missions. The STAR1 mission will measure the constancy of the speed of light to one part in 10^19 and derive the Kennedy-Thorndike (KT) coefficient of the Mansouri-Sexl test theory to 7×10^-10. The KT experiment will be performed by comparison of an iodine standard with a highly stable cavity made from ultra-low-expansion (ULE) ceramics. With an orbital velocity of 7 km/s, the sensitivity to a boost-dependent violation of Lorentz invariance, as modeled by the KT term in the Mansouri-Sexl test theory or a Lorentz-violating extension of the standard model (SME), will be significantly enhanced compared to Earth-based experiments. The low-noise space environment will additionally enhance the measurement precision, such that an overall improvement by a factor of 400 over current Earth-based experiments is expected.

  9. Improvements to the YbF electron electric dipole moment experiment

    NASA Astrophysics Data System (ADS)

    Sauer, B. E.; Rabey, I. M.; Devlin, J. A.; Tarbutt, M. R.; Ho, C. J.; Hinds, E. A.

    2017-04-01

The standard model of particle physics predicts that the permanent electric dipole moment (EDM) of the electron is very nearly zero. Many extensions to the standard model predict an electron EDM just below current experimental limits. We are currently working to improve the sensitivity of the Imperial College YbF experiment. We have implemented combined laser-radiofrequency pumping techniques which both increase the number of molecules that participate in the EDM experiment and increase the probability of detection. Combined, these techniques give nearly two orders of magnitude increase in experimental sensitivity. At this enhanced sensitivity, magnetic effects that were previously negligible become important. We have developed a new way to construct the electrodes for the electric field plates which minimizes the effect of magnetic Johnson noise. The new YbF experiment is expected to be comparable in sensitivity to the most sensitive measurements of the electron EDM to date. We will also discuss laser cooling techniques which promise an even larger increase in sensitivity.

  10. Sixfold improved single particle measurement of the magnetic moment of the antiproton.

    PubMed

    Nagahama, H; Smorra, C; Sellner, S; Harrington, J; Higuchi, T; Borchert, M J; Tanaka, T; Besirli, M; Mooser, A; Schneider, G; Blaum, K; Matsuda, Y; Ospelkaus, C; Quint, W; Walz, J; Yamazaki, Y; Ulmer, S

    2017-01-18

Our current understanding of the Universe comes, among others, from particle physics and cosmology. In particle physics an almost perfect symmetry between matter and antimatter exists. On cosmological scales, however, a striking matter/antimatter imbalance is observed. This contradiction inspires comparisons of the fundamental properties of particles and antiparticles with high precision. Here we report on a measurement of the g-factor of the antiproton with a fractional precision of 0.8 parts per million at 95% confidence level. Our value g(p̄)/2 = 2.7928465(23) outperforms the previous best measurement by a factor of 6. The result is consistent with our proton g-factor measurement g(p)/2 = 2.792847350(9), and therefore agrees with the fundamental charge, parity, time (CPT) invariance of the Standard Model of particle physics. Additionally, our result improves limits on coefficients of the standard model extension, which parameterizes the sensitivity of experiments to CPT violation, by up to a factor of 20.
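As a quick reader's check (not part of the experiment's analysis), the quoted parenthetical uncertainty reproduces the stated fractional precision: (23) on the last digits of 2.7928465 means an absolute uncertainty of 0.0000023.

```python
# The (23) notation gives the uncertainty in the final quoted digits.
value = 2.7928465
sigma = 0.0000023

fractional_ppm = sigma / value * 1e6   # ≈ 0.82 parts per million
```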

  11. Sixfold improved single particle measurement of the magnetic moment of the antiproton

    PubMed Central

    Nagahama, H.; Smorra, C.; Sellner, S.; Harrington, J.; Higuchi, T.; Borchert, M. J.; Tanaka, T.; Besirli, M.; Mooser, A.; Schneider, G.; Blaum, K.; Matsuda, Y.; Ospelkaus, C.; Quint, W.; Walz, J.; Yamazaki, Y.; Ulmer, S.

    2017-01-01

Our current understanding of the Universe comes, among others, from particle physics and cosmology. In particle physics an almost perfect symmetry between matter and antimatter exists. On cosmological scales, however, a striking matter/antimatter imbalance is observed. This contradiction inspires comparisons of the fundamental properties of particles and antiparticles with high precision. Here we report on a measurement of the g-factor of the antiproton with a fractional precision of 0.8 parts per million at 95% confidence level. Our value g(p̄)/2 = 2.7928465(23) outperforms the previous best measurement by a factor of 6. The result is consistent with our proton g-factor measurement g(p)/2 = 2.792847350(9), and therefore agrees with the fundamental charge, parity, time (CPT) invariance of the Standard Model of particle physics. Additionally, our result improves limits on coefficients of the standard model extension, which parameterizes the sensitivity of experiments to CPT violation, by up to a factor of 20. PMID:28098156

  12. Acoustic Tests of Lorentz Symmetry Using Quartz Oscillators

    DOE PAGES

    Lo, Anthony; Haslinger, Philipp; Mizrachi, Eli; ...

    2016-02-24

    Here we propose and demonstrate a test of Lorentz symmetry based on new, compact, and reliable quartz oscillator technology. Violations of Lorentz invariance in the matter and photon sectors of the standard model extension generate anisotropies in particles' inertial masses and the elastic constants of solids, giving rise to measurable anisotropies in the resonance frequencies of acoustic modes in solids. A first realization of such a "phonon-sector" test of Lorentz symmetry using room-temperature stress-compensated-cut crystals yields 120 h of data at a frequency resolution of 2.4 × 10^-15 and a limit of c̄_Q^n = (-1.8 ± 2.2) × 10^-14 GeV on the most weakly constrained neutron-sector c coefficient of the standard model extension. Future experiments with cryogenic oscillators promise significant improvements in accuracy, opening up the potential for improved limits on Lorentz violation in the neutron, proton, electron, and photon sectors.

  13. American Academy of Sleep Medicine (AASM) Position Paper for the Use of Telemedicine for the Diagnosis and Treatment of Sleep Disorders

    PubMed Central

    Singh, Jaspal; Badr, M. Safwan; Diebert, Wendy; Epstein, Lawrence; Hwang, Dennis; Karres, Valerie; Khosla, Seema; Mims, K. Nicole; Shamim-Uzzaman, Afifa; Kirsch, Douglas; Heald, Jonathan L.; McCann, Kathleen

    2015-01-01

    The American Academy of Sleep Medicine's (AASM) Taskforce on Sleep Telemedicine supports telemedicine as a means of advancing patient health by improving access to the expertise of Board-Certified Sleep Medicine Specialists. However, such access improvement needs to be anchored in attention to quality and value in diagnosing and treating sleep disorders. Telemedicine is also useful to promote professionalism through patient care coordination and communication between other specialties and sleep medicine. Many of the principles and key concepts adopted here are based on U.S. industry standards, with special consideration given to the body of work by the American Telemedicine Association (http://www.americantelemed.org/), and abide by standards endorsed by the American Medical Association (http://www.ama-assn.org/). Practitioners who wish to integrate sleep telemedicine into their practice should have a clear understanding of the salient issues, key terminology, and the following recommendations from the AASM. 
The Taskforce recommends the following: • Clinical care standards for telemedicine services should mirror those of live office visits, including all aspects of diagnosis and treatment decisions as would be reasonably expected in traditional office-based encounters. • Clinical judgment should be exercised when determining the scope and extent of telemedicine applications in the diagnosis and treatment of specific patients and sleep disorders. • Live Interactive Telemedicine for sleep disorders, if utilized in a manner consistent with the principles outlined in this document, should be recognized and reimbursed in a manner competitive or comparable with traditional in-person visits. • Roles, expectations, and responsibilities of providers involved in the delivery of sleep telemedicine should be defined, including those at originating sites and distant sites. • The practice of telemedicine should aim to promote a care model in which sleep specialists, patients, primary care providers, and other members of the healthcare team aim to improve the value of healthcare delivery in a coordinated fashion. • Appropriate technical standards should be upheld throughout the telemedicine care delivery process, at both the originating and distant sites, and specifically meet the standards set forth by the Health Insurance Portability and Accountability Act (HIPAA). • Methods that aim to improve the utility of telemedicine exist and should be explored, including the utilization of patient presenters, local resources and providers, adjunct testing, and add-on technologies. • Quality Assurance processes should be in place for telemedicine care delivery models that aim to capture process measures, patient outcomes, and patient/provider experiences with the model(s) employed. • Time for data management, quality processes, and other aspects of care delivery related to telemedicine encounters should be recognized in value-based care delivery models. • The use of telemedicine services and its equipment should adhere to strict professional and ethical standards so as not to violate the intent of the telemedicine interaction while aiming to improve overall patient access, quality, and/or value of care. • When billing for telemedicine services, it is recommended that patients, providers, and others rendering services understand payor reimbursements, and that there be financial transparency throughout the process. • Telemedicine utilization for sleep medicine is likely to rapidly expand, as are broader telehealth applications in general; further research into the impact and outcomes of these are needed. This document serves as a resource by defining issues and terminology and explaining recommendations. However, it is not intended to supersede regulatory or credentialing recommendations and guidelines. It is intended to support and be consistent with professional and ethical standards of the profession. Citation: Singh J, Badr MS, Diebert W, Epstein L, Hwang D, Karres V, Khosla S, Mims KN, Shamim-Uzzaman A, Kirsch D, Heald JL, McCann K. American Academy of Sleep Medicine (AASM) position paper for the use of telemedicine for the diagnosis and treatment of sleep disorders. J Clin Sleep Med 2015;11(10):1187–1198. PMID:26414983

  14. American Academy of Sleep Medicine (AASM) Position Paper for the Use of Telemedicine for the Diagnosis and Treatment of Sleep Disorders.

    PubMed

    Singh, Jaspal; Badr, M Safwan; Diebert, Wendy; Epstein, Lawrence; Hwang, Dennis; Karres, Valerie; Khosla, Seema; Mims, K Nicole; Shamim-Uzzaman, Afifa; Kirsch, Douglas; Heald, Jonathan L; McCann, Kathleen

    2015-10-15

    The American Academy of Sleep Medicine's (AASM) Taskforce on Sleep Telemedicine supports telemedicine as a means of advancing patient health by improving access to the expertise of Board-Certified Sleep Medicine Specialists. However, such access improvement needs to be anchored in attention to quality and value in diagnosing and treating sleep disorders. Telemedicine is also useful to promote professionalism through patient care coordination and communication between other specialties and sleep medicine. Many of the principles and key concepts adopted here are based on U.S. industry standards, with special consideration given to the body of work by the American Telemedicine Association (http://www.americantelemed.org/), and abide by standards endorsed by the American Medical Association (http://www.ama-assn.org/). Practitioners who wish to integrate sleep telemedicine into their practice should have a clear understanding of the salient issues, key terminology, and the following recommendations from the AASM. The Taskforce recommends the following: • Clinical care standards for telemedicine services should mirror those of live office visits, including all aspects of diagnosis and treatment decisions as would be reasonably expected in traditional office-based encounters. • Clinical judgment should be exercised when determining the scope and extent of telemedicine applications in the diagnosis and treatment of specific patients and sleep disorders. • Live Interactive Telemedicine for sleep disorders, if utilized in a manner consistent with the principles outlined in this document, should be recognized and reimbursed in a manner competitive or comparable with traditional in-person visits. • Roles, expectations, and responsibilities of providers involved in the delivery of sleep telemedicine should be defined, including those at originating sites and distant sites. 
• The practice of telemedicine should aim to promote a care model in which sleep specialists, patients, primary care providers, and other members of the healthcare team aim to improve the value of healthcare delivery in a coordinated fashion. • Appropriate technical standards should be upheld throughout the telemedicine care delivery process, at both the originating and distant sites, and specifically meet the standards set forth by the Health Insurance Portability and Accountability Act (HIPAA). • Methods that aim to improve the utility of telemedicine exist and should be explored, including the utilization of patient presenters, local resources and providers, adjunct testing, and add-on technologies. • Quality Assurance processes should be in place for telemedicine care delivery models that aim to capture process measures, patient outcomes, and patient/provider experiences with the model(s) employed. • Time for data management, quality processes, and other aspects of care delivery related to telemedicine encounters should be recognized in value-based care delivery models. • The use of telemedicine services and its equipment should adhere to strict professional and ethical standards so as not to violate the intent of the telemedicine interaction while aiming to improve overall patient access, quality, and/or value of care. • When billing for telemedicine services, it is recommended that patients, providers, and others rendering services understand payor reimbursements, and that there be financial transparency throughout the process. • Telemedicine utilization for sleep medicine is likely to rapidly expand, as are broader telehealth applications in general; further research into the impact and outcomes of these are needed. This document serves as a resource by defining issues and terminology and explaining recommendations. However, it is not intended to supersede regulatory or credentialing recommendations and guidelines. 
It is intended to support and be consistent with professional and ethical standards of the profession. © 2015 American Academy of Sleep Medicine.

  15. Building a Trustworthy Environmental Science Data Repository: Lessons Learned from the ORNL DAAC

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Santhana Vannan, S. K.; Boyer, A.; Beaty, T.; Deb, D.; Hook, L.

    2017-12-01

The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC, https://daac.ornl.gov) for biogeochemical dynamics is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers. The mission of the ORNL DAAC is to assemble, distribute, and provide data services for a comprehensive archive of terrestrial biogeochemistry and ecological dynamics observations and models to facilitate research, education, and decision-making in support of NASA's Earth Science. Since its establishment in 1994, the ORNL DAAC has continuously built itself into a trustworthy environmental science data repository, not only by ensuring the quality and usability of its data holdings, but also by optimizing its data publication and management processes. This paper describes the lessons learned from the ORNL DAAC's effort toward this goal. The ORNL DAAC has proactively implemented international community standards throughout its data management life cycle, including data publication, preservation, discovery, visualization, and distribution. Data files in standard formats, detailed documentation, and metadata following standard models are prepared to improve the usability and longevity of data products. Assignment of a Digital Object Identifier (DOI) ensures the identifiability and accessibility of every data product, including the different versions and revisions of its life cycle. The ORNL DAAC's data citation policy assures that data producers receive appropriate recognition for use of their products. Web service standards, such as OpenSearch and those of the Open Geospatial Consortium (OGC), promote the discovery, visualization, distribution, and integration of the ORNL DAAC's data holdings. Recently, the ORNL DAAC began efforts to optimize and standardize its data archival and data publication workflows to improve the efficiency and transparency of its data archival and management processes.

  16. Genetic analysis of a red tilapia (Oreochromis spp.) population undergoing three generations of selection for increased body weight at harvest.

    PubMed

    Hamzah, Azhar; Thoa, Ngo Phu; Nguyen, Nguyen Hong

    2017-11-01

    Quantitative genetic analysis was performed on 10,919 data records collected over three generations from the selection programme for increased body weight at harvest in red tilapia (Oreochromis spp.). They were offspring of 224 sires and 226 dams (50 sires and 60 dams per generation, on average). Linear mixed models were used to analyse body traits (weight, length, width and depth), whereas threshold generalised models assuming a probit distribution were employed to examine genetic inheritance of survival rate, sexual maturity and body colour. The estimates of heritability for the traits studied (body weight, standard length, body width, body depth, body colour, early sexual maturation and survival) across statistical models were moderate to high (0.13-0.45). Genetic correlations among body traits and survival were high and positive (0.68-0.96). Body length and width exhibited negative genetic correlations with body colour (-0.47 to -0.25). Sexual maturity was genetically correlated positively with measurements of body traits (weight and length). Direct and correlated genetic responses to selection were measured as estimated breeding values in each generation and expressed in genetic standard deviation units (σ_G). The cumulative improvement achieved for harvest body weight was 1.72 σ_G after three generations, or 12.5% per generation when the gain was expressed as a percentage of the base population. Selection for improved body weight also resulted in correlated increases in other body traits (length, width and depth) and survival rate (ranging from 0.25 to 0.81 genetic standard deviation units). Avoidance of black-spot parent matings also improved the overall red colour of the selected population.
It is concluded that the selective breeding programme for red tilapia has succeeded in achieving significant genetic improvement for a range of commercially important traits in this species, and the large genetic variation in body colour and survival also shows that there are prospects for future improvement of these traits in this population of red tilapia.
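
    The per-generation arithmetic behind those figures can be sketched in a few lines. The cumulative response of 1.72 σ_G over three generations is taken from the abstract; the genetic coefficient of variation used here is a hypothetical value, chosen only to show how a gain in σ_G units maps onto a percentage of the base mean.

```python
# Bookkeeping for the reported selection response (cumulative gain of
# 1.72 sigma_G over three generations, from the abstract).  The genetic
# coefficient of variation (sigma_G / base mean) below is hypothetical.
cumulative_gain_sd = 1.72
generations = 3
per_gen_sd = cumulative_gain_sd / generations      # ~0.57 sigma_G per generation

cv_genetic = 0.218   # hypothetical sigma_G / base-mean ratio
per_gen_pct = per_gen_sd * cv_genetic * 100
print(f"{per_gen_sd:.2f} sigma_G per generation ~ {per_gen_pct:.1f}% of the base mean")
```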

  17. Improving operating room productivity via parallel anesthesia processing.

    PubMed

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency for patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR, in parallel, increases the total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and outside the OR in a parallel manner. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, what the predicted end-of-day overtime would be. Modeling results show that parallel processing of regional anesthesia increases the average cases per day for all surgeons included in the study. The average increase was 0.42 surgeries per day. When it was assumed that three cases per day would be performed by all surgeons, the number of days going to overtime was reduced by 43 percent with parallel block placement. The overtime with parallel anesthesia was also projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; in practice, many days may include both regional and general anesthesia. Also, as a single-center case study, the research may have limited generalizability. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show the effects of practice change at a system-wide level.
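
    The capacity effect described above can be illustrated with a toy deterministic model, not the authors' simulation: with serial processing the regional block occupies the OR, while with parallel processing only surgery and turnover do. The 555-minute day comes from the abstract; all case durations are hypothetical placeholders.

```python
# Back-of-the-envelope OR capacity comparison for serial vs. parallel
# regional-anesthesia processing.  All durations are in minutes; the
# case-component times are illustrative assumptions only.
DAY_MINUTES = 555
block, surgery, turnover = 30, 90, 25     # hypothetical component durations

serial_case = block + surgery + turnover  # block performed inside the OR
parallel_case = surgery + turnover        # block performed outside, in parallel

serial_cases = DAY_MINUTES // serial_case
parallel_cases = DAY_MINUTES // parallel_case
print(serial_cases, parallel_cases)       # → 3 4
```

    Under these made-up durations, moving the block out of the OR adds one case to the day, the same qualitative effect the study reports.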

  18. A post-assembly genome-improvement toolkit (PAGIT) to obtain annotated genomes from contigs.

    PubMed

    Swain, Martin T; Tsai, Isheng J; Assefa, Samual A; Newbold, Chris; Berriman, Matthew; Otto, Thomas D

    2012-06-07

    Genome projects now produce draft assemblies within weeks owing to advanced high-throughput sequencing technologies. For milestone projects such as Escherichia coli or Homo sapiens, teams of scientists were employed to manually curate and finish these genomes to a high standard. Nowadays, this is not feasible for most projects, and the quality of genomes is generally of a much lower standard. This protocol describes software (PAGIT) that is used to improve the quality of draft genomes. It offers flexible functionality to close gaps in scaffolds, correct base errors in the consensus sequence and exploit reference genomes (if available) in order to improve scaffolding and generate annotations. The protocol is most accessible for bacterial and small eukaryotic genomes (up to 300 Mb), such as those of pathogenic bacteria, malaria parasites and parasitic worms. Applying PAGIT to an E. coli assembly takes ∼24 h: it doubles the average contig size and annotates over 4,300 gene models.

  19. Measuring The Neutron Lifetime to One Second Using in Beam Techniques

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan; NIST In Beam Lifetime Collaboration

    2013-10-01

    The decay of the free neutron is the simplest nuclear beta decay and is the prototype for charged current semi-leptonic weak interactions. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different than those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The technical improvements in the in-beam technique, and the path toward improving the precision of the new measurement will be discussed.

  20. An improved rocket ozonesonde (Rocoz-A). III - Northern mid-latitude ozone measurements from 1983 to 1985

    NASA Technical Reports Server (NTRS)

    Barnes, Robert A.; Chamberlain, Marcella A.; Parsons, Chester L.; Holland, Alfred C.

    1989-01-01

    The results of ozone measurements taken during flights of the Rocoz-A rocket ozonesonde at the NASA Wallops Flight Facility from August 1983 to September 1985 are presented. Nineteen profiles were obtained using Rocoz-A and electrochemical concentration cell ozonesondes, standard U.S. meteorological radiosondes, and Super-Loki datasondes. The results were found to agree with the Krueger and Minzner (1976) midlatitude ozone model for the 1976 U.S. Standard Atmosphere.

  1. An improved simulation of the 2015 El Niño event by optimally correcting the initial conditions and model parameters in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan

    2017-09-01

    Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize these model biases so that they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections: one a standard simulation, and the other an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using the CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated for Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.

  2. Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.

    PubMed

    Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin

    2017-05-18

    Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured by section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents, since keyword searches applied to complete unstructured documents produce many false-positive results. We concentrate on the processing of pathology reports as an example of unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables improved access and retrieval of relevant data. The data are expected to be stored in a standardized, structured way to make them accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. To enable a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. Information modelling using openEHR saves time in the modelling process, since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval on particular sections has the potential to save retrieval time and improve the accuracy of the retrieval.
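
    A section-sensitive query of the kind described can be sketched with standard XML tooling. The tag names below are invented for illustration, not the authors' openEHR archetype identifiers, and the Python standard library's XPath subset stands in for the XQuery used in the paper.

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for a structured pathology report; tag and attribute
# names are hypothetical, not actual openEHR archetype identifiers.
pehr = ET.fromstring("""
<pathology_report>
  <section name="macroscopy">Specimen 2.1 cm, well circumscribed.</section>
  <section name="microscopy">Invasive ductal carcinoma, grade 2.</section>
  <section name="diagnosis">IDC, pT2 pN0.</section>
</pathology_report>
""")

# Section-sensitive query: search only the diagnosis section rather than
# the whole document, avoiding false positives from other sections.
hits = [s.text for s in pehr.findall(".//section[@name='diagnosis']")
        if "IDC" in s.text]
print(hits)   # ['IDC, pT2 pN0.']
```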

  3. Audit of a Scientific Data Center for Certification as a Trustworthy Digital Repository: A Case Study

    NASA Astrophysics Data System (ADS)

    Downs, R. R.; Chen, R. S.

    2011-12-01

    Services that preserve and enable future access to scientific data are necessary to ensure that the data that are being collected today will be available for use by future generations of scientists. Many data centers, archives, and other digital repositories are working to improve their ability to serve as long-term stewards of scientific data. Trust in sustainable data management and preservation capabilities of digital repositories can influence decisions to use these services to deposit or obtain scientific data. Building on the Open Archival Information System (OAIS) Reference Model developed by the Consultative Committee for Space Data Systems (CCSDS) and adopted by the International Organization for Standardization as ISO 14721:2003, new standards are being developed to improve long-term data management processes and documentation. The Draft Information Standard ISO/DIS 16363, "Space data and information transfer systems - Audit and certification of trustworthy digital repositories" offers the potential to evaluate digital repositories objectively in terms of their trustworthiness as long-term stewards of digital resources. In conjunction with this, the CCSDS and ISO are developing another draft standard for the auditing and certification process, ISO/DIS 16919, "Space data and information transfer systems - Requirements for bodies providing audit and certification of candidate trustworthy digital repositories". Six test audits were conducted of scientific data centers and archives in Europe and the United States to test the use of these draft standards and identify potential improvements for the standards and for the participating digital repositories. We present a case study of the test audit conducted on the NASA Socioeconomic Data and Applications Center (SEDAC) and describe the preparation, the audit process, recommendations received, and next steps to obtain certification as a trustworthy digital repository, after approval of the ISO/DIS standards.

  4. Integrated care as a means to improve primary care delivery for adults and adolescents in the developing world: a critical analysis of Integrated Management of Adolescent and Adult Illness (IMAI).

    PubMed

    Vasan, Ashwin; Ellner, Andrew; Lawn, Stephen D; Gove, Sandy; Anatole, Manzi; Gupta, Neil; Drobac, Peter; Nicholson, Tom; Seung, Kwonjune; Mabey, David C; Farmer, Paul E

    2014-01-14

    More than three decades after the 1978 Declaration of Alma-Ata enshrined the goal of 'health for all', high-quality primary care services remain undelivered to the great majority of the world's poor. This failure to effectively reach the most vulnerable populations has been, in part, a failure to develop and implement appropriate and effective primary care delivery models. This paper examines a root cause of these failures, namely that the inability to achieve clear and practical consensus around the scope and aims of primary care may be contributing to ongoing operational inertia. The present work also examines integrated models of care as a strategy to move beyond conceptual dissonance in primary care and toward implementation. Finally, this paper examines the strengths and weaknesses of a particular model, the World Health Organization's Integrated Management of Adolescent and Adult Illness (IMAI), and its potential as a guidepost toward improving the quality of primary care delivery in poor settings. Integration and integrated care may be an important approach in establishing a new paradigm of primary care delivery, though overall, current evidence is mixed. However, a number of successful specific examples illustrate the potential for clinical and service integration to positively impact patient care in primary care settings. One example deserving of further examination is the IMAI, developed by the World Health Organization as an operational model that integrates discrete vertical interventions into a comprehensive delivery system encompassing triage and screening, basic acute and chronic disease care, basic prevention and treatment services, and follow-up and referral guidelines. IMAI is an integrated model delivered at a single point-of-care using a standard approach to each patient based on the universal patient history and physical examination. 
    The evidence base on IMAI is currently weak, but whether or not IMAI itself ultimately proves useful in advancing primary care delivery, it is these principles that should serve as the basis for developing a standard of integrated primary care delivery for adults and adolescents that can serve as the foundation for ongoing quality improvement. As integrated primary care is the standard of care in the developed world, so too must we move toward implementing integrated models of primary care delivery in poorer settings. Models such as IMAI are an important first step in this evolution. A robust and sustained commitment to innovation, research and quality improvement will be required if integrated primary care delivery is to become a reality in the developing world.

  5. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{n/\log n}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
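
    The baseline being improved on can be illustrated concretely: standard pairwise gossip on a ring repeatedly averages neighbouring values, so every node converges to the global mean. This is a minimal sketch of the standard algorithm only, not of the geographic gossip scheme proposed in the paper.

```python
import random

def ring_gossip(values, rounds=20000, seed=0):
    """Standard pairwise gossip averaging on a ring: at each step a random
    node averages its value with a randomly chosen ring neighbour."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n   # one of i's two ring neighbours
        x[i] = x[j] = (x[i] + x[j]) / 2     # pairwise average preserves the sum
    return x

vals = ring_gossip(range(10))
true_mean = sum(range(10)) / 10             # 4.5; invariant under gossip steps
worst = max(abs(v - true_mean) for v in vals)
print(worst)                                # shrinks toward 0 as rounds grow
```

    The slow part is exactly what the abstract points at: on a ring, information diffuses like a random walk, so many redundant exchanges are needed before the worst-case deviation becomes small.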

  6. Software for improving the quality of project management, a case study: international manufacture of electrical equipment

    NASA Astrophysics Data System (ADS)

    Preradović, D. M.; Mićić, Lj S.; Barz, C.

    2017-05-01

    Production conditions in today’s world require software support at every stage of production and development of new products, for quality assurance and compliance with ISO standards. In addition to ISO standards as the usual metrics of quality, companies today are focused on other optional standards, such as CMMI (Capability Maturity Model Integration), or prescribe their own standards. However, while there is intensive progress being made in project management (PM), there is still a significant number of projects, at the global level, that are failures: they have failed to achieve their goals within budget or timeframe. This paper focuses on examining the role of software tools through the rate of success in projects implemented in the case of an international manufacturer of electrical equipment. The results of this research show the level of contribution of the project management software used to manage and develop new products to improving PM processes and PM functions, and how the selection of software tools affects the quality of PM processes and of successfully completed projects.

  7. Improved Surgery Planning Using 3-D Printing: a Case Study.

    PubMed

    Singhal, A J; Shetty, V; Bhagavan, K R; Ragothaman, Ananthan; Shetty, V; Koneru, Ganesh; Agarwala, M

    2016-04-01

    The role of 3-D printing is presented for improved patient-specific surgery planning. The key benefits are time saved and improved surgical outcomes. Two hard-tissue surgery models were 3-D printed: one for orthopedic pelvic surgery and one for craniofacial surgery. We discuss software data conversion of computed tomography (CT)/magnetic resonance (MR) medical images for 3-D printing. 3-D printed models save time in surgery planning and help visualize complex pre-operative anatomy. The time saved in surgery planning can be as much as two thirds. In addition to improved surgical accuracy, 3-D printing presents opportunities in materials research. Other hard-tissue and soft-tissue cases in maxillofacial, abdominal, thoracic, cardiac, orthodontic, and neurosurgical applications are considered. We recommend using 3-D printing as a standard protocol for surgery planning and for teaching surgical practices. A quick turnaround time for a 3-D printed surgery model, together with the improved accuracy in surgery planning, is helpful for the surgery team. It is recommended that the costs be kept within 20% of the total surgery budget.

  8. Internal audit in a microbiology laboratory.

    PubMed Central

    Mifsud, A J; Shafi, M S

    1995-01-01

    AIM--To set up a programme of internal laboratory audit in a medical microbiology laboratory. METHODS--A model of laboratory based process audit is described. Laboratory activities were examined in turn by specimen type. Standards were set using laboratory standard operating procedures; practice was observed using a purpose designed questionnaire and the data were analysed by computer; performance was assessed at laboratory audit meetings; and the audit circle was closed by re-auditing topics after an interval. RESULTS--Improvements in performance scores (objective measures) and in staff morale (subjective impression) were observed. CONCLUSIONS--This model of process audit could be applied, with amendments to take local practice into account, in any microbiology laboratory. PMID:7665701

  9. Search for Decays of the Λ_b^0 Baryon with the D0 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camacho, Enrique

    2011-11-25

    This thesis presents work I performed within the D0 Collaboration to measure the branching ratio of the Λ_b^0 baryon in the channel Λ_b^0 → J/ψΛ^0. b-hadrons such as the Λ_b^0 are currently the subject of much research in both the theoretical and experimental particle physics communities. Measurements of the production and decays of b-hadrons can improve the understanding of the electroweak and strong interactions described by the Standard Model of particle physics, as well as providing opportunities to search for physics beyond the Standard Model.

  10. The 1-loop effective potential for the Standard Model in curved spacetime

    NASA Astrophysics Data System (ADS)

    Markkanen, Tommi; Nurmi, Sami; Rajantie, Arttu; Stopyra, Stephen

    2018-06-01

    The renormalisation group improved Standard Model effective potential in an arbitrary curved spacetime is computed to one-loop order in perturbation theory. The loop corrections are computed in the ultraviolet limit, which makes them independent of the choice of the vacuum state and allows the derivation of the complete set of β-functions. The potential depends on the spacetime curvature through the direct non-minimal Higgs-curvature coupling, through curvature contributions to the loop diagrams, and through the curvature dependence of the renormalisation scale. Together, these lead to a significant curvature dependence that needs to be taken into account in cosmological applications, as demonstrated here with the example of vacuum stability in de Sitter space.

  11. Precision Measurement of the β Asymmetry in Spin-Polarized ^37K Decay

    NASA Astrophysics Data System (ADS)

    Fenker, B.; Gorelov, A.; Melconian, D.; Behr, J. A.; Anholm, M.; Ashery, D.; Behling, R. S.; Cohen, I.; Craiciu, I.; Gwinner, G.; McNeil, J.; Mehlman, M.; Olchanski, K.; Shidling, P. D.; Smale, S.; Warner, C. L.

    2018-02-01

    Using TRIUMF's neutral atom trap, TRINAT, for nuclear β decay, we have measured the β asymmetry with respect to the initial nuclear spin in ^37K to be A_β = -0.5707(13)_syst(13)_stat(5)_pol, a 0.3% measurement. This is the best relative accuracy of any β-asymmetry measurement in a nucleus or the neutron, and it is in agreement with the standard model prediction of -0.5706(7). We compare constraints on physics beyond the standard model with those from other β-decay measurements, and we improve the value of V_ud measured in this mirror nucleus by a factor of 4.

  12. Use of Multivariate Techniques to Validate and Improve the Current USAF Pilot Candidate Selection Model

    DTIC Science & Technology

    2003-03-01

    organizations. Reducing attrition rates through optimal selection decisions can “reduce training cost, improve job performance, and enhance... capturing the weights for use in the SNR method is not straightforward. A special VBA application had to be written to capture and organize the network... before the VBA application can be used. Appendix D provides the VBA code used to import and organize the network weights and input standardization

  13. Improved Measurement of the π → e ν Branching Ratio

    DOE PAGES

    Aguilar-Arevalo, A.; Aoki, M.; Blecher, M.; ...

    2015-08-01

    A new measurement of the branching ratio R_{e/μ} = Γ(π⁺ → e⁺ν + π⁺ → e⁺νγ)/Γ(π⁺ → μ⁺ν + π⁺ → μ⁺νγ) resulted in R^{exp}_{e/μ} = [1.2344 ± 0.0023(stat) ± 0.0019(syst)] × 10⁻⁴. This is in agreement with the standard model prediction and improves the test of electron-muon universality to the level of 0.1%.
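
    The quoted precision can be checked by combining the two errors in quadrature. Since the electron-muon coupling ratio scales as the square root of the branching ratio, its relative uncertainty is roughly half that of R, which reproduces the 0.1% level cited in the abstract.

```python
from math import hypot

# Quoted values: R = [1.2344 ± 0.0023(stat) ± 0.0019(syst)] x 10^-4
R = 1.2344e-4
stat, syst = 0.0023e-4, 0.0019e-4

total = hypot(stat, syst)        # combine independent errors in quadrature
rel_R = total / R                # relative uncertainty on the branching ratio
rel_coupling = rel_R / 2         # g_e/g_mu ~ sqrt(R), so half the relative error

print(f"dR/R = {rel_R:.2%}, d(ge/gmu)/(ge/gmu) = {rel_coupling:.2%}")
# → dR/R = 0.24%, d(ge/gmu)/(ge/gmu) = 0.12%
```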

  14. CityGML and the Streets of New York - A Proposal for Detailed Street Space Modelling

    NASA Astrophysics Data System (ADS)

    Beil, C.; Kolbe, T. H.

    2017-10-01

    Three-dimensional semantic city models are increasingly used for the analysis of large urban areas. Until now the focus has mostly been on buildings. Nonetheless, many applications could also benefit from detailed models of public street space for further analysis. However, there are only a few guidelines for representing roads within city models. Therefore, related standards dealing with street modelling are examined and discussed. Nearly all street representations are based on linear abstractions. However, there are many use cases that require or would benefit from the detailed geometrical and semantic representation of street space. A variety of potential applications for detailed street space models are presented. Subsequently, based on related standards as well as on user requirements, a concept for a CityGML-compliant representation of street space in multiple levels of detail is developed. In the course of this process, the CityGML Transportation model of the currently valid OGC standard CityGML 2.0 is examined to discover possibilities for further developments. Moreover, a number of improvements are presented. Finally, based on open data sources, the proposed concept is implemented within a semantic 3D city model of New York City, generating a detailed 3D street space model for the entire city. As a result, 11 thematic classes, such as roadbeds, sidewalks, or traffic islands, are generated and enriched with a large number of thematic attributes.

  15. Improving Global Health Education: Development of a Global Health Competency Model

    PubMed Central

    Ablah, Elizabeth; Biberman, Dorothy A.; Weist, Elizabeth M.; Buekens, Pierre; Bentley, Margaret E.; Burke, Donald; Finnegan, John R.; Flahault, Antoine; Frenk, Julio; Gotsch, Audrey R.; Klag, Michael J.; Lopez, Mario Henry Rodriguez; Nasca, Philip; Shortell, Stephen; Spencer, Harrison C.

    2014-01-01

    Although global health is a recommended content area for the future of education in public health, no standardized global health competency model existed for master-level public health students. Without such a competency model, academic institutions are challenged to ensure that students are able to demonstrate the knowledge, skills, and attitudes (KSAs) needed for successful performance in today's global health workforce. The Association of Schools of Public Health (ASPH) sought to address this need by facilitating the development of a global health competency model through a multistage modified-Delphi process. Practitioners and academic global health experts provided leadership and guidance throughout the competency development process. The resulting product, the Global Health Competency Model 1.1, includes seven domains and 36 competencies. The Global Health Competency Model 1.1 provides a platform for engaging educators, students, and global health employers in discussion of the KSAs needed to improve human health on a global scale. PMID:24445206

  16. Mammographic density, breast cancer risk and risk prediction

    PubMed Central

    Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane

    2007-01-01

    In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. PMID:18190724

  17. Improving the accuracy of macromolecular structure refinement at 7 Å resolution.

    PubMed

    Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F

    2012-06-06

    In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Improving healthcare value through clinical community and supply chain collaboration.

    PubMed

    Ishii, Lisa; Demski, Renee; Ken Lee, K H; Mustafa, Zishan; Frank, Steve; Wolisnky, Jean Paul; Cohen, David; Khanna, Jay; Ammerman, Joshua; Khanuja, Harpal S; Unger, Anthony S; Gould, Lois; Wachter, Patricia Ann; Stearns, Lauren; Werthman, Ronald; Pronovost, Peter

    2017-03-01

    We hypothesized that integrating the supply chain with clinical communities would allow for clinician-led supply cost reduction and improved value in an academic health system. Three clinical communities (spine, joint, blood management) and one clinical community-like physician-led team of surgeon stakeholders partnered with the supply chain team on specific supply cost initiatives. The teams reviewed their specific utilization and cost data, and the physicians led consensus-building conversations over a series of team meetings to agree on standard supply utilization. The spine and joint clinical communities each agreed upon a vendor capping model that led to cost savings of $3 million and $1.5 million, respectively. The blood management community decreased blood product utilization and achieved $1.2 million in savings. A further $5.6 million in savings was achieved by a clinical community-like group of surgeon stakeholders through standardization of sutures and endomechanicals. Physician-led clinical teams empowered to lead change achieved substantial supply chain cost savings in an academic health system. The model of combining clinical communities with the supply chain offers hope for an effective, practical, and scalable approach to improving value and engaging physicians in other academic health systems. This clinician-led model could benefit both private and academic health systems engaging in value optimization efforts. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. So You Want to Be Trustworthy: A Repository's Guide to Taking Reasonable Steps Towards Achieving ISO 16363

    NASA Astrophysics Data System (ADS)

    Stall, S.

    2016-12-01

    To be trustworthy is to be reliable, dependable, honest, principled, ethical, incorruptible, and more. A trustworthy person demonstrates these qualities over time and under all circumstances. A trustworthy repository demonstrates these qualities through the team that manages the repository and its responsible organization. The requirements of a Trusted Digital Repository (TDR) in ISO 16363 can be tough to reach and tough to maintain. Challenges include limited funds, limited resources and/or skills, and an unclear path to successfully achieving the requirements. The ISO standard defines each requirement separately, but a successful certification recognizes that there are many cross-dependencies among the requirements. Understanding these dependencies leads to a more efficient path towards success. At AGU we recognize that reaching the goal of the TDR ISO standard, or any set of data management objectives defined by an organization, has a better chance at success if the organization clearly knows its current capability, the improvements that are needed, and the best way to make (and maintain) those changes. AGU has partnered with the CMMI® Institute to adapt their Data Management Maturity (DMM)℠ model within the Earth and space sciences. Using the DMM, AGU developed a new Data Management Assessment Program aimed at helping data repositories, large and small, domain-specific to general, assess and improve data management practices to meet their goals, including becoming a Trustworthy Digital Repository. The requirements to achieve the TDR ISO standard are aligned to the data management best practices defined in the DMM model. Using the DMM as a process improvement tool in conjunction with the Data Management Assessment method, a team seeking the objective of the TDR ISO standard receives a clear road map to achieving its goal as an outcome of the assessment.
Publishers and agencies are beginning to recommend or even require that repositories demonstrate that they are practicing best practices or meeting certain standards. Data preserved in a data facility that is working on achieving a TDR standard will have the level of care desired by the publishing community as well as the science community. Better Data Management results in Better Science.

  20. Angular dependence models for radiance to flux conversion

    NASA Technical Reports Server (NTRS)

    Green, Richard N.; Suttles, John T.; Wielicki, Bruce A.

    1990-01-01

    Angular dependence models (ADM) used for converting the measured radiance to flux at the top of the atmosphere are reviewed, and emphasis is placed on the measure of their effectiveness and the implications of requiring the ADMs to satisfy reciprocity. The overall significance of the ADMs is assessed by analyzing the same satellite data with a single Lambertian model, a single mean model, and the 12 Earth Radiation Budget Experiment (ERBE) ADMs. It is shown that the Lambertian ADM is inadequate, while the mean ADM results in nearly unbiased fluxes but creates substantial differences for individual pixel fluxes. The standard ERBE ADM works well except for a 10% to 15% albedo growth across the scan; a modified ADM based on the standard ERBE ADM but forced to satisfy the principle of reciprocity increases the limb brightening and reduces the albedo growth but does not improve the scanner and nonscanner intercomparison.
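
    The conversion this record evaluates can be made concrete. For a Lambertian scene the top-of-atmosphere flux is simply π times the radiance; ERBE-style ADMs instead divide by a scene- and geometry-dependent anisotropic factor R, with R = 1 recovering the Lambertian case. A minimal sketch, with function names as our own illustrative assumptions:

```python
import math

def flux_lambertian(radiance):
    # Isotropic (Lambertian) radiance: integrating L*cos(theta)
    # over the upward hemisphere gives F = pi * L.
    return math.pi * radiance

def flux_adm(radiance, anisotropic_factor):
    # ADM-style conversion: F = pi * L / R, where R depends on
    # scene type and viewing geometry (R = 1 is Lambertian).
    return math.pi * radiance / anisotropic_factor
```

    A limb-brightened scene (R greater than 1 at that viewing angle) yields a lower flux than the Lambertian assumption for the same measured radiance, which is the effect at issue in the albedo-growth discussion above.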

  1. A new item response theory model to adjust data allowing examinee choice

    PubMed Central

    Costa, Marcelo Azevedo; Braga Oliveira, Rivert Paulo

    2018-01-01

    In a typical questionnaire testing situation, examinees are not allowed to choose which items they answer because of a technical issue in obtaining satisfactory statistical estimates of examinee ability and item difficulty. This paper introduces a new item response theory (IRT) model that incorporates information from a novel representation of questionnaire data using network analysis. Three scenarios in which examinees select a subset of items were simulated. In the first scenario, the assumptions required to apply the standard Rasch model are met, thus establishing a reference for parameter accuracy. The second and third scenarios include five increasing levels of violating those assumptions. The results show substantial improvements over the standard model in item parameter recovery. Furthermore, the accuracy was closer to the reference in almost every evaluated scenario. To the best of our knowledge, this is the first proposal to obtain satisfactory IRT statistical estimates in the last two scenarios. PMID:29389996
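
    For reference, the standard Rasch model that serves as this paper's baseline gives the probability of a correct response as a logistic function of the difference between examinee ability θ and item difficulty b. A minimal sketch (names are ours, not the paper's):

```python
import math

def rasch_prob(theta, b):
    """Standard Rasch model:
    P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

    When ability equals difficulty the probability is exactly 0.5. Letting examinees choose items correlates ability with the items actually answered, which is the estimation problem the proposed model addresses.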

  2. Cardiac rehabilitation using the Family-Centered Empowerment Model versus home-based cardiac rehabilitation in patients with myocardial infarction: a randomised controlled trial

    PubMed Central

    Vahedian-Azimi, Amir; Hajiesmaieli, Mohammadreza; Kangasniemi, Mari; Alhani, Fatemah; Jelvehmoghaddam, Hosseinali; Fathi, Mohammad; Farzanegan, Behrooz; Ardehali, Seyed H; Hatamian, Sevak; Gahremani, Mehdi; Mosavinasab, Seyed M M; Rostami, Zohreh; Madani, Seyed J; Izadi, Morteza

    2016-01-01

    Objective To determine if a hybrid cardiac rehabilitation (CR) programme using the Family-Centered Empowerment Model (FCEM) as compared with standard CR will improve patient quality of life, perceived stress and state anxiety of patients with myocardial infarction (MI). Methods We conducted a randomised controlled trial in which patients received either standard home CR or CR using the FCEM strategy. Patient empowerment was measured with FCEM questionnaires preintervention and postintervention for a total of 9 assessments. Quality of life, perceived stress, and state and trait anxiety were assessed using the 36-Item Short Form Health Survey (SF-36), the 14-item Perceived Stress, and the 20-item State and 20-item Trait Anxiety questionnaires, respectively. Results 70 patients were randomised. Baseline characteristics were similar. Ejection fraction was significantly higher in the intervention group at measurements 2 (p=0.01) and 3 (p=0.001). Exercise tolerance measured as walking distance was significantly improved in the intervention group throughout the study. The quality of life results in the FCEM group showed significant improvement both within the group over time (p<0.0001) and when compared with control (p<0.0001). Similarly, the perceived stress and state anxiety results showed significant improvement both within the FCEM group over time (p<0.0001) and when compared with control (p<0.0001). No significant difference was found either within or between groups for trait anxiety. Conclusions The family-centred empowerment model may be an effective hybrid cardiac rehabilitation method for improving the physical and mental health of patients post-MI; however, further study is needed to validate these findings. Trial registration number ClinicalTrials.gov NCT02402582. PMID:27110376

  3. Turbulence Modeling Workshop

    NASA Technical Reports Server (NTRS)

    Rubinstein, R. (Editor); Rumsey, C. L. (Editor); Salas, M. D. (Editor); Thomas, J. L. (Editor); Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Advances in turbulence modeling are needed in order to calculate high Reynolds number flows near the onset of separation and beyond. To this end, the participants in this workshop made the following recommendations. (1) A national/international database and standards for turbulence modeling assessment should be established. Existing experimental data sets should be reviewed and categorized. Advantage should be taken of other efforts already underway, such as that of the European Research Community on Flow, Turbulence, and Combustion (ERCOFTAC) consortium. Carefully selected "unit" experiments will be needed, as well as advances in instrumentation, to fill the gaps in existing data sets. A high priority should be given to documenting existing turbulence model capabilities in a standard form, including numerical implementation issues such as grid quality and resolution. (2) NASA should support long-term research on Algebraic Stress Models and Reynolds Stress Models. The emphasis should be placed on improving the length-scale equation, since it is the least understood and is a key component of two-equation and higher models. Second priority should be given to the development of improved near-wall models. Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) would provide valuable guidance in developing and validating new Reynolds-averaged Navier-Stokes (RANS) models. Although not the focus of this workshop, DNS, LES, and hybrid methods currently represent viable approaches for analysis on a limited basis. Therefore, although computer limitations require the use of RANS methods for realistic configurations at high Reynolds number in the foreseeable future, a balanced effort in turbulence modeling development, validation, and implementation should include these approaches as well.

  4. Diesel engine emissions and combustion predictions using advanced mixing models applicable to fuel sprays

    NASA Astrophysics Data System (ADS)

    Abani, Neerav; Reitz, Rolf D.

    2010-09-01

    An advanced mixing model was applied to study engine emissions and combustion with different injection strategies, ranging from multiple injections and early injection to grouped-hole nozzle injection, in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model that uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid-scale mixing and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early cycle injections and flame lift-off lengths for late cycle injections. Engine combustion computations show that, as compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.

  5. Scientific analysis of satellite ranging data

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    1994-01-01

    A network of satellite laser ranging (SLR) tracking systems with continuously improving accuracies is challenging the modelling capabilities of analysts worldwide. Various data analysis techniques have yielded many advances in the development of orbit, instrument and Earth models. The direct measurement of the distance to the satellite provided by the laser ranges has given us a simple metric which links the results obtained by diverse approaches. Different groups have used SLR data, often in combination with observations from other space geodetic techniques, to improve models of the static geopotential, the solid Earth, ocean tides, and atmospheric drag models for low Earth satellites. Radiation pressure models and other non-conservative forces for satellite orbits above the atmosphere have been developed to exploit the full accuracy of the latest SLR instruments. SLR is the baseline tracking system for the altimeter missions TOPEX/Poseidon, and ERS-1 and will play an important role in providing the reference frame for locating the geocentric position of the ocean surface, in providing an unchanging range standard for altimeter calibration, and for improving the geoid models to separate gravitational from ocean circulation signals seen in the sea surface. However, even with the many improvements in the models used to support the orbital analysis of laser observations, there remain systematic effects which limit the full exploitation of SLR accuracy today.

  6. Joint models for longitudinal and time-to-event data: a review of reporting quality with a view to meta-analysis.

    PubMed

    Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin

    2016-12-05

    Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single-study settings. Synthesis of evidence from multiple studies using meta-analysis is a natural next step, but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. In this review we assess the current standard of reporting of joint models applied in the literature, and determine whether current reporting standards would allow or hinder future aggregate-data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.
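
    The aggregate-data meta-analyses the review has in view reduce, in the simplest fixed-effect case, to inverse-variance pooling of each study's reported estimate and standard error, which is why those two numbers must be reported for each parameter. A generic sketch (our illustration, not code from the review):

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-study estimates,
    e.g. the association parameter from each published joint model."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```

    Studies that omit either the estimate or its standard error cannot contribute a weight, which is the practical consequence of the reporting gaps counted above.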

  7. Research and development of energy-efficient appliance motor-compressors. Volume IV. Production demonstration and field test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, M.G.; Sauber, R.S.

    Two models of a high-efficiency compressor were manufactured in a pilot production run. These compressors were for low back-pressure applications. Although based on a production compressor, many design changes required production process changes; some were performed within our company and others were made by outside vendors. The compressors were used in top-mount refrigerator-freezers and sold in normal distribution channels. Forty units were placed in residences for a one-year field test. Additional compressors were built so that a life-test program could be performed. The results of the field test reveal a 27.0% improvement in energy consumption for the 18 ft³ high-efficiency model and a 15.6% improvement for the 21 ft³ high-efficiency model as compared to the standard production unit.

  8. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; a proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a picture archiving and communication system (PACS). This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the PACS in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve the ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that implements the controls of the ISO/IEC 27002:2013 standard and the security and privacy criteria from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  9. Problems with Using the Normal Distribution – and Ways to Improve Quality and Efficiency of Data Analysis

    PubMed Central

    Limpert, Eckhard; Stahel, Werner A.

    2011-01-01

    Background The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean x̄ and the standard deviation, by x̄ ± SD, or with the standard error of the mean, x̄ ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Methodology/Principal Findings Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the “95% range check”, their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, ×/ (“times-divide”), and corresponding notation. Analogous to x̄ ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, which is advantageous and recommended. Conclusions/Significance The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life. PMID:21779325

  10. Problems with using the normal distribution--and ways to improve quality and efficiency of data analysis.

    PubMed

    Limpert, Eckhard; Stahel, Werner A

    2011-01-01

    The Gaussian or normal distribution is the most established model to characterize quantitative variation of original data. Accordingly, data are summarized using the arithmetic mean and the standard deviation, by mean ± SD, or with the standard error of the mean, mean ± SEM. This, together with corresponding bars in graphical displays, has become the standard to characterize variation. Here we question the adequacy of this characterization, and of the model. The published literature provides numerous examples for which such descriptions appear inappropriate because, based on the "95% range check", their distributions are obviously skewed. In these cases, the symmetric characterization is a poor description and may trigger wrong conclusions. To solve the problem, it is enlightening to regard causes of variation. Multiplicative causes are by far more important than additive ones, in general, and benefit from a multiplicative (or log-) normal approach. Fortunately, quite similar to the normal, the log-normal distribution can now be handled easily and characterized at the level of the original data with the help of a new sign, ×/ ("times-divide"), and corresponding notation. Analogous to mean ± SD, it connects the multiplicative (or geometric) mean x̄* and the multiplicative standard deviation s* in the form x̄* ×/ s*, which is advantageous and recommended. The corresponding shift from the symmetric to the asymmetric view will substantially increase both recognition of data distributions and interpretation quality. It will allow for savings in sample size that can be considerable. Moreover, this is in line with ethical responsibility. Adequate models will improve concepts and theories, and provide deeper insight into science and life.
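
    The multiplicative summary advocated above is straightforward to compute: take logs, summarize, and exponentiate back. A minimal sketch of the geometric mean x̄* and multiplicative standard deviation s* (function name is ours):

```python
import math

def multiplicative_summary(xs):
    """Geometric mean x* and multiplicative standard deviation s*
    of positive data, for the log-normal description x* times-divide s*."""
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / len(logs)
    # sample standard deviation of the logs (n - 1 denominator)
    var_log = sum((l - mean_log) ** 2 for l in logs) / (len(logs) - 1)
    return math.exp(mean_log), math.exp(math.sqrt(var_log))
```

    The "95% range check" then covers roughly x̄*/(s*)² to x̄*·(s*)², the multiplicative analogue of mean ± 2 SD.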

  11. An approach for the semantic interoperability of ISO EN 13606 and OpenEHR archetypes.

    PubMed

    Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2010-10-01

    The communication between health information systems of hospitals and primary care organizations is currently an important challenge to improve the quality of clinical practice and patient safety. However, clinical information is usually distributed among several independent systems that may be syntactically or semantically incompatible. This fact prevents healthcare professionals from accessing clinical information of patients in an understandable and normalized way. In this work, we address the semantic interoperability of two EHR standards: OpenEHR and ISO EN 13606. Both standards follow the dual model approach which distinguishes information and knowledge, this being represented through archetypes. The solution presented here is capable of transforming OpenEHR archetypes into ISO EN 13606 and vice versa by combining Semantic Web and Model-driven Engineering technologies. The resulting software implementation has been tested using publicly available collections of archetypes for both standards.

  12. Robust tuning of robot control systems

    NASA Technical Reports Server (NTRS)

    Minis, I.; Uebel, M.

    1992-01-01

    The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
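
    The computed torque law at the heart of all three schemes linearizes the joint dynamics by feeding the model back into the control. For a single rigid joint it reads τ = M(q̈_d + K_v ė + K_p e) + Cq̇ + g. A minimal sketch; names and gains are illustrative, and this deliberately omits the flexible gear/drive dynamics the paper addresses:

```python
def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, kp, kv):
    """Computed-torque control for one rigid joint:
    tau = M*(qdd_des + kv*(qd_des - qd) + kp*(q_des - q)) + C*qd + g."""
    e = q_des - q        # position error
    ed = qd_des - qd     # velocity error
    return M * (qdd_des + kv * ed + kp * e) + C * qd + g
```

    With zero tracking error and zero commanded motion, the law outputs only the gravity term, which is the sense in which the model cancels the dynamics and leaves a linear error equation for the gains to shape.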

  13. Minimum Information about a Cardiac Electrophysiology Experiment (MICEE): Standardised Reporting for Model Reproducibility, Interoperability, and Data Sharing

    PubMed Central

    Quinn, TA; Granite, S; Allessie, MA; Antzelevitch, C; Bollensdorff, C; Bub, G; Burton, RAB; Cerbai, E; Chen, PS; Delmar, M; DiFrancesco, D; Earm, YE; Efimov, IR; Egger, M; Entcheva, E; Fink, M; Fischmeister, R; Franz, MR; Garny, A; Giles, WR; Hannes, T; Harding, SE; Hunter, PJ; Iribe, G; Jalife, J; Johnson, CR; Kass, RS; Kodama, I; Koren, G; Lord, P; Markhasin, VS; Matsuoka, S; McCulloch, AD; Mirams, GR; Morley, GE; Nattel, S; Noble, D; Olesen, SP; Panfilov, AV; Trayanova, NA; Ravens, U; Richard, S; Rosenbaum, DS; Rudy, Y; Sachs, F; Sachse, FB; Saint, DA; Schotten, U; Solovyova, O; Taggart, P; Tung, L; Varró, A; Volders, PG; Wang, K; Weiss, JN; Wettwer, E; White, E; Wilders, R; Winslow, RL; Kohl, P

    2011-01-01

    Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step toward establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work. PMID:21745496

  14. Air Traffic Control Improvement Using Prioritized CSMA

    NASA Technical Reports Server (NTRS)

    Robinson, Daryl C.

    2001-01-01

    Simulations using Version 7 of the industry-standard network simulation software OPNET are presented for two applications of the Aeronautical Telecommunications Network (ATN), Controller Pilot Data Link Communications (CPDLC) and Automatic Dependent Surveillance-Broadcast (ADS-B), over VHF Data Link mode 2 (VDL-2). Communication is modeled for air traffic between just three cities. All aircraft are assumed to have the same equipage. The simulation involves Air Traffic Control (ATC) ground stations and 105 aircraft taking off, flying realistic free-flight trajectories, and landing in a 24-hr period. All communication is modeled as unreliable. Collisionless, prioritized carrier sense multiple access (CSMA) is successfully tested. The statistics presented include latency, queue length, and packet loss. This research suggests that a communications system simpler than the currently envisioned standard may not only suffice but also surpass the standard's performance at a lower cost of deployment.

  15. Measurement of the single-top-quark production cross section at CDF.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; 
Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Liss, T M; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, 
S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Schall, I; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A 
J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-12-19

    We report a measurement of the single-top-quark production cross section in 2.2 fb⁻¹ of pp̄ collision data collected by the Collider Detector at Fermilab at √s = 1.96 TeV. Candidate events are classified as signal-like by three parallel analyses which use likelihood, matrix element, and neural network discriminants. These results are combined in order to improve the sensitivity. We observe a signal consistent with the standard model prediction but inconsistent with the background-only model by 3.7 standard deviations, with a median expected sensitivity of 4.9 standard deviations. We measure a cross section of 2.2 +0.7/-0.6 (stat+syst) pb, extract the Cabibbo-Kobayashi-Maskawa matrix-element value |V_tb| = 0.88 +0.13/-0.12 (stat+syst) ± 0.07 (theory), and set the limit |V_tb| > 0.66 at the 95% C.L.
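    Within the standard model the single-top production rate scales as |V_tb|², so the quoted extraction amounts to a ratio of measured to predicted cross sections. A minimal sketch of that arithmetic (the SM prediction of 2.86 pb used below is an assumed illustrative value, not one quoted in this record):

```python
import math

def extract_vtb(sigma_measured_pb, sigma_sm_pb):
    # Single-top production scales as |V_tb|^2 (assuming |V_td|, |V_ts| << |V_tb|),
    # so |V_tb| follows from the ratio of measured to predicted cross section.
    return math.sqrt(sigma_measured_pb / sigma_sm_pb)

# 2.2 pb is the measured value quoted above; 2.86 pb is an assumed SM NLO
# prediction, used here only for illustration.
vtb = extract_vtb(2.2, 2.86)
print(f"|V_tb| ~ {vtb:.2f}")  # ~ 0.88, matching the quoted central value
```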

  16. 78 FR 9698 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-11

    ... effective at improving health care quality. While evidence-based approaches to decision-making have become standard in healthcare, this has been limited in laboratory medicine. No single evidence-based model for... (LMBP) initiative to develop new systematic evidence review methods for making evidence-based...

  17. Nursing Quality Assurance: The Wisconsin System

    ERIC Educational Resources Information Center

    Hover, Julie; Zimmer, Marie J.

    1978-01-01

    Evaluation model guidelines for hospital departments of nursing to use in their nursing quality assurance programs are presented as developed in Wisconsin. Four essential components of the Wisconsin outcome evaluation system are criteria, assessment, standards, and improvement of care. Sample tests and charts are included in the article. (MF)

  18. Improved limits on dark matter annihilation in the Sun with the 79-string IceCube detector and implications for supersymmetry

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Ansseau, I.; Anton, G.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; del Pino Rosendo, E.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Edsjö, J.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Fösig, C.-C.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Góra, D.; Grant, D.; Griffith, Z.; Groß, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansen, E.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Jurkovic, M.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. 
L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Konietz, R.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Krückl, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lu, L.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mandelartz, M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Quinnan, M.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Savage, C.; Schatto, K.; Schimp, M.; Schlunder, P.; Schmidt, T.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schulte, L.; Schumacher, L.; Scott, P.; Seckel, D.; Seunarine, S.; Silverwood, H.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stasik, A.; Steuer, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Ström, R.; Strotjohann, N. L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Tatar, J.; Ter-Antonyan, S.; Terliuk, A.; Te{š}ić, G.; Tilav, S.; Toale, P. A.; Tobin, M. 
N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Turcati, A.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zoll, M.

    2016-04-01

    We present an improved event-level likelihood formalism for including neutrino telescope data in global fits to new physics. We derive limits on spin-dependent dark matter-proton scattering by employing the new formalism in a re-analysis of data from the 79-string IceCube search for dark matter annihilation in the Sun, including explicit energy information for each event. The new analysis excludes a number of models in the weak-scale minimal supersymmetric standard model (MSSM) for the first time. This work is accompanied by the public release of the 79-string IceCube data, as well as an associated computer code for applying the new likelihood to arbitrary dark matter models.
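    An event-level likelihood of the kind described combines a Poisson term for the total expected count with a per-event mixture of signal and background probability densities in energy. A schematic sketch of such an extended unbinned likelihood (the PDFs, rates, and event energies below are invented placeholders, not IceCube's actual response functions):

```python
import math

def log_likelihood(n_sig, n_bkg, energies, pdf_sig, pdf_bkg):
    # Extended unbinned log-likelihood: Poisson term for the total expected
    # count plus a per-event mixture of signal and background energy PDFs.
    ll = -(n_sig + n_bkg)
    for e in energies:
        ll += math.log(n_sig * pdf_sig(e) + n_bkg * pdf_bkg(e))
    return ll

# Placeholder PDFs (not IceCube's): signal peaked at high energy, background soft.
pdf_sig = lambda e: math.exp(-(e - 6.0) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
pdf_bkg = lambda e: 0.5 * math.exp(-e / 2.0)

events = [5.8, 6.1, 0.4, 1.2, 6.5]  # invented event energies
print(log_likelihood(3.0, 2.0, events, pdf_sig, pdf_bkg))
```

    Because each event enters with its own energy rather than a binned count, hypotheses that predict the wrong spectral shape are penalized event by event, which is what makes the event-level formalism more sensitive than a rate-only analysis.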

  19. IRI-2016: Description and Introduction

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter; Watanabe, Shigeto; Truhlik, Vladimir; Altadill, David

    2016-07-01

    The International Reference Ionosphere (IRI) is recognized as the official standard for the ionosphere (COSPAR, URSI, ISO) and is widely used for a multitude of applications, as evidenced by the many papers in science and engineering journals that acknowledge the use of IRI (e.g., about 11% of all Radio Science papers each year, and citations in 21 different journals in 2015). The improvement process of the model continues as new data become available and new modeling techniques provide a better representation of the observed variation patterns. We will introduce the latest version of the model (IRI-2016) and discuss the impact of the various improvements and new additions. Most importantly, two new models will be introduced for the F2 peak height, hmF2, developed from ionosonde measurements and COSMIC radio occultation data, respectively. In addition, IRI-2016 includes an improved representation of the ionosphere during the very low solar activity reached during the last solar minimum in 2008/2009. A number of other improvements and corrections were implemented in the model and will be discussed in this presentation. We will also report on recent IRI workshops, their findings, and plans for the future.

  20. Operational Resiliency Management: An Introduction to the Resiliency Engineering Framework

    DTIC Science & Technology

    2006-09-20

    Capability Maturity Model Integration (CMMI). © 2006 Carnegie Mellon University, FRB Bus Con Conference 2006: Managing Today's Operational Risk Challenges ... A model is needed to: identify and prioritize risk exposures; define a process improvement roadmap; measure and facilitate ... Why use a "model" approach? It provides an operational risk roadmap and is vendor-neutral, standardized, and unbiased.

  1. Learning to read aloud: A neural network approach using sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Joglekar, Umesh Dwarkanath

    1989-01-01

    An attempt to solve the problem of text-to-phoneme mapping, which does not appear amenable to solution by standard algorithmic procedures, is described. Experiments based on a model of distributed processing are also described. This model, sparse distributed memory (SDM), can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.
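    The core SDM mechanism can be sketched in a few lines: a fixed set of random hard-address locations is activated whenever a query address falls within a Hamming radius, data are written as bipolar increments to counters at the active locations, and reads sum and threshold those counters. A minimal illustrative sketch (the dimensions, radius, and seed are arbitrary assumptions, not values from this report):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 64, 500, 26  # address bits, hard locations, Hamming radius (all assumed)

hard_addresses = rng.integers(0, 2, size=(M, N))  # fixed random hard addresses
counters = np.zeros((M, N), dtype=int)            # one counter row per location

def active(addr):
    # Locations whose hard address lies within Hamming distance R of the query.
    return np.count_nonzero(hard_addresses != addr, axis=1) <= R

def write(addr, data):
    # Add the bipolar (+1/-1) form of the data word into every active location.
    counters[active(addr)] += 2 * data - 1

def read(addr):
    # Sum the counters of the active locations and threshold back to bits.
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

addr = rng.integers(0, 2, size=N)
data = rng.integers(0, 2, size=N)
write(addr, data)
recalled = read(addr)  # equals data whenever at least one location is active
```

    The supervised-learning mode mentioned above corresponds to repeating write() with corrected target data until read() reproduces the desired phoneme codes.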

  2. Model for integrated management of quality, labor risks prevention, environment and ethical aspects, applied to R&D&I and production processes in an organization

    NASA Astrophysics Data System (ADS)

    González, M. R.; Torres, F.; Yoldi, V.; Arcega, F.; Plaza, I.

    2012-04-01

    An integrated management model for an organization is proposed. The model is based on the continuous-improvement Plan-Do-Check-Act cycle and intends to integrate environmental, risk-prevention and ethical aspects, as well as research, development and innovation (R&D&I) project management, into the general quality management structure proposed by ISO 9001:2008. It aims to fulfill the standards ISO 9001, ISO 14001, OHSAS 18001, SGE 21 and 166002.

  3. Modeling of Photoionized Plasmas

    NASA Technical Reports Server (NTRS)

    Kallman, Timothy R.

    2010-01-01

    In this paper I review the motivation and current status of modeling of plasmas exposed to strong radiation fields, as it applies to the study of cosmic X-ray sources. This includes some of the astrophysical issues which can be addressed, the ingredients for the models, the current computational tools, the limitations imposed by currently available atomic data, and the validity of some of the standard assumptions. I will also discuss ideas for the future: challenges associated with future missions, opportunities presented by improved computers, and goals for atomic data collection.

  4. Analysis of Wind Turbine Simulation Models: Assessment of Simplified versus Complete Methodologies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honrubia-Escribano, A.; Jimenez-Buendia, F.; Molina-Garcia, A.

    This paper presents the current status of simplified wind turbine models used for power system stability analysis, based on the ongoing work within IEC 61400-27. This international standard, for which a technical committee was convened in October 2009, focuses on defining generic (also known as simplified) simulation models for both wind turbines and wind power plants. The results of the paper provide an improved understanding of the usability of generic models for conducting power system simulations.

  5. Barriers and attitudes influencing non-engagement in a peer feedback model to inform evidence for GP appraisal

    PubMed Central

    2012-01-01

    Background The UK general practitioner (GP) appraisal system is deemed to be an inadequate source of performance evidence to inform a future medical revalidation process. A long-running voluntary model of external peer review in the west of Scotland provides feedback by trained peers on the standard of GP colleagues' core appraisal activities and may 'add value' in strengthening the robustness of the current system in support of revalidation. A significant minority of GPs has participated in the peer feedback model, but a clear majority has yet to engage with it. We aimed to explore the views of non-participants to identify barriers to engagement and attitudes to external peer review as a means to inform the current appraisal system. Methods We conducted semi-structured interviews with a sample of west of Scotland GPs who had yet to participate in the peer review model. A thematic analysis of the interview transcriptions was conducted using a constant comparative approach. Results 13 GPs were interviewed of whom nine were males. Four core themes were identified in relation to the perceived and experienced 'value' placed on the topics discussed and their relevance to routine clinical practice and professional appraisal: 1. Value of the appraisal improvement activity. 2. Value of external peer review. 3. Value of the external peer review model and host organisation and 4. Attitudes to external peer review. Conclusions GPs in this study questioned the 'value' of participation in the external peer review model and the national appraisal system over the standard of internal feedback received from immediate work colleagues. There was a limited understanding of the concept, context and purpose of external peer review and some distrust of the host educational provider. 
Future engagement with the model by these GPs is likely to be influenced by policy to improve the standard of appraisal and contractual related activities, rather than a self-directed recognition of learning needs. PMID:22443714

  6. Barriers and attitudes influencing non-engagement in a peer feedback model to inform evidence for GP appraisal.

    PubMed

    Curnock, Esther; Bowie, Paul; Pope, Lindsey; McKay, John

    2012-03-23

    The UK general practitioner (GP) appraisal system is deemed to be an inadequate source of performance evidence to inform a future medical revalidation process. A long-running voluntary model of external peer review in the west of Scotland provides feedback by trained peers on the standard of GP colleagues' core appraisal activities and may 'add value' in strengthening the robustness of the current system in support of revalidation. A significant minority of GPs has participated in the peer feedback model, but a clear majority has yet to engage with it. We aimed to explore the views of non-participants to identify barriers to engagement and attitudes to external peer review as a means to inform the current appraisal system. We conducted semi-structured interviews with a sample of west of Scotland GPs who had yet to participate in the peer review model. A thematic analysis of the interview transcriptions was conducted using a constant comparative approach. 13 GPs were interviewed of whom nine were males. Four core themes were identified in relation to the perceived and experienced 'value' placed on the topics discussed and their relevance to routine clinical practice and professional appraisal: 1. Value of the appraisal improvement activity. 2. Value of external peer review. 3. Value of the external peer review model and host organisation and 4. Attitudes to external peer review. GPs in this study questioned the 'value' of participation in the external peer review model and the national appraisal system over the standard of internal feedback received from immediate work colleagues. There was a limited understanding of the concept, context and purpose of external peer review and some distrust of the host educational provider. Future engagement with the model by these GPs is likely to be influenced by policy to improve the standard of appraisal and contractual related activities, rather than a self-directed recognition of learning needs.

  7. Improved Traceability of a Small Satellite Mission Concept to Requirements Using Model Based System Engineering

    NASA Technical Reports Server (NTRS)

    Reil, Robin L.

    2014-01-01

    Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the "traditional" document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g., time and cost saved, increased product quality). This paper presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to demonstrate the completeness and consistency of the requirements to the mission concept. Anecdotal information and process-duration metrics are presented for both the MBSE and original DBSE design efforts of SporeSat.

  8. Improved Traceability of Mission Concept to Requirements Using Model Based Systems Engineering

    NASA Technical Reports Server (NTRS)

    Reil, Robin

    2014-01-01

    Model Based Systems Engineering (MBSE) has recently been gaining significant support as a means to improve the traditional document-based systems engineering (DBSE) approach to engineering complex systems. In the spacecraft design domain, there are many perceived and proposed benefits of an MBSE approach, but little analysis has been presented to determine the tangible benefits of such an approach (e.g., time and cost saved, increased product quality). This thesis presents direct examples of how developing a small satellite system model can improve traceability of the mission concept to its requirements. A comparison of the processes and approaches for MBSE and DBSE is made using the NASA Ames Research Center SporeSat CubeSat mission as a case study. A model of the SporeSat mission is built using the Systems Modeling Language standard and No Magic's MagicDraw modeling tool. The model incorporates mission concept and requirement information from the mission's original DBSE design efforts. Active dependency relationships are modeled to analyze the completeness and consistency of the requirements to the mission concept. Overall experience and methodology are presented for both the MBSE and original DBSE design efforts of SporeSat.

  9. Status of emerging standards for data definitions and transfer in the petroleum industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winczewski, L.M.

    1991-03-01

    Leading-edge hardware and software to store, retrieve, process, analyze, visualize, and interpret geoscience and petroleum data are improving continuously. A babel of definitions and formats for common industry data items limits the overall effectiveness of these computer-aided exploration and production tools. Custom data conversion required to load applications causes delays and exposes data content to error and degradation. Emerging industry-wide standards for management of geoscience and petroleum-related data are poised to overcome long-standing internal barriers to the full exploitation of these high-tech hardware/software systems. Industry technical organizations, such as AAPG, SEG, and API, have been actively pursuing industry-wide standards for data transfer, data definitions, and data models. These standard-defining groups are non-fee and solicit active participation from the entire petroleum community. The status of the most active of these groups is presented here. Data transfer standards are being pursued within AAPG (AAPG-B Data Transfer Standard), API (DLIS, for log data) and SEG (SEG-DEF, for seismic data). Converging data definitions, models, and glossaries are coming from the Petroleum Industry Data Dictionary Group (PIDD) and from subcommittees of the AAPG Computer Applications Committee. The National Computer Graphics Association is promoting development of standards for transfer of geographically oriented data. The API Well-Number standard is undergoing revision.

  10. Implementation of a standards-based anaesthesia record compliant with the health level 7 (HL7) clinical document architecture (CDA).

    PubMed

    Hurrell, M J; Monk, T G; Nicol, A; Norton, A N; Reich, D L; Walsh, J L

    2012-08-01

    With the increasing use of anaesthesia information management systems (AIMS) there is the opportunity for different institutions to aggregate and share information both nationally and internationally. Potential uses of such aggregated data include outcomes research, benchmarking and improvement in clinical practice and patient safety. However, these goals can only be achieved if data contained in records from different sources are truly comparable and there is semantic interoperability. This paper describes the development of a standard terminology for anaesthesia, as well as a Domain Analysis Model and implementation guide, to facilitate a standard representation of AIMS records as Extensible Markup Language (XML) documents compliant with the Health Level 7 Version 3 clinical document architecture. A representation of vital signs that is compliant with the ISO 11073 standard is also discussed.

  11. A numerical analysis of the aortic blood flow pattern during pulsed cardiopulmonary bypass.

    PubMed

    Gramigna, V; Caruso, M V; Rossi, M; Serraino, G F; Renzulli, A; Fragomeni, G

    2015-01-01

    In the modern era, stroke remains a main cause of morbidity after cardiac surgery despite continuing improvements in cardiopulmonary bypass (CPB) techniques. The aim of the current work was to numerically investigate the blood flow in the aorta and epiaortic vessels during standard and pulsed CPB, the latter obtained with an intra-aortic balloon pump (IABP). A multi-scale model, realized by coupling a 3D computational fluid dynamics study with a 0D model, was developed and validated with in vivo data. The presence of the IABP improved the flow pattern directed towards the epiaortic vessels, with a mean flow increase of 6.3%, and reduced flow vorticity.

  12. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculating the Fourier transform of signals with arbitrarily large quadratic phase that can be efficiently implemented in numerical simulations using the fast Fourier transform (FFT). The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
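    The grid-size saving comes from factoring the quadratic phase out of the transform kernel before discretization. One standard way to write such a factorization (the Bluestein/chirp-z identity, given here as an illustrative analogue rather than the authors' exact construction) uses $\omega t = \bigl[\omega^2/C + Ct^2 - (\omega - Ct)^2/C\bigr]/2$ for a chirp rate $C$:

```latex
F(\omega) = \int f(t)\,e^{-i\omega t}\,dt
          = e^{-i\omega^{2}/(2C)} \int \Bigl[f(t)\,e^{-iCt^{2}/2}\Bigr]\,
            e^{\,i(\omega - Ct)^{2}/(2C)}\,dt .
```

    If $f(t) = g(t)\,e^{iCt^{2}/2}$ with $g$ nearly transform-limited and $C$ matched to the chirp rate, the bracketed factor reduces to $g(t)$: every factor in the integral then varies slowly and can be sampled on a much coarser grid, in the spirit of the splitting described above.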

  13. A standardization model based on image recognition for performance evaluation of an oral scanner.

    PubMed

    Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok

    2017-12-01

    Accurate information is essential in dentistry. Image information on missing teeth is used by optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was examined based on cases of image-recognition error in linear discriminant analysis (LDA), and a model combining the variables was designed with reference to ISO 12836:2015. The basic model was fabricated by applying four factors to the tooth profile (chamfer, groove, curve, and square) and to the bottom surface. Photo-type and video-type scanners were used to analyze 3D images after image capture. Scans were repeated in the prescribed sequence to distinguish models that could be reconstructed from those that could not, and the best-performing design was confirmed. The initial basic model could not yield a 3D shape even when scanned several times. Subsequently, the recognition rate of the image improved with every variable factor, the difference depending on the tooth profile and the pattern of the bottom surface. Based on the LDA recognition error, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, the difference between classes needs to be provided when developing a standardized model.

  14. Effectiveness of Japanese SHARE model in improving Taiwanese healthcare personnel's preference for cancer truth telling.

    PubMed

    Tang, Woung-Ru; Chen, Kuan-Yu; Hsu, Sheng-Hui; Juang, Yeong-Yuh; Chiu, Shin-Che; Hsiao, Shu-Chun; Fujimori, Maiko; Fang, Chun-Kai

    2014-03-01

    Communication skills training (CST) based on the Japanese SHARE model of family-centered truth telling in Asian countries has been adopted in Taiwan. However, its effectiveness in Taiwan has only been preliminarily verified. This study aimed to test the effect of SHARE model-centered CST on Taiwanese healthcare providers' truth-telling preference, to determine the effect size, and to compare the effect of 1-day and 2-day CST programs on participants' truth-telling preference. For this one-group, pretest-posttest study, 10 CST programs were conducted from August 2010 to November 2011 under certified facilitators and with standardized patients. Participants (257 healthcare personnel from northern, central, southern, and eastern Taiwan) chose the 1-day (n = 94) or 2-day (n = 163) CST program as convenient. Participants' self-reported truth-telling preference was measured before and immediately after CST programs, with CST program assessment afterward. The CST programs significantly improved healthcare personnel's truth-telling preference (mean pretest and posttest scores ± standard deviation (SD): 263.8 ± 27.0 vs. 281.8 ± 22.9, p < 0.001). The CST programs effected a significant, large (d = 0.91) improvement in overall truth-telling preference and significantly improved method of disclosure, emotional support, and additional information (p < 0.001). Participation in 1-day or 2-day CST programs did not significantly affect participants' truth-telling preference (p > 0.05) except for the setting subscale. Most participants were satisfied with the CST programs (93.8%) and were willing to recommend them to colleagues (98.5%). The SHARE model-centered CST programs significantly improved Taiwanese healthcare personnel's truth-telling preference. Future studies should objectively assess participants' truth-telling preference, for example, by cancer patients, their families, and other medical team personnel and at longer times after CST programs. 
Copyright © 2013 John Wiley & Sons, Ltd.

  15. Application of data assimilation methods for analysis and integration of observed and modeled Arctic Sea ice motions

    NASA Astrophysics Data System (ADS)

    Meier, Walter Neil

    This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. 
This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
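    The optimal interpolation update used in the assimilation can be written as x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹, where B and R are the background (model) and observation (SSM/I or buoy) error covariances. A toy sketch of that step (the two-component state and the covariances below are invented for illustration, not the thesis' fitted error statistics):

```python
import numpy as np

def optimal_interpolation(x_b, y, H, B, R):
    # Analysis update x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^-1.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b)

# Toy example: background ice-motion components vs. one buoy observation.
x_b = np.array([1.0, 0.0])   # background u, v drift (invented units)
H = np.array([[1.0, 0.0]])   # observation operator: only u is observed
B = np.eye(2)                # background error covariance (assumed)
R = np.array([[1.0]])        # observation error covariance (assumed)
y = np.array([3.0])          # buoy-observed u drift

x_a = optimal_interpolation(x_b, y, H, B, R)
# With equal error variances the analysis splits the difference: u_a = 2.0.
```

    The weighting explains the reported behavior: where SSM/I errors are large relative to model errors, R dominates and the analysis stays near the model motion, and vice versa, which is why the combined field beats either source alone.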

  16. Diabetes Care Program of Nova Scotia: Celebrating 25 Years of Improving Diabetes Care in Nova Scotia.

    PubMed

    Payne, Jennifer I; Dunbar, Margaret J; Talbot, Pamela; Tan, Meng H

    2018-06-01

    The Diabetes Care Program of Nova Scotia (DCPNS)'s mission is "to improve, through leadership and partnerships, the health of Nova Scotians living with, affected by, or at risk of developing diabetes." Working together with local, provincial and national partners, the DCPNS has improved and standardized diabetes care in Nova Scotia over the past 25 years by developing and deploying a resourceful and collaborative program model. This article describes the model and highlights its key achievements. With balanced representation from frontline providers through to senior decision makers in health care, the DCPNS works across the age continuum, supporting the implementation of national clinical practice guidelines and, when necessary, developing provincial guidelines to meet local needs. The development and implementation of standardized documentation and data collection tools in all diabetes centres created a robust opportunity for the development and expansion of the DCPNS registry. This registry provides useful clinical and statistical information to staff, providers within the circle of care, management and senior leadership. Data are used to support individual care, program planning, quality improvement and business planning at both the local and the provincial levels. The DCPNS supports the sharing of new knowledge and advances through continuous education for providers. The DCPNS's ability to engage diabetes educators and key physician champions has ensured balanced perspectives in the creation of tools and resources that can be effective in real-world practice. The DCPNS has evolved to become an illustrative example of the chronic care model in action. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.

  17. [New ways of higher education in nursing: globalisation of nursing leadership and its teaching--dual degree in nursing].

    PubMed

    Pop, Marcel; Hollós, Sándor; Vingender, István; Mészáros, Judit

    2009-03-08

    Our paper presents a new initiative: an international cooperation aiming to develop a dual degree program in nursing, the so-called Transatlantic Curriculum in Nursing. The candidates--after successful completion of their studies--will receive a European and an American partner diploma in nursing. The objectives are to prepare an internationally and culturally competent workforce; develop the practice of nursing student exchange programs; establish a dual-degree model independent of geographical, political or cultural borders; and spread evidence-based nursing standards in daily practice. The partners in this initiative are Semmelweis University in Budapest, Hungary; Nazareth College of Rochester, NY, USA; and Laurea University in Tikkurila, Finland. The planned activities in the framework of the program are mutual student and staff mobility, joint curriculum development and teaching, and the determination of joint standards. The expected outcomes are: to develop a standardized model for the enhancement and implementation of international educational programs in nursing; to improve institutional work culture; to improve professional terminology and cultural abilities; and to create the model of a new type of nursing professional with a high level of cultural and language competence, which is indispensable for participating in global programs.

  18. REAL-PANLAR Project for the Implementation and Accreditation of Centers of Excellence in Rheumatoid Arthritis Throughout Latin America: A Consensus Position Paper From REAL-PANLAR Group on Improvement of Rheumatoid Arthritis Care in Latin America Establishing Centers of Excellence.

    PubMed

    Santos-Moreno, Pedro; Galarza-Maldonado, Claudio; Caballero-Uribe, Carlo V; Cardiel, Mario H; Massardo, Loreto; Soriano, Enrique R; Olano, José Aguilar; Díaz Coto, José F; Durán Pozo, Gabriel R; da Silveira, Inês Guimarães; de Castrejón, Vianna J Khoury; Pérez, Leticia Lino; Méndez Justo, Carlos A; Montufar Guardado, Rubén A; Muños, Rafael; Elvir, Sergio Murillo; Paredes Domínguez, Ernesto R; Pons-Estel, Bernardo; Ríos Acosta, Carlos R; Sandino, Sayonara; Toro Gutiérrez, Carlos E; Villegas de Morales, Sol María; Pineda, Carlos

    2015-06-01

    A consensus meeting of representatives of 16 Latin American and Caribbean countries and the REAL-PANLAR group convened in Bogotá to provide recommendations for improving the quality of care of patients with rheumatoid arthritis (RA) in Latin America, defining minimum standards of care and the concept of a center of excellence in RA. Twenty-two rheumatologists from 16 Latin American countries with a special interest in quality of care in RA participated in the consensus meeting. Two Colombian RA patients and 2 health care excellence advisors were also invited to the meeting. A RAND-modified Delphi procedure of 5 steps was applied to define categories of centers of excellence. During a 1-day meeting, working groups were created in order to discuss and validate the minimum quality-of-care standards for the 3 proposed types of centers of excellence in RA. Positive votes from at least 60% of the attending leaders were required for the approval of each standard. Twenty-two opinion leaders from the PANLAR countries and the REAL-PANLAR group participated in the discussion and definition of the standards. One hundred percent of the participants agreed with setting up centers of excellence in RA throughout Latin America. Three types of centers of excellence and their criteria were defined, according to indicators of structure, processes, and outcomes: standard, optimal, and model. The standard level should have basic structure and process indicators, the intermediate or optimal level should accomplish more structure and process indicators, and the model level should also fulfill outcome indicators and patient experience. This is the first Latin American effort to standardize and harmonize the treatment provided to RA patients and to establish centers of excellence that would offer RA patients acceptable clinical results and high levels of safety.

  19. Are students' impressions of improved learning through active learning methods reflected by improved test scores?

    PubMed

    Everly, Marcee C

    2013-02-01

    To report the transformation from lecture to more active learning methods in a maternity nursing course and to evaluate whether student perception of improved learning through active-learning methods is supported by improved test scores. The process of transforming a course into an active-learning model of teaching is described. A voluntary mid-semester survey for student acceptance of the new teaching method was conducted. Course examination results, from both a standardized exam and a cumulative final exam, among students who received lecture in the classroom and students who had active learning activities in the classroom were compared. Active learning activities were very acceptable to students. The majority of students reported learning more from having active-learning activities in the classroom rather than lecture-only and this belief was supported by improved test scores. Students who had active learning activities in the classroom scored significantly higher on a standardized assessment test than students who received lecture only. The findings support the use of student reflection to evaluate the effectiveness of active-learning methods and help validate the use of student reflection of improved learning in other research projects. Copyright © 2011 Elsevier Ltd. All rights reserved.
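The abstract does not state which significance test supported the "scored significantly higher" finding; a common choice for comparing mean exam scores of two independent groups is Welch's t-test, sketched here on entirely made-up scores:

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical standardized-exam scores for the two classroom formats
lecture_only = [68, 72, 75, 70, 74, 69, 71]
active_learning = [78, 82, 80, 77, 85, 79, 81]
t = welch_t(active_learning, lecture_only)
print(round(t, 2))  # a large positive t favors the active-learning group
```

A t statistic this far from zero would be compared against the t distribution (with Welch-adjusted degrees of freedom) to obtain a p-value.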

  20. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    NASA Technical Reports Server (NTRS)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression response surface equation (RSE) model. Data obtained from the operations of a major airline's passenger transport aircraft type at Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network models' errors represents more than a 5% reduction compared to the RSE model errors, and at least a 10% reduction from the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
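A response surface equation is essentially a polynomial multiple regression fitted by least squares. The sketch below is illustrative only: the predictors, coefficients, and data are invented, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors: landing weight (1000 lb) and headwind (kt)
weight = rng.uniform(120, 180, 200)
wind = rng.uniform(0, 20, 200)
# Synthetic "true" landing speed (kt) with noise; coefficients are made up
speed = 90 + 0.35 * weight - 0.5 * wind + rng.normal(0, 2, 200)

# Second-order response surface terms: 1, w, h, w^2, h^2, w*h
X = np.column_stack([np.ones_like(weight), weight, wind,
                     weight**2, wind**2, weight * wind])
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
resid = speed - X @ coef
print(np.std(resid))  # spread of the fitted RSE model's prediction error
```

Comparing this residual standard deviation against that of a baseline predictor (e.g., a constant reference speed per aircraft type) is the kind of error-reduction measure the paper reports.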

  1. State-of-the-art satellite laser range modeling for geodetic and oceanographic applications

    NASA Technical Reports Server (NTRS)

    Klosko, Steve M.; Smith, David E.

    1993-01-01

    Significant improvements have been made in the modeling and accuracy of Satellite Laser Ranging (SLR) data since the launch of LAGEOS in 1976. These include improved models of the static geopotential, solid-Earth and ocean tides, more advanced atmospheric drag models, and the adoption of the J2000 reference system with improved nutation and precession. Site positioning using SLR systems currently yields approximately 2 cm static and 5 mm/y kinematic descriptions of the geocentric location of these sites. Incorporation of a large set of observations from advanced SLR tracking systems has made major contributions to gravitational field models and has advanced the state of the art in precision orbit determination. SLR is the baseline tracking system for the altimeter-bearing TOPEX/Poseidon and ERS-1 satellites and thus will play an important role in providing the Conventional Terrestrial Reference Frame for instantaneously locating the geocentric position of the ocean surface over time, in providing an unchanging range standard for altimeter range calibration, and in improving the geoid models needed to separate gravitational signals from the ocean circulation signals seen in the sea surface. Nevertheless, despite the unprecedented improvements in the accuracy of the models used to support orbit reduction of laser observations, there remain systematic unmodeled effects that limit the full exploitation of modern SLR data.

  2. Toward Standardizing a Lexicon of Infectious Disease Modeling Terms.

    PubMed

    Milwid, Rachael; Steriu, Andreea; Arino, Julien; Heffernan, Jane; Hyder, Ayaz; Schanzer, Dena; Gardner, Emma; Haworth-Brockman, Margaret; Isfeld-Kiely, Harpa; Langley, Joanne M; Moghadas, Seyed M

    2016-01-01

    Disease modeling is increasingly being used to evaluate the effect of health intervention strategies, particularly for infectious diseases. However, the utility and application of such models are hampered by the inconsistent use of infectious disease modeling terms between and within disciplines. We sought to standardize the lexicon of infectious disease modeling terms and develop a glossary of terms commonly used in describing models' assumptions, parameters, variables, and outcomes. We combined a comprehensive literature review of relevant terms with an online forum discussion in a virtual community of practice, mod4PH (Modeling for Public Health). Using a convergent discussion process and consensus amongst the members of mod4PH, a glossary of terms was developed as an online resource. We anticipate that the glossary will improve inter- and intradisciplinary communication and will result in a greater uptake and understanding of disease modeling outcomes in health policy decision-making. We highlight the role of the mod4PH community of practice and the methodologies used in this endeavor to link theory, policy, and practice in the public health domain.

  3. DSMC study of oxygen shockwaves based on high-fidelity vibrational relaxation and dissociation models

    NASA Astrophysics Data System (ADS)

    Borges Sebastião, Israel; Kulakhmetov, Marat; Alexeenko, Alina

    2017-01-01

    This work evaluates high-fidelity vibrational-translational (VT) energy relaxation and dissociation models for pure O2 normal shockwave simulations with the direct simulation Monte Carlo (DSMC) method. The O2-O collisions are described using ab initio state-specific relaxation and dissociation models. The Macheret-Fridman (MF) dissociation model is adapted to the DSMC framework by modifying the standard implementation of the total collision energy (TCE) model. The O2-O2 dissociation is modeled with this TCE+MF approach, which is calibrated with O2-O ab initio data and experimental equilibrium dissociation rates. The O2-O2 vibrational relaxation is modeled via the Larsen-Borgnakke model, calibrated to experimental VT rates. All the present results are compared to experimental data and previous calculations available in the literature. It is found that, in general, the ab initio dissociation model is better than the TCE model at matching the shock experiments. Therefore, when available, efficient ab initio models are preferred over phenomenological models. We also show that the proposed TCE + MF formulation can be used to improve the standard TCE model results when ab initio data are not available or limited.

  4. A PRELIMINARY EVALUATION OF MODELS-3 CMAQ USING PARTICULATE MATTER DATA FROM THE IMPROVE NETWORK

    EPA Science Inventory

    The Clean Air Act and its Amendments require the United States Environmental Protection Agency (EPA) to establish National Ambient Air Quality Standards for Particulate Matter (PM) and to assess current and future air quality regulations designed to protect human health and wel...

  5. Getting Out of the Way: A Lesson in Change

    ERIC Educational Resources Information Center

    McEnery, Douglas

    2005-01-01

    When the author's school implemented a model of standards-driven, research-based teaching practices, he realized that working individually with teachers on their professional development goals was not improving teacher performance or student achievement. As such, he developed a more facilitative role and listened to groups of teachers discuss…

  6. The Higher Education of Gaming

    ERIC Educational Resources Information Center

    Squire, Kurt D.; Giovanetto, Levi

    2008-01-01

    New models of schooling are necessary as educational institutions attempt to transition into the digital age. This article is an ethnography of Apolyton University, an informal online university of gamers created to enhance pleasure from the game experience, teach the game, and improve upon the game's standard rule set. It identifies the life…

  7. Freshman Learning Communities, College Performance, and Retention. Working Paper 2005-22

    ERIC Educational Resources Information Center

    Hotchkiss, Julie L.; Moore, Robert E.; Pitts, M. Melinda

    2005-01-01

    This paper applies a standard treatment effects model to determine that participation in Freshman Learning Communities (FLCs) improves academic performance and retention. Not controlling for individual self-selection into FLC participation leads one to incorrectly conclude that the impact is the same across race and gender groups. Accurately…

  8. Infection dynamics of foot-and-mouth disease virus in cattle following intra-nasopharyngeal inoculation or contact exposure

    USDA-ARS?s Scientific Manuscript database

    For the purpose of developing an improved experimental model for studies of foot-and-mouth disease virus (FMDV) infection in cattle, three different experimental systems based on natural or simulated-natural virus exposure were compared under standardized experimental conditions. Antemortem infecti...

  9. Syncope management unit: evolution of the concept and practice implementation.

    PubMed

    Shen, Win K; Traub, Stephen J; Decker, Wyatt W

    2013-01-01

    Syncope, a clinical syndrome, has many potential causes. The prognosis of a patient experiencing syncope varies from a benign outcome to an increased risk of mortality or sudden death, determined by the etiology of syncope and the presence of underlying disease. Because a definitive diagnosis often cannot be established immediately, hospital admission is frequently recommended as the "default" approach to ensure the patient's safety and an expedited evaluation. Hospital care is costly, while no studies have shown that clinical outcomes are improved by the in-patient practice approach. The syncope unit is an evolving practice model based on the hypothesis that a multidisciplinary team of physicians and allied staff with expertise in syncope management, working together and equipped with standard clinical tools, could improve clinical outcomes. Preliminary data have demonstrated that a specialized syncope unit can improve diagnosis in a timely manner, reduce hospital admissions, and decrease the use of unnecessary diagnostic tests. In this review, models of syncope units in the emergency department, hospital, and outpatient clinic from different practices in different countries are discussed. Similarities and differences of these syncope units are compared. Outcomes and endpoints from these studies are summarized. Developing a syncope unit with a standardized protocol applicable to most practice settings would be an ultimate goal for clinicians and investigators who have the interest, expertise, and commitment to improve care for this large patient population. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. The Value Estimation of an HFGW Frequency Time Standard for Telecommunications Network Optimization

    NASA Astrophysics Data System (ADS)

    Harper, Colby; Stephenson, Gary

    2007-01-01

    The emerging technology of gravitational wave control is used to augment a communication system using a development roadmap suggested in Stephenson (2003) for applications emphasized in Baker (2005). In the present paper, consideration is given to the value of a High Frequency Gravitational Wave (HFGW) channel purely as a means of frequency and time reference distribution for use within conventional Radio Frequency (RF) telecommunications networks. Specifically, the native value of conventional telecommunications networks may be optimized by using an unperturbed frequency time standard (FTS) to (1) improve terminal navigation and Doppler estimation performance via improved time difference of arrival (TDOA) from a universal time reference, and (2) improve acquisition speed, coding efficiency, and dynamic bandwidth efficiency through the use of a universal frequency reference. A model utilizing a discounted cash flow technique provides an estimate of the additional value that HFGW FTS technology could bring to a mixed-technology HFGW/RF network. By applying a simple net present value analysis with supporting reference valuations to such a network, it is demonstrated that an HFGW FTS could create a sizable improvement within an otherwise conventional RF telecommunications network. Our conservative model establishes a low-side value estimate of approximately 50B USD net present value for an HFGW FTS service, with reasonable high-side values reaching significant multiples of this low-side floor.
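The valuation rests on a standard discounted-cash-flow calculation. A minimal sketch follows; the cash flows and discount rate are hypothetical placeholders, not the paper's figures:

```python
def npv(rate, cashflows):
    """Net present value of annual cash flows; the first flow is at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical incremental cash flows ($B) from an improved time/frequency
# reference: an up-front deployment cost followed by annual network benefits
flows = [-10.0, 8.0, 9.0, 10.0, 11.0, 12.0]
print(round(npv(0.10, flows), 1))  # → 27.2 at a 10% discount rate
```

In a real analysis each year's benefit would itself be built up from reference valuations (spectrum efficiency, navigation accuracy, and so on) as the paper describes.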

  11. Assessment of State-of-the-Art Dust Emission Scheme in GEOS

    NASA Technical Reports Server (NTRS)

    Darmenov, Anton; Liu, Xiaohong; Prigent, Catherine

    2017-01-01

    The GEOS modeling system has been extended with a state-of-the-art parameterization of dust emissions based on the vertical flux formulation described in Kok et al. (2014). The new dust scheme was coupled with the GOCART and MAM aerosol models. In the present study we compare dust emissions, aerosol optical depth (AOD), and radiative fluxes from GEOS experiments with the standard and new dust emissions. AOD from the model experiments is also compared with AERONET and satellite-based data. Based on this comparative analysis, we conclude that the new parameterization improves the GEOS capability to model dust aerosols originating from African sources; however, it leads to overestimation of dust emissions from Asian and Arabian sources. Further regional tuning of key parameters controlling the threshold friction velocity may be required in order to achieve a more definitive and uniform improvement in dust modeling skill.

  12. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.

    Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. The ocean's memory, due to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: running slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble.
Instead of evaluating one prediction, but the whole ensemble with its ensemble average, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure applying the average during the model run, called ensemble dispersion filter, results in more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhDT........15A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhDT........15A"><span>The impact of Next Generation Science Standards (NGSS) professional development on the self-efficacy of science teachers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Akella, Somi Devi M.</p> <p></p> <p>In 2012, the National Research Council introduced the Next Generation Science Standards (NGSS), which were created to improve the K-12 education in the U.S. and stress the importance of providing professional development (PD) to acquire the knowledge, skills, and self-efficacy to design lessons to meet high standards of teaching and learning. Bandura's (1977) theory of self-efficacy posits that people are motivated to perform an action if they are confident that they can perform the action successfully. 
The purpose of this survey research was to investigate the impact of professional development on the self-efficacy of science teachers with regard to the NGSS practice of Analyzing and Interpreting Data as well as to probe teachers' perceptions of barriers to their self-efficacy in applying this practice. The study found that focused and targeted PD helped improve participants' self-efficacy in incorporating the NGSS practices and addressed several barriers to teacher self-efficacy. In response to findings, Akella's Science Teaching Efficacy Professional Development (ASTEPD) model is proposed as a tool to guide PD practice and, thus, helps improve teacher self-efficacy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoJI.tmp..147P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoJI.tmp..147P"><span>Spectral combination of spherical gravitational curvature boundary-value problems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>PitoÅák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel</p> <p>2018-04-01</p> <p>Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased- and unbiased-types are derived. In numerical studies, we investigate the performance of the developed mathematical models for the gravitational field modelling in the area of Central Europe based on simulated satellite measurements. 
Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering the Gaussian noise with the standard deviation of 6.5× 10-17 m-1s-2 in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with the standard deviation of 2.104 m2s-2, but the root mean square error is the largest and reaches 9.734 m2s-2. Using the spectral combination of all gravitational curvatures the root mean square error is more than 400 times smaller but the standard deviation reaches 17.234 m2s-2. The combination of more components decreases the root mean square error of the corresponding solutions while the standard deviations of the combined solutions do not improve as compared to the solution from the vertical-vertical-vertical component. 
The presented method represents a weight mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves standard deviation of the solution based only on the least accurate components.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20060052899','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20060052899"><span>Improved Airport Noise Modeling for High Altitudes and Flexible Flight Operations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Forsyth, David W.; Follet, Jesse I.</p> <p>2006-01-01</p> <p>The FAA's Integrated Noise Model (INM) is widely used to estimate noise in the vicinity of airports. This study supports the development of standards by which the fleet data in the INM can be updated. A comparison of weather corrections to noise data using INM Spectral Classes is made with the Boeing integrated method. The INM spectral class method is shown to work well, capturing noise level differences due to weather especially at long distances. Two studies conducted at the Denver International Airport are included in the appendices. The two studies adopted different approaches to modeling flight operations at the airport. 
When compared to the original, year 2000, results, it is apparent that changes made to the INM in terms of modeling processes and databases have resulted in improved agreement between predicted and measured noise levels.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26082543','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26082543"><span>Improving global environmental management with standard corporate reporting.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kareiva, Peter M; McNally, Brynn W; McCormick, Steve; Miller, Tom; Ruckelshaus, Mary</p> <p>2015-06-16</p> <p>Multinational corporations play a prominent role in shaping the environmental trajectory of the planet. The integration of environmental costs and benefits into corporate decision-making has enormous, but as yet unfulfilled, potential to promote sustainable development. To help steer business decisions toward better environmental outcomes, corporate reporting frameworks need to develop scientifically informed standards that consistently consider land use and land conversion, clean air (including greenhouse gas emissions), availability and quality of freshwater, degradation of coastal and marine habitats, and sustainable use of renewable resources such as soil, timber, and fisheries. Standardization by itself will not be enough--also required are advances in ecosystem modeling and in our understanding of critical ecological thresholds. With improving ecosystem science, the opportunity for realizing a major breakthrough in reporting corporate environmental impacts and dependencies has never been greater. 
Now is the time for ecologists to take advantage of an explosion of sustainability commitments from business leaders and expanding pressure for sustainable practices from shareholders, financial institutions, and consumers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4475964','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4475964"><span>Improving global environmental management with standard corporate reporting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Kareiva, Peter M.; McNally, Brynn W.; McCormick, Steve; Miller, Tom; Ruckelshaus, Mary</p> <p>2015-01-01</p> <p>Multinational corporations play a prominent role in shaping the environmental trajectory of the planet. The integration of environmental costs and benefits into corporate decision-making has enormous, but as yet unfulfilled, potential to promote sustainable development. To help steer business decisions toward better environmental outcomes, corporate reporting frameworks need to develop scientifically informed standards that consistently consider land use and land conversion, clean air (including greenhouse gas emissions), availability and quality of freshwater, degradation of coastal and marine habitats, and sustainable use of renewable resources such as soil, timber, and fisheries. Standardization by itself will not be enough—also required are advances in ecosystem modeling and in our understanding of critical ecological thresholds. With improving ecosystem science, the opportunity for realizing a major breakthrough in reporting corporate environmental impacts and dependencies has never been greater. 
Now is the time for ecologists to take advantage of an explosion of sustainability commitments from business leaders and expanding pressure for sustainable practices from shareholders, financial institutions, and consumers. PMID:26082543</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvD..97g3003N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvD..97g3003N"><span>Decay of standard-model-like Higgs boson h →μ τ in a 3-3-1 model with inverse seesaw neutrino masses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nguyen, T. Phong; Le, T. Thuy; Hong, T. T.; Hue, L. T.</p> <p>2018-04-01</p> <p>By adding new gauge singlets of neutral leptons, the improved versions of the 3-3-1 models with right-handed neutrinos have been recently introduced in order to explain recent experimental neutrino oscillation data through the inverse seesaw mechanism. We prove that these models predict promising signals of lepton-flavor-violating decays of the standard-model-like Higgs boson h10→μ τ ,e τ , which are suppressed in the original versions. One-loop contributions to these decay amplitudes are introduced in the unitary gauge. Based on a numerical investigation, we find that the branching ratios of the decays h10→μ τ ,e τ can reach values of 10-5 in the regions of parameter space satisfying the current experimental data of the decay μ →e γ . The value of 10-4 appears when the Yukawa couplings of leptons are close to the perturbative limit. 
Some interesting properties of these regions of parameter space are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21308561-probing-particle-nuclear-physics-models-neutrinoless-double-beta-decay-different-nuclei','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21308561-probing-particle-nuclear-physics-models-neutrinoless-double-beta-decay-different-nuclei"><span>Probing particle and nuclear physics models of neutrinoless double beta decay with different nuclei</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Fogli, G. L.; Rotunno, A. M.; Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari</p> <p>2009-07-01</p> <p>Half-life estimates for neutrinoless double beta decay depend on particle physics models for lepton-flavor violation, as well as on nuclear physics models for the structure and transitions of candidate nuclei. Different models considered in the literature can be contrasted - via prospective data - with a 'standard' scenario characterized by light Majorana neutrino exchange and by the quasiparticle random phase approximation, for which the theoretical covariance matrix has been recently estimated. We show that, assuming future half-life data in four promising nuclei (⁷⁶Ge, ⁸²Se, ¹³⁰Te, and ¹³⁶Xe), the standard scenario can be distinguished from a few nonstandard physics models, while being compatible with alternative state-of-the-art nuclear calculations (at 95% C.L.). Future signals in different nuclei may thus help to discriminate at least some decay mechanisms, without being spoiled by current nuclear uncertainties. 
Prospects for possible improvements are also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ESD.....8..889G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ESD.....8..889G"><span>A method to preserve trends in quantile mapping bias correction of climate modeled temperature</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Grillakis, Manolis G.; Koutroulis, Aristeidis G.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.</p> <p>2017-09-01</p> <p>Bias correction of climate variables is a standard practice in climate change impact (CCI) studies. Various methodologies have been developed within the framework of quantile mapping. However, it is well known that quantile mapping may significantly modify the long-term statistics due to the time dependency of the temperature bias. Here, a method to overcome this issue without compromising the day-to-day correction statistics is presented. The methodology separates the modeled temperature signal into a normalized and a residual component relative to the modeled reference period climatology, in order to adjust the biases only for the former and preserve the signal of the latter. The results show that this method allows for the preservation of the originally modeled long-term signal in the mean, the standard deviation and higher and lower percentiles of temperature. 
To illustrate the improvements, the methodology is tested on daily time series obtained from five EURO-CORDEX regional climate models (RCMs).</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016PhRvD..94e3014G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016PhRvD..94e3014G"><span>Boosting invisible searches via ZH: From the Higgs boson to dark matter simplified models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gonçalves, Dorival; Krauss, Frank; Kuttimalai, Silvan; Maierhöfer, Philipp</p> <p>2016-09-01</p> <p>Higgs boson production in association with a 
Z boson at the LHC is analyzed, both in the Standard Model and in simplified model extensions for dark matter. We focus on H → invisible searches and show that loop-induced components for both the signal and background present phenomenologically relevant contributions to the BR(H → inv) limits. We also show how multijet merging improves the description of key distributions to this analysis. In addition, the constraining power of this channel to simplified models for dark matter with scalar and pseudoscalar mediators ϕ and A is discussed and compared with noncollider constraints. We find that with 100 fb⁻¹ of LHC data, this channel provides competitive constraints to the noncollider bounds, for most of the parameter space we consider, bounding the universal Standard Model fermion-mediator strength at gv < 1 for moderate masses in the range of 100 GeV < mϕ/A < 400 GeV.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3016398','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3016398"><span>Applying an Empirical Hydropathic Forcefield in Refinement May Improve Low-Resolution Protein X-Ray Crystal Structures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Koparde, Vishal N.; Scarsdale, J. Neel; Kellogg, Glen E.</p> <p>2011-01-01</p> <p>Background The quality of X-ray crystallographic models for biomacromolecules refined from data obtained at high-resolution is assured by the data itself. However, at low-resolution, >3.0 Å, additional information is supplied by a forcefield coupled with an associated refinement protocol. These resulting structures are often of lower quality and thus unsuitable for downstream activities like structure-based drug discovery. 
Methodology An X-ray crystallography refinement protocol that enhances standard methodology by incorporating energy terms from the HINT (Hydropathic INTeractions) empirical forcefield is described. This protocol was tested by refining synthetic low-resolution structural data derived from 25 diverse high-resolution structures, and referencing the resulting models to these structures. The models were also evaluated with global structural quality metrics, e.g., Ramachandran score and MolProbity clashscore. Three additional structures, for which only low-resolution data are available, were also re-refined with this methodology. Results The enhanced refinement protocol is most beneficial for reflection data at resolutions of 3.0 Å or worse. At the low-resolution limit, ≥4.0 Å, the new protocol generated models whose Cα positions are 0.18 Å closer in RMSD to the reference high-resolution structure, whose Ramachandran scores improved by 13%, and whose clashscores improved by 51%, all in comparison to models generated with the standard refinement protocol. The hydropathic forcefield terms are at least as effective as Coulombic electrostatic terms in maintaining polar interaction networks, and significantly more effective in maintaining hydrophobic networks, as synthetic resolution is decremented. Even at resolutions ≥4.0 Å, these latter networks are generally native-like, as measured with a hydropathic interactions scoring tool. 
PMID:21246043</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/945536','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/945536"><span>Improvements in Modeling Au Sphere Non-LTE X-ray Emission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Rosen, M D; Scott, H A; Suter, L J</p> <p>2008-10-30</p> <p>We've previously reported on experiments at the Omega laser at URLLE, in which 1.0 mm diameter, Au-coated spheres were illuminated at either 10¹⁴ W/cm² (10 kJ/3 ns) or at 10¹⁵ W/cm² (30 kJ/1 ns). Spectral information on the 1 keV thermal x-rays, as well as the multi-keV M-band, was obtained. We compared a variety of non-LTE atomic physics packages to this data with varying degrees of success. In this paper we broaden the scope of the investigation and compare the data to newer models: (1) an improved Detailed Configuration Accounting (DCA) method; and (2) an adjusted version of the standard XSN non-LTE model, tuned so that the coronal emission calculated by XSN better matches that calculated by SCRAM, a more sophisticated stand-alone model. 
We show some improvements in the agreement with Omega data when using either of these new approaches.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2008SPIE.6941E..0KD','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2008SPIE.6941E..0KD"><span>Modeling the effects of contrast enhancement on target acquisition performance</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Du Bosq, Todd W.; Fanning, Jonathan D.</p> <p>2008-04-01</p> <p>Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, by better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast enhancement processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. 
The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20130012748','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20130012748"><span>Dimensions of Credibility in Models and Simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Steele, Martin J.</p> <p>2008-01-01</p> <p>Based on the National Aeronautics and Space Administration's (NASA's) work in developing a standard for models and simulations (M&S), the subject of credibility in M&S became a distinct focus. This focus is an indirect result of the Space Shuttle Columbia Accident Investigation Board (CAIB), which eventually resulted in an action, among others, to improve the rigor in NASA's M&S practices. The focus of this action came to mean a standardized method for assessing and reporting results from any type of M&S. As is typical in the standards development process, this necessarily developed into defining a common terminology base, common documentation requirements (especially for M&S used in critical decision making), and a method for assessing the credibility of M&S results. What surfaced in the development of the NASA Standard were the various dimensions of credibility to consider when accepting the results from any model or simulation analysis. The eight generally relevant factors of credibility chosen in the NASA Standard proved to be only one aspect of the dimensionality of M&S credibility. At the next level of detail, the full comprehension of some of the factors requires an understanding along a couple of dimensions as well. 
Included in this discussion are the prerequisites for the appropriate use of a given M&S, the choice of factors in credibility assessment with their inherent dimensionality, and minimum requirements for fully reporting M&S results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28433428','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28433428"><span>Effect of Risk Acceptance for Bundled Care Payments on Clinical Outcomes in a High-Volume Total Joint Arthroplasty Practice After Implementation of a Standardized Clinical Pathway.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kee, James R; Edwards, Paul K; Barnes, Charles L</p> <p>2017-08-01</p> <p>The Bundled Payments for Care Improvement (BPCI) initiative and the Arkansas Payment Improvement (API) initiative seek to incentivize reduced costs and improved outcomes compared with the previous fee-for-service model. Before participation, our practice initiated a standardized clinical pathway (CP) to reduce length of stay (LOS), readmissions, and discharge to postacute care facilities. This practice implemented a standardized CP focused on patient education, managing patient expectations, and maximizing cost outcomes. We retrospectively reviewed all primary total joint arthroplasty patients during the initial 2-year "at risk" period for both BPCI and API and determined discharge disposition, LOS, and readmission rate. During the "at risk" period, the average LOS decreased in our total joint arthroplasty patients, and more than 94% of our patients were discharged home. Patients within the BPCI group had a decreased discharge to home and decreased readmission rates after total hip arthroplasty, but also tended to be older than both API and nonbundled payment patients. 
While participating in the BPCI and API, continued use of a standardized CP in a high-performing, high-volume total joint practice resulted in maintenance of a low average LOS. In addition, BPCI patients had similar outcomes after total knee arthroplasty, but had decreased rates of discharge to home and readmission after total hip arthroplasty. Copyright © 2017 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1110844-fuel-economy-co2-emissions-standards-manufacturer-pricing-strategies-feebates','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1110844-fuel-economy-co2-emissions-standards-manufacturer-pricing-strategies-feebates"><span>FUEL ECONOMY AND CO2 EMISSIONS STANDARDS, MANUFACTURER PRICING STRATEGIES, AND FEEBATES</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Liu, Changzheng; Greene, David L; Bunch, Dr David S.</p> <p>2012-01-01</p> <p>Corporate Average Fuel Economy (CAFE) standards and CO2 emissions standards for 2012 to 2016 have significantly increased the stringency of requirements for new light-duty vehicle fuel efficiency. This study investigates the role of technology adoption and pricing strategies in meeting new standards, as well as the impact of feebate policies. The analysis is carried out by means of a dynamic optimization model that simulates manufacturer decisions with the objective of maximizing social surplus while simultaneously considering consumer response and meeting CAFE and emissions standards. The results indicate that technology adoption plays the major role and that the provision of compliance flexibility and the availability of cost-effective advanced technologies help manufacturers reduce the need for pricing to induce changes in the mix of vehicles sold. 
Feebates, when implemented along with fuel economy and emissions standards, can bring additional fuel economy improvement and emissions reduction, but the benefit diminishes with the increasing stringency of the standards.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013APS..SHK.W5001K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013APS..SHK.W5001K"><span>Probing planetary interiors: Shock compression of water to 700 GPa and 3.8 g/cc, and recent high precision Hugoniot measurements of deuterium</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Knudson, Marcus</p> <p>2013-06-01</p> <p>The past several years have seen a tremendous increase in the number of identified extra-solar planetary systems. Our understanding of the formation of these systems is tied to our understanding of the internal structure of these exoplanets, which in turn relies upon equations of state of light elements and compounds such as water and hydrogen. Here we present shock compression data for water, with unprecedented accuracy, that show commonly used models for water in planetary modeling significantly overestimate its compressibility at conditions relevant to planetary interiors. Furthermore, we show that its behavior at these conditions, including reflectivity and isentropic response, is well described by a recent first-principles based equation of state. These findings advocate the use of this model as the standard for modeling Neptune, Uranus, and ``hot Neptune'' exoplanets, and should contribute to improved understanding of the interior structure of these planets, and perhaps improved understanding of formation mechanisms of planetary systems. 
We also present very recent experiments on deuterium that have taken advantage of continued improvements in both the experimental configuration and the understanding of the quartz shock standard to obtain Hugoniot data with a significant increase in precision. These data provide a stringent test for the equation of state of hydrogen and its isotopes. Sandia is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the US Department of Energy's National Nuclear Security Administration under Contract No. DE-AC04-94AL85000.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20100030622&hterms=storm+water+quality&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dstorm%2Bwater%2Bquality','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20100030622&hterms=storm+water+quality&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dstorm%2Bwater%2Bquality"><span>NASA-Modified Precipitation Products to Improve EPA Nonpoint Source Water Quality Modeling for the Chesapeake Bay</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Nigro, Joseph; Toll, David; Partington, Ed; Ni-Meister, Wenge; Lee, Shihyan; Gutierrez-Magness, Angelica; Engman, Ted; Arsenault, Kristi</p> <p>2010-01-01</p> <p>The Environmental Protection Agency (EPA) has estimated that over 20,000 water bodies within the United States do not meet water quality standards. Ninety percent of the impairments are typically caused by nonpoint sources. One of the regulations in the Clean Water Act of 1972 requires States to monitor the Total Maximum Daily Load (TMDL), or the amount of pollution that can be carried by a water body before it is determined to be "polluted", for any watershed in the U.S. 
In response to this mandate, the EPA developed Better Assessment Science Integrating Nonpoint Sources (BASINS) as a Decision Support Tool (DST) for assessing pollution and to guide the decision making process for improving water quality. One of the models in BASINS, the Hydrological Simulation Program -- Fortran (HSPF), computes daily stream flow rates and pollutant concentration at each basin outlet. By design, precipitation and other meteorological data from weather stations serve as standard model input. In practice, these stations may be unable to capture the spatial heterogeneity of precipitation events especially if they are few and far between. An attempt was made to resolve this issue by substituting station data with NASA modified/NOAA precipitation data. Using these data within HSPF, stream flow was calculated for seven watersheds in the Chesapeake Bay Basin during low flow periods, convective storm periods, and annual flows. In almost every case, the modeling performance of HSPF increased when using the NASA-modified precipitation data, resulting in better stream flow statistics and, ultimately, in improved water quality assessment.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20100030620&hterms=improvement+products&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dimprovement%2Bproducts','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20100030620&hterms=improvement+products&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D50%26Ntt%3Dimprovement%2Bproducts"><span>Development and Evaluation of a Cloud-Gap-Filled MODIS Daily Snow-Cover Product</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Hall, Dorothy K.; Riggs, George A.; Foster, James L.; Kumar, Sujay V.</p> <p>2010-01-01</p> <p>The utility of the Moderate Resolution Imaging 
Spectroradiometer (MODIS) snow-cover products is limited by cloud cover, which causes gaps in the daily snow-cover map products. We describe a cloud-gap-filled (CGF) daily snow-cover map using a simple algorithm to track cloud persistence, to account for the uncertainty created by the age of the snow observation. Developed from the 0.05° resolution climate-modeling grid daily snow-cover product, MOD10C1, each grid cell of the CGF map provides a cloud-persistence count (CPC) that tells whether the current or a prior day was used to make the snow decision. The percentage of grid cells "observable" is shown to increase dramatically when prior days are considered. The effectiveness of the CGF product is evaluated by conducting a suite of data assimilation experiments using the community Noah land surface model in the NASA Land Information System (LIS) framework. The Noah model forecasts of snow conditions, such as snow-water equivalent (SWE), are updated based on the observations of snow cover, which are obtained either from the MOD10C1 standard product or the new CGF product. The assimilation integrations using the CGF maps provide a domain-averaged bias improvement of -11%, whereas such improvement using the standard MOD10C1 maps is -3%. These improvements suggest that the Noah model underestimates SWE and snow depth fields, and that the assimilation integrations contribute to correcting this systematic error. 
We conclude that the gap-filling strategy is an effective approach for increasing cloud-free observations of snow cover.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19920024107&hterms=Petit&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DPetit','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19920024107&hterms=Petit&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3DPetit"><span>The need for GPS standardization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine</p> <p>1992-01-01</p> <p>A desirable and necessary step for improvement of the accuracy of Global Positioning System (GPS) time comparisons is the establishment of common GPS standards. For this reason, the CCDS proposed the creation of a special group of experts with the objective of recommending procedures and models for operational time transfer by GPS common-view method. Since the announcement of the implementation of Selective Availability at the end of last spring, action has become much more urgent and this CCDS Group on GPS Time Transfer Standards has now been set up. It operates under the auspices of the permanent CCDS Working Group on TAI and works in close cooperation with the Sub-Committee on Time of the Civil GPS Service Interface Committee (CGSIC). Taking as an example the implementation of SA during the first week of July 1991, this paper illustrates the need to develop urgently at least two standardized procedures in GPS receiver software: monitoring GPS tracks with a common time scale and retaining broadcast ephemeris parameters throughout the duration of a track. 
Other matters requiring action are the adoption of common models for atmospheric delay, a common approach to hardware design and agreement about short-term data processing. Several examples of such deficiencies in standardization are presented.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26528570','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26528570"><span>Synthetic Biology Open Language (SBOL) Version 2.0.0.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bartley, Bryan; Beal, Jacob; Clancy, Kevin; Misirli, Goksel; Roehner, Nicholas; Oberortner, Ernst; Pocock, Matthew; Bissell, Michael; Madsen, Curtis; Nguyen, Tramy; Zhang, Zhen; Gennari, John H; Myers, Chris; Wipat, Anil; Sauro, Herbert</p> <p>2015-09-04</p> <p>Synthetic biology builds upon the techniques and successes of genetics, molecular biology, and metabolic engineering by applying engineering principles to the design of biological systems. The field still faces substantial challenges, including long development times, high rates of failure, and poor reproducibility. One method to ameliorate these problems would be to improve the exchange of information about designed systems between laboratories. The Synthetic Biology Open Language (SBOL) has been developed as a standard to support the specification and exchange of biological design information in synthetic biology, filling a need not satisfied by other pre-existing standards. This document details version 2.0 of SBOL, introducing a standardized format for the electronic exchange of information on the structural and functional aspects of biological designs. The standard has been designed to support the explicit and unambiguous description of biological designs by means of a well defined data model. 
The standard also includes rules and best practices on how to use this data model and populate it with relevant design details. The publication of this specification is intended to make these capabilities more widely accessible to potential developers and users in the synthetic biology community and beyond.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28179374','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28179374"><span>The effects of the lower ignition propensity cigarettes standard in Estonia: time-series analysis.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Saar, Indrek</p> <p>2018-02-01</p> <p>In 2011, the lower ignition propensity (LIP) standard for cigarettes was implemented in the European Union. Evidence about the impact of that safety measure is scarce. The aim of this paper is to examine the effects of the LIP standard on fire safety in Estonia. The absolute level of smoking-related fire incidents and related deaths was modelled using dynamic time-series regression analysis. The data about house fire incidents for the 2007-2013 period were obtained from the Estonian Rescue Board. Implementation of the LIP standard has reduced the monthly level of smoking-related fires by 6.2 (p<0.01, SE=1.95) incidents and by 26% (p<0.01, SE=9%) when estimated on the log scale. Slightly weaker evidence was found about the fatality reduction effects of the LIP regulation. All results were confirmed through counterfactual models for non-smoking-related fire incidents and deaths. This paper indicates that implementation of the LIP cigarettes standard has improved fire safety in Estonia. Published by the BMJ Publishing Group Limited. 
For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/16334699','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/16334699"><span>Effects and modeling of phonetic and acoustic confusions in accented speech.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fung, Pascale; Liu, Yi</p> <p>2005-11-01</p> <p>Accented speech recognition is more challenging than standard speech recognition due to the effects of phonetic and acoustic confusions. Phonetic confusion in accented speech occurs when an expected phone is pronounced as a different one, which leads to erroneous recognition. Acoustic confusion occurs when the pronounced phone is found to lie acoustically between two baseform models and can be equally recognized as either one. We propose that it is necessary to analyze and model these confusions separately in order to improve accented speech recognition without degrading standard speech recognition. Since low phonetic confusion units in accented speech do not give rise to automatic speech recognition errors, we focus on analyzing and reducing phonetic and acoustic confusability under high phonetic confusion conditions. We propose using a likelihood ratio test to measure phonetic confusion, and an asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are used in an augmented pronunciation dictionary, while phonetic units with high acoustic confusion are reconstructed using decision tree merging. 
Experimental results show that our approach is effective and superior to methods modeling phonetic confusion or acoustic confusion alone in accented speech, with a significant 5.7% absolute WER reduction, without degrading standard speech recognition.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007IJBm...51..169V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007IJBm...51..169V"><span>Validation of an individualised model of human thermoregulation for predicting responses to cold air</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van Marken Lichtenbelt, Wouter D.; Frijns, Arjan J. H.; van Ooijen, Marieke J.; Fiala, Dusan; Kester, Arnold M.; van Steenhoven, Anton A.</p> <p>2007-01-01</p> <p>Most computer models of human thermoregulation are population based. Here, we individualised the Fiala model [Fiala et al. (2001) Int J Biometeorol 45:143–159] with respect to anthropometrics, body fat, and metabolic rate. The predictions of the adapted multisegmental thermoregulatory model were compared with measured skin temperatures of individuals. Data from two experiments, in which reclining subjects were suddenly exposed to mild to moderate cold environmental conditions, were used to study the effect on dynamic skin temperature responses. Body fat was measured by the three-compartment method combining underwater weighing and deuterium dilution. Metabolic rate was determined by indirect calorimetry. In experiment 1, the bias (mean difference) between predicted and measured mean skin temperature decreased from 1.8°C to -0.15°C during cold exposure. The standard deviation of the mean difference remained of the same magnitude (from 0.7°C to 0.9°C). 
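The "bias" and spread reported in the thermoregulation study are simply the mean and sample standard deviation of the predicted-minus-measured differences. A toy computation (the temperatures below are hypothetical, not the study's measurements):

```python
# Bias (mean difference) and spread (sample SD of the difference) between
# model predictions and measurements. Values are hypothetical examples.
import numpy as np

predicted = np.array([33.1, 32.4, 31.8, 30.9, 30.2])  # °C, hypothetical
measured  = np.array([33.0, 32.6, 32.1, 31.0, 30.6])  # °C, hypothetical

diff = predicted - measured
bias = diff.mean()            # systematic error (mean difference)
spread = diff.std(ddof=1)     # sample SD of the difference
```

A bias near zero with a small spread is the pattern the individualised model achieved in experiment 1.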
In experiment 2 the bias of the skin temperature changed from 2.0±1.09°C using the standard model to 1.3±0.93°C using individual characteristics in the model. The inclusion of individual characteristics thus improved the predictions for an individual and led to a significantly smaller systematic error. However, a large part of the discrepancies in individual response to cold remained unexplained. Possible further improvements to the model accomplished by inclusion of more subject characteristics (i.e. body fat distribution, body shape) and model refinements on the level of (skin) blood perfusion, and control functions, are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1198062-improved-parallel-hashed-oct-tree-body-algorithm-cosmological-simulation','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1198062-improved-parallel-hashed-oct-tree-body-algorithm-cosmological-simulation"><span>2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Warren, Michael S.</p> <p>2014-01-01</p> <p>We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2<sup>18</sup>) processors. We present error analysis and scientific application results from a series of more than ten 69-billion (4096<sup>3</sup>) particle cosmological simulations, accounting for 4×10<sup>20</sup> floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. 
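The core idea behind hashed oct-tree codes such as HOT/2HOT can be summed up by the classic Barnes-Hut cell-opening test. The function below is a simplified illustrative stand-in, not 2HOT's actual multipole-acceptance criterion:

```python
# Illustrative Barnes-Hut-style cell-opening test: a cell of size s at
# distance d is treated as one pseudo-particle when s/d < theta.
# theta = 0.5 is a common textbook choice, not a 2HOT setting.
def accept_cell(size: float, distance: float, theta: float = 0.5) -> bool:
    """Return True if the cell subtends a small enough angle to approximate."""
    return distance > 0.0 and size / distance < theta
```

Smaller theta opens more cells, trading speed for accuracy; the error analysis in such codes quantifies exactly this trade-off.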
Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013CoPhC.184.1220L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013CoPhC.184.1220L"><span>CPsuperH2.3: An updated tool for phenomenology in the MSSM with explicit CP violation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.</p> <p>2013-04-01</p> <p>We describe the Fortran code CPsuperH2.3, which incorporates the following updates compared with its predecessor CPsuperH2.0. It implements improved calculations of the Higgs-boson masses and mixing including stau contributions and finite threshold effects on the tau-lepton Yukawa coupling. It incorporates the LEP limits on the processes e+e-→HiZ,HiHj and the CMS limits on Hi→τ¯τ obtained from 4.6 fb<sup>-1</sup> of data at a center-of-mass energy of 7 TeV. It also includes the decay mode Hi→Zγ and the Schiff-moment contributions to the electric dipole moments of Mercury and Radium 225, with several calculational options for the case of Mercury. These additions make CPsuperH2.3 a suitable tool for analyzing possible CP-violating effects in the MSSM in the era of the LHC and a new generation of EDM experiments. Program summary: Catalogue identifier: ADSR_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSR_v3_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24058 No. 
of bytes in distributed program, including test data, etc.: 158721 Distribution format: tar.gz Programming language: Fortran77. Computer: PC running under Linux and computers in Unix environment. Operating system: Linux. RAM: 32 MB Classification: 11.1. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADSR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 180(2009)312 Nature of problem: The calculations of mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark and τ-lepton Yukawa-coupling resummation effects and improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. Also included are a full treatment of the 4×4 (2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, and an integrated treatment of several B-meson observables. The new implementations include the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon, as well as the anomalous magnetic moment of muon, (gμ-2), the top-quark decays, improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, and the CMS limits on Hi→ττ¯. It also implements the decay mode Hi→Zγ and includes the corresponding Standard Model branching ratios of the three neutral Higgs bosons in the array GAMBRN(IM,IWB = 2,IH). 
Solution method: One-dimensional numerical integration for several Higgs-decay modes and EDMs, iterative treatment of the threshold corrections and Higgs-boson pole masses, and the numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide the full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (gμ-2), improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, the CMS limits on Hi→ττ¯, the top-quark decays, Hi→Zγ decay, and the corresponding Standard Model branching ratios of the three neutral Higgs bosons. Summary of revisions: Full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (gμ-2). Improved treatment of Higgs-boson masses and mixing including stau contributions. The LEP limits. The CMS limits on Hi→ττ¯. The top-quark decays. The Hi→Zγ decay. The corresponding Standard Model branching ratios of the three neutral Higgs bosons. Running time: Less than 1.0 s.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1155812','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1155812"><span>CPsuperH2.3: an Updated Tool for Phenomenology in the MSSM with Explicit CP Violation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lee, J.S.; Carena, M.; Ellis, J.</p> <p>2013-04-01</p> <p>We describe the Fortran code CPsuperH2.3, which incorporates the following updates compared with its predecessor CPsuperH2.0. It implements improved calculations of the Higgs-boson masses and mixing including stau contributions and finite threshold effects on the tau-lepton Yukawa coupling. 
It incorporates the LEP limits on the processes e+e-→HiZ,HiHj and the CMS limits on Hi→τ¯τ obtained from 4.6 fb<sup>-1</sup> of data at a center-of-mass energy of 7 TeV. It also includes the decay mode Hi→Zγ and the Schiff-moment contributions to the electric dipole moments of Mercury and Radium 225, with several calculational options for the case of Mercury. These additions make CPsuperH2.3 a suitable tool for analyzing possible CP-violating effects in the MSSM in the era of the LHC and a new generation of EDM experiments. Program summary: Program title: CPsuperH2.3 Catalogue identifier: ADSR_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSR_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24058 No. of bytes in distributed program, including test data, etc.: 158721 Distribution format: tar.gz Programming language: Fortran77. Computer: PC running under Linux and computers in Unix environment. Operating system: Linux. RAM: 32 MB Classification: 11.1. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADSR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 180(2009)312 Nature of problem: The calculations of mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark and τ-lepton Yukawa-coupling resummation effects and improved treatment of Higgs-boson pole-mass shifts. 
The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. Also included are a full treatment of the 4×4 (2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, and an integrated treatment of several B-meson observables. The new implementations include the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon, as well as the anomalous magnetic moment of muon, (gμ-2), the top-quark decays, improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, and the CMS limits on Hi→ττ¯. It also implements the decay mode Hi→Zγ and includes the corresponding Standard Model branching ratios of the three neutral Higgs bosons in the array GAMBRN(IM,IWB = 2,IH). Solution method: One-dimensional numerical integration for several Higgs-decay modes and EDMs, iterative treatment of the threshold corrections and Higgs-boson pole masses, and the numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide the full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (gμ-2), improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, the CMS limits on Hi→ττ¯, the top-quark decays, Hi→Zγ decay, and the corresponding Standard Model branching ratios of the three neutral Higgs bosons. Summary of revisions: Full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (gμ-2). Improved treatment of Higgs-boson masses and mixing including stau contributions. The LEP limits. The CMS limits on Hi→ττ¯. The top-quark decays. The Hi→Zγ decay. The corresponding Standard Model branching ratios of the three neutral Higgs bosons. 
Running time: Less than 1.0 s.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JCAP...05..007K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JCAP...05..007K"><span>Domain walls in the extensions of the Standard Model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Krajewski, Tomasz; Lalak, Zygmunt; Lewicki, Marek; Olszewski, Paweł</p> <p>2018-05-01</p> <p>Our main interest is the evolution of domain walls of the Higgs field in the early Universe. The aim of this paper is to understand how the dynamics of Higgs domain walls could be influenced by yet unknown interactions from beyond the Standard Model. We assume that the Standard Model is valid up to a certain high energy scale Λ and use the framework of the effective field theory to describe physics below that scale. Performing numerical simulations with different values of the scale Λ we are able to extend our previous analysis [1]. Our recent numerical simulations show that evolution of Higgs domain walls is rather insensitive to interactions beyond the Standard Model as long as masses of new particles are greater than 10<sup>12</sup> GeV. For lower values of Λ the RG improved effective potential is strongly modified at field strengths crucial to the evolution of domain walls. However, we find that even for low values of Λ, Higgs domain walls decayed shortly after their formation for generic initial conditions. On the other hand, in simulations with specifically chosen initial conditions Higgs domain walls can live longer and enter the scaling regime. We also determine the energy spectrum of gravitational waves produced by decaying domain walls of the Higgs field. 
For generic initial field configurations the amplitude of the signal is too small to be observed in planned detectors.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/22329431','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/22329431"><span>Nutrition standards for away-from-home foods in the USA.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cohen, D A; Bhatia, R</p> <p>2012-07-01</p> <p>Away-from-home foods are regulated with respect to the prevention of food-borne diseases and potential contaminants, but not for their contribution to dietary-related chronic diseases. Away-from-home foods have more calories, salt, sugar and fat, and include fewer fruits and vegetables than recommended by national nutrition guidelines. Thus, frequent consumption of away-from-home foods contributes to obesity, hypertension, diabetes, heart disease, and cancer. In light of this, many localities are already adopting regulations or sponsoring programs to improve the quality of away-from-home foods. We review the rationale for developing nutritional performance standards for away-from-home foods in light of limited human capacity to regulate intake or physiologically compensate for a poor diet. We offer a set of model performance standards to be considered as a new area of environmental regulation. Models for voluntary implementation of consumer standards exist in the environmental domain and may be useful templates for implementation. Implementing such standards, whether voluntarily or via regulations, will require addressing a number of practical and ideological challenges. Politically, regulatory standards contradict the belief that adults should be able to navigate dietary risks in away-from-home settings unaided. © 2012 The Authors. 
obesity reviews © 2012 International Association for the Study of Obesity.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>