Sample records for improved reliability modeling

  1. Creating High Reliability in Health Care Organizations

    PubMed Central

    Pronovost, Peter J; Berenholtz, Sean M; Goeschel, Christine A; Needham, Dale M; Sexton, J Bryan; Thompson, David A; Lubomski, Lisa H; Marsteller, Jill A; Makary, Martin A; Hunt, Elizabeth

    2006-01-01

    Objective: The objective of this paper was to present a comprehensive approach to help health care organizations reliably deliver effective interventions. Context: Reliability in health care translates into using valid rate-based measures. Yet high reliability organizations have proven that the context in which care is delivered, called organizational culture, also has important influences on patient safety. Model for Improvement: Our model to improve reliability, which also includes interventions to improve culture, focuses on valid rate-based measures. This model includes (1) identifying evidence-based interventions that improve the outcome, (2) selecting the interventions with the most impact on outcomes and converting them into behaviors, (3) developing measures to evaluate reliability, (4) measuring baseline performance, and (5) ensuring patients receive the evidence-based interventions. The comprehensive unit-based safety program (CUSP) is used to improve culture and guide organizations in learning from mistakes that are important, but cannot be measured as rates. Conclusions: We present how this model was used in over 100 intensive care units in Michigan to improve culture and eliminate catheter-related blood stream infections; both were accomplished. Our model differs from existing models in that it incorporates efforts to improve a vital component for system redesign (culture), targets three important groups (senior leaders, team leaders, and front line staff), and facilitates change management (engage, educate, execute, and evaluate) for planned interventions. PMID:16898981

  2. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
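
    The patent's actual model format is not given in this record, but the aggregation idea can be sketched in a few lines of Python. Everything below is illustrative: the component names, failure rates, and the series/parallel architecture grammar are invented stand-ins for the generator's low-level models and system architecture description.

      import math

      # Low-level reliability models: constant failure rates (per hour).
      FAILURE_RATES = {"cpu": 2e-6, "bus": 5e-7, "mem": 1e-6}

      def component_reliability(name, t):
          """Exponential low-level model: R(t) = exp(-lambda * t)."""
          return math.exp(-FAILURE_RATES[name] * t)

      def system_reliability(arch, t):
          """Aggregate an architecture description recursively. arch is a
          component name, ("series", [...]) or ("parallel", [...])."""
          if isinstance(arch, str):
              return component_reliability(arch, t)
          kind, parts = arch
          rs = [system_reliability(p, t) for p in parts]
          if kind == "series":            # every part must survive
              prod = 1.0
              for r in rs:
                  prod *= r
              return prod
          if kind == "parallel":          # at least one part must survive
              prod = 1.0
              for r in rs:
                  prod *= 1.0 - r
              return 1.0 - prod
          raise ValueError(kind)

      # Duplex memory behind a single CPU and bus, at 10,000 hours.
      arch = ("series", ["cpu", "bus", ("parallel", ["mem", "mem"])])
      print(system_reliability(arch, 1e4))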

  3. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
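
    As a sketch of the kind of Markov reliability model such a tool would construct automatically, consider a triplex unit with per-component failure rate lam and imperfect fault coverage c. The states, rates, and coverage value below are invented for illustration, not taken from the paper.

      import numpy as np
      from scipy.linalg import expm

      lam, c = 1e-4, 0.99    # per-hour failure rate, coverage probability

      # Generator matrix Q (rows sum to zero); states: 3 good, 2 good,
      # 1 good, F (system failure). Uncovered faults jump straight to F.
      Q = np.array([
          [-3*lam, 3*lam*c, 0.0,     3*lam*(1-c)],
          [0.0,   -2*lam,   2*lam*c, 2*lam*(1-c)],
          [0.0,    0.0,    -lam,     lam        ],
          [0.0,    0.0,     0.0,     0.0        ]])

      p0 = np.array([1.0, 0.0, 0.0, 0.0])   # all three units good at t = 0
      t = 1000.0                             # mission time, hours
      p = p0 @ expm(Q * t)                   # state probabilities at time t
      print("R(t) =", 1.0 - p[-1])           # reliability = 1 - P(F)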

  4. Constraining uncertainties in water supply reliability in a tropical data scarce basin

    NASA Astrophysics Data System (ADS)

    Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte

    2015-04-01

    Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability is often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela with an area of approximately 2100 km² was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a more complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
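
    The propagation step can be illustrated with a toy version of the extended framework: Fu's form of the Budyko curve with an uncertain catchment parameter w, pushed through Monte Carlo sampling to a supply reliability. The rainfall, evaporation, demand, and parameter range below are invented, not the paper's basin data.

      import numpy as np

      rng = np.random.default_rng(1)
      P, PET = 180.0, 120.0    # monthly precipitation / potential ET, mm
      demand = 90.0            # monthly demand expressed as mm over the basin

      def budyko_runoff(P, PET, w):
          """Fu's Budyko curve: actual ET from the aridity index; Q = P - E."""
          E = P * (1.0 + PET / P - (1.0 + (PET / P) ** w) ** (1.0 / w))
          return P - E

      w_samples = rng.uniform(1.5, 3.5, size=10_000)   # parameter uncertainty
      Q = budyko_runoff(P, PET, w_samples)
      print("P(supply meets demand) =", np.mean(Q >= demand))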

  5. Improving Metrological Reliability of Information-Measuring Systems Using Mathematical Modeling of Their Metrological Characteristics

    NASA Astrophysics Data System (ADS)

    Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.

    2018-05-01

    Algorithms for improving the metrological reliability of the analogue blocks of measuring channels in information-measuring systems are developed. The proposed algorithms ensure optimum values of the metrological reliability indices for a given analogue circuit block solution.

  6. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits.

    PubMed

    Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G

    2017-12-05

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions, leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, a group of 50, 100, or 200 adjacent SNPs, one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using a deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models, and this gain depended on segment size and the genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters, and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.

  7. A Flexible Latent Class Approach to Estimating Test-Score Reliability

    ERIC Educational Resources Information Center

    van der Palm, Daniël W.; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational…

  8. Bi-Factor Multidimensional Item Response Theory Modeling for Subscores Estimation, Reliability, and Classification

    ERIC Educational Resources Information Center

    Md Desa, Zairul Nor Deana

    2012-01-01

    In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…

  9. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    NASA Astrophysics Data System (ADS)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

    This paper presents the usage of a Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through the assessment of their unidimensionality, validity, reliability, and fitness indices, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259; indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also expect BIM adoption to accelerate their cost estimating tasks.

  10. Towards cost-effective reliability through visualization of the reliability option space

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2004-01-01

    In planning a complex system's development there can be many options to improve its reliability. Typically their sum total cost exceeds the budget available, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.
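
    Stripped to its core, the selection problem is a 0/1 knapsack: maximize the modeled reliability benefit subject to the budget. Below is a brute-force sketch with invented options and scores; in practice, a reliability model would supply the benefit numbers.

      from itertools import combinations

      options = [                 # (name, cost in $K, modeled risk reduction)
          ("redundant sensor", 120, 30),
          ("extra testing",     80, 22),
          ("radiation shield", 200, 45),
          ("spare actuator",   150, 28),
      ]
      budget = 300

      best = (0, ())
      for r in range(1, len(options) + 1):
          for subset in combinations(options, r):
              cost = sum(o[1] for o in subset)
              gain = sum(o[2] for o in subset)
              if cost <= budget and gain > best[0]:
                  best = (gain, tuple(o[0] for o in subset))
      print(best)   # -> (67, ('extra testing', 'radiation shield'))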

  11. Proposed Reliability/Cost Model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  12. Chapter 15: Reliability of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Shuangwen; O'Connor, Ryan

    The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals for wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken, and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.
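
    A minimal sketch of the life data analysis the chapter describes: fit a two-parameter Weibull to component times-to-failure and read off the shape parameter (beta < 1 suggests infant mortality, beta > 1 wear-out). The failure times below are fabricated, not wind plant data.

      import numpy as np
      from scipy.stats import weibull_min

      hours_to_failure = np.array([8200, 11500, 14100, 9800, 16700,
                                   12900, 10400, 15300, 13600, 9100])

      # Fix location at zero so only shape (beta) and scale (eta) are fitted.
      beta, loc, eta = weibull_min.fit(hours_to_failure, floc=0)
      print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")

      # Probability a component survives 10,000 hours under this model.
      print("R(10,000 h) =", weibull_min.sf(10_000, beta, loc, eta))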

  13. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    PubMed

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  14. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new software models of the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.
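
    As one concrete example of the classic growth-model family such tools implement, the Goel-Okumoto model mu(t) = a(1 - exp(-b t)) can be fitted to cumulative failure counts from testing; the weekly counts below are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      weeks = np.arange(1, 11, dtype=float)
      cum_failures = np.array([12, 21, 28, 34, 38, 42, 44, 46, 47, 48],
                              dtype=float)

      def goel_okumoto(t, a, b):
          """Expected cumulative failures: a = total faults, b = detection rate."""
          return a * (1.0 - np.exp(-b * t))

      (a, b), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(50.0, 0.3))
      print(f"estimated total faults a = {a:.1f}, rate b = {b:.3f}")
      print("predicted failures by week 15:", goel_okumoto(15.0, a, b))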

  15. Enhancing model prediction reliability through improved soil representation and constrained model auto calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  16. Comparing the reliability of related populations with the probability of agreement

    DOE PAGES

    Stevens, Nathaniel T.; Anderson-Cook, Christine M.

    2016-07-26

    Combining information from different populations to improve precision, simplify future predictions, or improve underlying understanding of relationships can be advantageous when considering the reliability of several related sets of systems. Using the probability of agreement to help quantify the similarities of populations can help to give a realistic assessment of whether the systems have reliabilities that are sufficiently similar for practical purposes to be treated as a homogeneous population. In addition, the new method is described and illustrated with an example involving two generations of a complex system where the reliability is modeled using either a logistic or probit regression model. Note that supplementary materials including code, datasets, and added discussion are available online.
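
    The probability-of-agreement computation reduces to a simple Monte Carlo once draws of each population's reliability are available (from bootstrap resamples or a posterior). The sketch below uses invented beta draws in place of fitted logistic/probit models and an illustrative agreement margin delta.

      import numpy as np

      rng = np.random.default_rng(0)
      delta = 0.02    # difference small enough to treat populations as one

      # Stand-ins for reliability draws from two fitted generation models.
      r_gen1 = rng.beta(95, 5, size=20_000)
      r_gen2 = rng.beta(92, 8, size=20_000)

      prob_agreement = np.mean(np.abs(r_gen1 - r_gen2) < delta)
      print(f"P(|R1 - R2| < {delta}) = {prob_agreement:.2f}")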

  17. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error coding, modularized sparing, massive replication and other fault-tolerant techniques. From these models, sets of reliability and coverage equations for the systems were derived. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
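
    The flavor of such reliability and coverage equations can be shown for the simplest configuration of this kind: a nonrepairable module with one cold spare and imperfect fault coverage c, for which R(t) = e^(-lambda*t) + c*lambda*t*e^(-lambda*t), assuming the idle spare does not fail. The parameters below are invented.

      import math

      def spared_reliability(lam, t, c):
          """Primary plus one cold spare with coverage c: survive outright,
          or fail once, recover with probability c, and the spare survives."""
          return math.exp(-lam * t) + c * lam * t * math.exp(-lam * t)

      lam, t = 1e-5, 20_000.0
      for c in (0.90, 0.99, 1.00):
          print(f"c = {c:.2f}: R = {spared_reliability(lam, t, c):.5f}")

    Varying c in this sketch illustrates the paper's broader point that coverage, not just the degree of replication, drives the achievable reliability gain.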

  18. Validation and Improvement of Reliability Methods for Air Force Building Systems

    DTIC Science & Technology

    focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force ... probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a ... Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that

  19. Module Degradation Mechanisms Studied by a Multi-Scale Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter

    2016-11-21

    A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.

  1. Improving Water Level and Soil Moisture Over Peatlands in a Global Land Modeling System

    NASA Technical Reports Server (NTRS)

    Bechtold, M.; De Lannoy, G. J. M.; Roose, D.; Reichle, R. H.; Koster, R. D.; Mahanama, S. P.

    2017-01-01

    A new model structure for peatlands results in improved skill metrics (without any parameter calibration). Simulated surface soil moisture is strongly affected by the new model, but reliable soil moisture data are lacking for validation.

  2. Failure rate and reliability of the KOMATSU hydraulic excavator in surface limestone mine

    NASA Astrophysics Data System (ADS)

    Harish Kumar N., S.; Choudhary, R. P.; Murthy, Ch. S. N.

    2018-04-01

    A model with a bathtub-shaped failure rate function is helpful in the reliability analysis of any system, and particularly in reliability-associated preventive maintenance. The usual Weibull distribution is, however, not capable of modelling the complete lifecycle of any system with a bathtub-shaped failure rate function. In this paper, a failure rate and reliability analysis of the KOMATSU hydraulic excavator/shovel in a surface mine is presented, with the aim of improving the reliability and decreasing the failure rate of each subsystem of the shovel based on preventive maintenance. The bathtub-shaped model for the shovel can also be seen as a simplification of the Weibull distribution.

  3. Learning reliable manipulation strategies without initial physical models

    NASA Technical Reports Server (NTRS)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  4. Improving reliability of a residency interview process.

    PubMed

    Peeters, Michael J; Serres, Michelle L; Gundrum, Todd E

    2013-10-14

    To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity.

  5. Real-time emergency forecasting technique for situation management systems

    NASA Astrophysics Data System (ADS)

    Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.

    2018-05-01

    The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasting results of any emergency computational model applied for decision making in situation management systems. The computational models are improved by the Improved Brown's method, which applies fractal dimension to forecast short time series data received from sensors and control systems. The reliability of the emergency forecasting results is ensured by filtering out invalid sensed data according to the methods of correlation analysis.

  6. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    NASA Astrophysics Data System (ADS)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    An aero-engine is a complex mechanical-electronic system; in the reliability analysis of such systems, the Weibull distribution model plays an irreplaceable role. Until now, only the two-parameter and three-parameter Weibull distribution models have been widely used. Due to the diversity of engine failure modes, a single Weibull distribution model carries a large error. By contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, so it is a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation more accurate, thus greatly improving the precision of the mixed distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.
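
    A two-component mixture makes the idea concrete: each failure mode gets its own Weibull and a weight coefficient mixes them. The parameters below are invented; the paper's dynamic weight coefficient and correlation-based optimization would replace the fixed w used here.

      import numpy as np

      def weibull_reliability(t, beta, eta):
          return np.exp(-(t / eta) ** beta)

      def mixed_reliability(t, w, mode1, mode2):
          """R(t) = w * R1(t) + (1 - w) * R2(t), with 0 <= w <= 1."""
          return (w * weibull_reliability(t, *mode1)
                  + (1 - w) * weibull_reliability(t, *mode2))

      t = np.linspace(10, 5000, 5)
      # Early-life mode (beta < 1) mixed with a wear-out mode (beta > 1).
      print(mixed_reliability(t, w=0.3, mode1=(0.7, 800.0),
                              mode2=(2.5, 3000.0)))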

  7. Study of complete interconnect reliability for a GaAs MMIC power amplifier

    NASA Astrophysics Data System (ADS)

    Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao

    2018-05-01

    By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, a complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions is achieved effectively in this article. As an example, an MMIC PA is modelled to study the electromigration failure of its interconnects. This is the first study of interconnect reliability for an MMIC PA under DC and AC operating conditions simultaneously. By training on the data from the FEA, a high-accuracy ANN model for PA reliability is constructed. Then, based on the reliability database obtained from the ANN model, important guidance can be given for improving the reliability design of the IC.

  8. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity usage behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, an IBDR dispatch model considering the customers' comprehensive assessment and a customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
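
    A drastically simplified Monte Carlo sketch of the evaluation idea: sample generation outages, let IBDR curtail part of the load before a loss-of-load event is counted, and compare loss-of-load probabilities. All capacities, probabilities, and curtailment figures below are invented, and the real method models the ESS, customer response, and dispatch in far more detail.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000
      load = rng.normal(800.0, 100.0, size=n)          # kW, varying demand
      capacity = rng.choice([1000.0, 700.0], size=n,   # DG outage states
                            p=[0.96, 0.04])
      curtailable = 150.0    # kW customers will shed under incentives

      lolp_no_dr = np.mean(capacity < load)
      lolp_dr = np.mean(capacity < load - curtailable)
      print(f"LOLP without IBDR: {lolp_no_dr:.3f}, with IBDR: {lolp_dr:.3f}")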

  9. Reliability of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of power consumption on the reliability of WSNs. PMID:25157553

  10. Improving reliability of aggregation, numerical simulation and analysis of complex systems by empirical data

    NASA Astrophysics Data System (ADS)

    Dobronets, Boris S.; Popova, Olga A.

    2018-05-01

    The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of the aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss the procedures of data aggregation as a preprocessing stage for subsequent regression modeling. An important feature of the study is the demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; to study its properties, the density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.

  11. Reliability based fatigue design and maintenance procedures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1977-01-01

    A stochastic model has been developed to describe the probabilities in the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the probability of a crack of a certain length at a given location after a certain number of cycles or amount of time. Quantitative estimation of the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
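
    The model's central relation can be evaluated numerically: with a varying hazard rate h(t), the probability of no crack by time t is R(t) = exp(-∫0^t h(u) du). The power-law hazard and parameters below are illustrative choices, not the paper's fitted values.

      import math
      from scipy.integrate import quad

      def hazard(t, k=3e-7, m=1.8):
          """Increasing hazard rate per flight hour (made-up parameters)."""
          return k * t ** (m - 1.0)

      def reliability(t):
          integral, _ = quad(hazard, 0.0, t)
          return math.exp(-integral)

      for t in (1_000.0, 5_000.0, 10_000.0):
          print(f"t = {t:>8.0f} h: R = {reliability(t):.4f}, "
                f"P(crack) = {1 - reliability(t):.4f}")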

  12. Improved reliability of wind turbine towers with active tuned mass dampers (ATMDs)

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Breiffni; Sarkar, Saptarshi; Staino, Andrea

    2018-04-01

    Modern multi-megawatt wind turbines are composed of slender, flexible, and lightly damped blades and towers. These components exhibit high susceptibility to wind-induced vibrations. As the size, flexibility and cost of the towers have increased in recent years, the need to protect these structures against damage induced by turbulent aerodynamic loading has become apparent. This paper combines structural dynamic models and probabilistic assessment tools to demonstrate improvements in structural reliability when modern wind turbine towers are equipped with active tuned mass dampers (ATMDs). This study proposes a multi-modal wind turbine model for wind turbine control design and analysis. This study incorporates an ATMD into the tower of this model. The model is subjected to stochastically generated wind loads of varying speeds to develop wind-induced probabilistic demand models for towers of modern multi-megawatt wind turbines under structural uncertainty. Numerical simulations have been carried out to ascertain the effectiveness of the active control system to improve the structural performance of the wind turbine and its reliability. The study constructs fragility curves, which illustrate reductions in the vulnerability of towers to wind loading owing to the inclusion of the damper. Results show that the active controller is successful in increasing the reliability of the tower responses. According to the analysis carried out in this paper, a strong reduction of the probability of exceeding a given displacement at the rated wind speed has been observed.

  13. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  14. Improving Reliability of a Residency Interview Process

    PubMed Central

    Serres, Michelle L.; Gundrum, Todd E.

    2013-01-01

    Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station—impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact from content specificity. PMID:24159209

  15. Long-term reliability of ImPACT in professional ice hockey.

    PubMed

    Echemendia, Ruben J; Bruce, Jared M; Meeuwisse, Willem; Comper, Paul; Aubry, Mark; Hutchison, Michael

    2016-02-01

    This study sought to assess the test-retest reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) across 2-4 year time intervals and evaluate the utility of a newly proposed two-factor (Speed/Memory) model of ImPACT across multiple language versions. Test-retest data were collected from non-concussed National Hockey League (NHL) players across 2-, 3-, and 4-year time intervals. The two-factor model was examined using different language versions (English, French, Czech, Swedish) of the test using a one-year interval, and across 2-4 year intervals using the English version of the test. The two-factor Speed index improved reliability across multiple language versions of ImPACT. The Memory factor also improved but reliability remained below the traditional cutoff of .70 for use in clinical decision-making. ImPACT reliabilities remained low (below .70) regardless of whether the four-composite or the two-factor model was used across 2-, 3-, and 4-year time intervals. The two-factor approach increased ImPACT's one-year reliability over the traditional four-composite model among NHL players. The increased stability in test scores improves the test's ability to detect cognitive changes following injury, which increases the diagnostic utility of the test and allows for better return to play decision-making by reducing the risk of exposing an athlete to additional trauma while the brain may be at a heightened vulnerability to such trauma. Although the Speed Index increases the clinical utility of the test, the stability of the Memory index remains low. Irrespective of whether the two-factor or traditional four-composite approach is used, these data suggest that new baselines should occur on a yearly basis in order to maximize clinical utility.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.; Britt, J.; Birkmire, R.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.

  17. Patient safety in anesthesia: learning from the culture of high-reliability organizations.

    PubMed

    Wright, Suzanne M

    2015-03-01

    There has been an increased awareness of and interest in patient safety and improved outcomes, as well as a growing body of evidence substantiating medical error as a leading cause of death and injury in the United States. According to The Joint Commission, US hospitals demonstrate improvements in health care quality and patient safety. Although this progress is encouraging, much room for improvement remains. High-reliability organizations, industries that deliver reliable performances in the face of complex working environments, can serve as models of safety for our health care system until plausible explanations for patient harm are better understood. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. An improved spanning tree approach for the reliability analysis of supply chain collaborative network

    NASA Astrophysics Data System (ADS)

    Lam, C. Y.; Ip, W. H.

    2012-11-01

    A higher degree of reliability in the collaborative network can increase the competitiveness and performance of an entire supply chain. As supply chain networks grow more complex, the consequences of unreliable behaviour become increasingly severe in terms of cost, effort and time. Moreover, it is computationally difficult to calculate the network reliability of a Non-deterministic Polynomial-time hard (NP-hard) all-terminal network using state enumeration, as this may require a huge number of iterations for topology optimisation. Therefore, this paper proposes an alternative approach, an improved spanning tree for reliability analysis, to help effectively evaluate and analyse the reliability of collaborative networks in supply chains and reduce the comparative computational complexity of the algorithms. Set theory is employed to evaluate and model the all-terminal reliability of the improved spanning tree algorithm, and a case study of a supply chain used in lamp production illustrates the application of the proposed approach.
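
    For reference, the brute-force alternative that the improved spanning tree approach aims to beat can be written directly: Monte Carlo sampling of link failures with a connectivity check per sample. The toy network and link reliability below are invented.

      import random
      import networkx as nx

      G = nx.cycle_graph(6)     # 6-node ring of supply chain partners
      G.add_edge(0, 3)          # plus one cross-link
      p_edge = 0.9              # probability each link stays up
      random.seed(42)

      def all_terminal_reliability(G, p, trials=20_000):
          ok = 0
          for _ in range(trials):
              H = nx.Graph()
              H.add_nodes_from(G.nodes)
              H.add_edges_from(e for e in G.edges if random.random() < p)
              ok += nx.is_connected(H)    # all partners still reachable?
          return ok / trials

      print(all_terminal_reliability(G, p_edge))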

  19. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system.

    PubMed

    Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao

    2017-03-15

    As oil transport increases in the Texas bays, greater risks of ship collisions will become a challenge, yielding oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. The state-of-the-art operational oil spill forecast modeling system improves the oil spill response into a new stage. However, uncertainty due to predicted data inputs often elicits compromise on the reliability of the forecast result, leading to misdirection in contingency planning. Thus, understanding the forecast uncertainty and reliability becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map and the associated hindcast simulation. A HyosPy-based simple statistic model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Reliability and Validity of Inferences about Teachers Based on Student Scores. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Haertel, Edward H.

    2013-01-01

    Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive…

  1. Rollover risk prediction of heavy vehicles by reliability index and empirical modelling

    NASA Astrophysics Data System (ADS)

    Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles

    2018-03-01

    This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models. This is done by using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
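
    With a surrogate in place of the full dynamics, the warning logic itself is compact. The sketch below treats the predicted peak LTR in the coming bend as a random variable (an invented lognormal standing in for the surrogate-predicted distribution under uncertain speed, load, and road inputs) and warns when P(LTR > 1) exceeds a tolerance.

      import numpy as np

      rng = np.random.default_rng(3)
      # Invented distribution of the predicted peak LTR in the next bend.
      peak_ltr = rng.lognormal(mean=np.log(0.6), sigma=0.25, size=50_000)

      p_rollover = np.mean(peak_ltr > 1.0)    # LTR = 1 means wheel lift-off
      print(f"estimated rollover probability: {p_rollover:.4f}")
      if p_rollover > 0.01:
          print("warn driver before bend entry")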

  2. Improved Acquisition for System Sustainment: Multiobjective Tradeoff Analysis for Condition-Based Decision-Making

    DTIC Science & Technology

    2013-10-21

    depend on the quality of allocating resources. This work uses a reliability model of system and environmental covariates incorporating information at ... state space. Further, the use of condition variables allows for the direct modeling of maintenance impact with the assumption that a nominal value ... value), the model in the application of aviation maintenance can provide a useful estimation of reliability at multiple levels. Adjusted survival

  3. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution and, in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has been practically neglected in most reliability analyses, and, when included, humans have been modeled as components and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too-simple analogies from the technical domain are used in the modeling of human behavior. In this paper, redundancy in a man-machine system is addressed. It is shown how simplifications from the technical domain, when applied to the human components of a system, may give non-conservative estimates of system reliability.

  4. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  5. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE PAGES

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
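
    Steps three and four of the process (estimating models over combinations of the candidate inputs and comparing them with a flexible metric) might look like the following sketch, where ordinary least squares and AIC stand in for whatever model family and metrics a given stockpile study would use; the age/usage/humidity data are synthetic.

      import itertools
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      n = 200
      data = {"age": rng.uniform(0, 20, n),
              "usage": rng.uniform(0, 1, n),
              "humidity": rng.uniform(20, 90, n)}
      y = 2.0 - 0.05 * data["age"] - 0.8 * data["usage"] \
          + rng.normal(0, 0.2, n)

      results = []
      for r in range(1, len(data) + 1):
          for combo in itertools.combinations(data, r):
              X = sm.add_constant(np.column_stack([data[c] for c in combo]))
              fit = sm.OLS(y, X).fit()
              results.append((fit.aic, combo))

      for aic, combo in sorted(results)[:3]:   # prioritized list for experts
          print(f"AIC = {aic:8.1f}  inputs = {combo}")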

  6. Modeling of a Stacked Power Module for Parasitic Inductance Extraction

    DTIC Science & Technology

    2017-09-15

    issues of heat dissipation, reliability, and parasitic inductance. An improved packaging approach has been proposed to simultaneously address each of...and mechanical attachments. The power devices in the resulting module design are stacked between copper layers with an integrated heat sink. By...stacking devices, the module’s parasitic inductance should be reduced, with concurrent improvement of reliability and heat dissipation, in comparison to

  7. Opportunities of probabilistic flood loss models

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Lüdtke, Stefan; Vogel, Kristin; Merz, Bruno

    2016-04-01

    Oftentimes, traditional uni-variate damage models, such as depth-damage curves, fail to reproduce the variability of observed flood damage. However, reliable flood damage models are a prerequisite for the practical usefulness of the model results. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks, and traditional stage damage functions. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and the sharpness and reliability of the predictions, the latter represented by the proportion of observations that fall within the 5- to 95-quantile predictive interval. The comparison of the uni-variable stage damage function and the multi-variable model approach emphasises the importance of quantifying predictive uncertainty. With each explanatory variable, the multi-variable model reveals an additional source of uncertainty. However, the predictive performance in terms of precision (mbe), accuracy (mae) and reliability (HR) is clearly improved in comparison to the uni-variable stage damage function. Overall, probabilistic models provide quantitative information about prediction uncertainty, which is crucial to assess the reliability of model predictions and improves the usefulness of model results.

  8. Improving rice models for more reliable prediction of responses of rice yield to CO2 and temperature elevation

    USDA-ARS?s Scientific Manuscript database

    Materials and Methods: The simulation exercise and model improvement were implemented phase-wise. In the first modelling activities, model sensitivities were evaluated for CO2 concentrations varying from 360 to 720 µmol mol-1 at intervals of 90 µmol mol-1 and air temperature increments...

  9. Gearbox Reliability Collaborative Phase 3 Gearbox 3 Test

    DOE Data Explorer

    Keller, Jonathan (ORCID:0000000177243885)

    2016-12-28

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, disseminating data and results to the industry and facilitating improvement of gearbox reliability. This test data describes the tests of GRC gearbox 3 in the National Wind Technology Center dynamometer and documents any modifications to the original test plan. It serves as a guide to interpreting the publicly released data sets, with brief analyses to illustrate the data. TDMS viewer and Solidworks software are required to view the data files.

  10. Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test

    DOE Data Explorer

    Keller, Jonathan; Robb, Wallen

    2016-05-12

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, disseminating data and results to the industry and facilitating improvement of gearbox reliability. This test data describes the tests of GRC gearbox 2 in the National Wind Technology Center dynamometer and documents any modifications to the original test plan. It serves as a guide to interpreting the publicly released data sets, with brief analyses to illustrate the data. TDMS viewer and Solidworks software are required to view the data files.

  11. Ceramic component reliability with the restructured NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.

    1992-01-01

    The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for statistical fast-fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume-flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).

  12. PV System 'Availability' as a Reliability Metric -- Improving Standards, Contract Language and Performance Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey T.; Hill, Roger; Walker, Andy

    The use of the term 'availability' to describe a photovoltaic (PV) system and power plant has been fraught with confusion for many years. A term that is meant to describe equipment operational status is often omitted, misapplied or inaccurately combined with PV performance metrics, due to attempts to measure performance and reliability through the lens of traditional power plant language. This paper discusses three areas where current research in standards, contract language and performance modeling is improving the way availability is used with regard to photovoltaic systems and power plants.

  13. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    PubMed

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund

    2015-09-01

    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online and received deviation scores as immediate feedback (i.e., their own scores compared with expert raters') after each rating. Growth curves of absolute SD scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training was assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a master's or doctoral degree.

  14. Producing Cochrane systematic reviews-a qualitative study of current approaches and opportunities for innovation and improvement.

    PubMed

    Turner, Tari; Green, Sally; Tovey, David; McDonald, Steve; Soares-Weiser, Karla; Pestridge, Charlotte; Elliott, Julian

    2017-08-01

    Producing high-quality, relevant systematic reviews and keeping them up to date is challenging. Cochrane is a leading provider of systematic reviews in health. For Cochrane to continue to contribute to improvements in health, Cochrane Reviews must be rigorous, reliable and up to date. We aimed to explore existing models of Cochrane Review production and emerging opportunities to improve the efficiency and sustainability of these processes. To inform discussions about how best to achieve this, we conducted 26 interviews and an online survey with 106 respondents. Respondents highlighted the importance and challenge of creating reliable, timely systematic reviews. They described the challenges and opportunities presented by current production models, and they shared what they are doing to improve review production. In particular, they highlighted significant challenges with the increasing complexity of review methods, the difficulty of keeping authors on board and on track, and the length of time required to complete the process. Strong themes emerged about the roles of authors and Review Groups, the central actors in the review production process. The results suggest that improvements to Cochrane's systematic review production models could come from improving clarity of roles and expectations, ensuring continuity and consistency of input, enabling active management of the review process, centralising some review production steps, breaking reviews into smaller "chunks", and improving approaches to building capacity of and sharing information between authors and Review Groups. Respondents noted the important role new technologies have to play in enabling these improvements. The findings of this study will inform the development of new Cochrane Review production models and may provide valuable data for other systematic review producers as they consider how best to produce rigorous, reliable, up-to-date reviews.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.

  16. Validity and feasibility of the american college of surgeons colectomy composite outcome quality measure.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Wang, Xue; Adams, John L; Chow, Warren B; Lawson, Elise H; Bilimoria, Karl Y; Richards, Karen; Ko, Clifford Y

    2013-03-01

    To develop a reliable, robust, parsimonious, risk-adjusted 30-day composite colectomy outcome measure. A fundamental aspect in the pursuit of high-quality care is the development of valid and reliable performance measures in surgery. Colon resection is associated with appreciable morbidity and mortality and therefore is an ideal quality improvement target. From 2010 American College of Surgeons National Surgical Quality Improvement Program data, patients were identified who underwent colon resection for any indication. A composite outcome of death or any serious morbidity within 30 days of the index operation was established. A 6-predictor, parsimonious model was developed and compared with a more complex model with more variables. National caseload requirements were calculated on the basis of increasing reliability thresholds. From 255 hospitals, 22,346 patients who underwent a colon resection in 2010 were accrued, most commonly for neoplasm (46.7%). A mortality or serious morbidity event occurred in 4461 patients (20.0%). At the hospital level, the median composite event rate was 20.7% (interquartile range: 15.8%-26.3%). The parsimonious model performed similarly to the full model (Akaike information criterion: 19,411 vs 18,988), and hospital-level performance comparisons were highly correlated (R = 0.97). At a reliability threshold of 0.4, 56 annual colon resections would be required, a caseload achieved at an estimated 42% of US hospitals and 69% of American College of Surgeons National Surgical Quality Improvement Program hospitals; these 42% of US hospitals performed approximately 84% of all colon resections in the country in 2008. It is feasible to design a measure with a composite outcome of death or serious morbidity after colon surgery that has a low burden for data collection, has substantial clinical importance, and has acceptable reliability.
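The caseload-versus-reliability logic referenced above can be sketched by treating reliability as signal variance over signal-plus-noise variance for an observed event rate. The between-hospital SD and the worked numbers below are illustrative assumptions, not the study's estimates:

```python
# Hypothetical sketch of reliability as a function of annual caseload.
def caseload_for_reliability(p: float, between_sd: float, target_r: float) -> float:
    """Annual cases needed so that measure reliability reaches target_r.

    p          -- overall event rate (e.g. 0.20 for the composite outcome)
    between_sd -- SD of true hospital event rates (assumed, on the rate scale)
    target_r   -- desired reliability threshold (e.g. 0.4)
    """
    signal_var = between_sd ** 2
    noise_var_per_case = p * (1 - p)          # binomial sampling variance
    # r = signal / (signal + noise/n)  =>  n = (noise/signal) * r / (1 - r)
    return noise_var_per_case / signal_var * target_r / (1 - target_r)

# Example: caseload_for_reliability(0.20, 0.05, 0.4) -> ~42.7 cases per year
```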

  17. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    PubMed

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.

  18. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system

    PubMed Central

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-01-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions. PMID:29278245
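A minimal sketch of the kind of Weibull-based preventive-maintenance tradeoff described in the two records above, using the standard age-replacement renewal-reward cost rate rather than the authors' exact model; all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad

def weibull_reliability(t, beta, eta):
    """R(t) for a Weibull reliability degradation model."""
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, beta, eta, c_prev, c_fail):
    """Long-run cost per unit time of preventive replacement every T time
    units (textbook age-replacement formula, not the paper's exact model)."""
    R = lambda t: weibull_reliability(t, beta, eta)
    expected_cycle_cost = c_prev * R(T) + c_fail * (1 - R(T))
    expected_cycle_len, _ = quad(R, 0, T)   # E[min(failure time, T)]
    return expected_cycle_cost / expected_cycle_len

# Scan candidate intervals and keep the cheapest (parameters illustrative).
Ts = np.linspace(10, 400, 200)
rates = [cost_rate(T, beta=2.5, eta=200.0, c_prev=1.0, c_fail=10.0) for T in Ts]
best_T = Ts[int(np.argmin(rates))]   # optimal preventive maintenance interval
```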

  19. Approximation of reliabilities for multiple-trait model with maternal effects.

    PubMed

    Strabel, T; Misztal, I; Bertrand, J K

    2001-04-01

    Reliabilities for a multiple-trait maternal model were obtained by combining reliabilities obtained from single-trait models. Single-trait reliabilities were obtained using an approximation that supported models with additive and permanent environmental effects. For the direct effect, the maternal and permanent environmental variances were assigned to the residual. For the maternal effect, variance of the direct effect was assigned to the residual. Data included 10,550 birth weight, 11,819 weaning weight, and 3,617 postweaning gain records of Senepol cattle. Reliabilities were obtained by generalized inversion and by using single-trait and multiple-trait approximation methods. Some reliabilities obtained by inversion were negative because inbreeding was ignored in calculating the inverse of the relationship matrix. The multiple-trait approximation method reduced the bias of approximation when compared with the single-trait method. The correlations between reliabilities obtained by inversion and by multiple-trait procedures for the direct effect were 0.85 for birth weight, 0.94 for weaning weight, and 0.96 for postweaning gain. Correlations for maternal effects for birth weight and weaning weight were 0.96 to 0.98 for both approximations. Further improvements can be achieved by refining the single-trait procedures.
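For context, a hedged sketch of the generic quantity being approximated here: the reliability of an estimated breeding value computed from its prediction error variance. This is the textbook definition, not the paper's single- or multiple-trait approximation:

```python
def reliability_from_pev(pev: float, sigma2_a: float) -> float:
    """Reliability of an estimated breeding value: r^2 = 1 - PEV / sigma_a^2,
    where PEV is the prediction error variance (the expensive quantity that
    approximation methods avoid computing by matrix inversion) and sigma_a^2
    is the additive genetic variance."""
    return 1.0 - pev / sigma2_a

# Example: reliability_from_pev(12.0, 40.0) -> 0.70
```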

  20. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE PAGES

    Mittal, Sparsh; Vetter, Jeffrey S.

    2015-04-24

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory, etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  1. A Survey of Techniques for Modeling and Improving Reliability of Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Sparsh; Vetter, Jeffrey S.

    Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory, etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.

  2. Tackling reliability and construct validity: the systematic development of a qualitative protocol for skill and incident analysis.

    PubMed

    Savage, Trevor Nicholas; McIntosh, Andrew Stuart

    2017-03-01

    It is important to understand the factors contributing to and directly causing sports injuries in order to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about the reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis, biomechanical theoretical models, and the epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback, and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel, and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable (52-90% agreement; ICC between 0.6 and 0.9) with other studies that have reported inter-rater reliability of qualitative analysis of human movement.

  3. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed with the integration of the distributed collaborative response surface method and the fuzzy neural network regression model. The mathematical model of DCFRM is established and the idea of probabilistic design with DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. A comparison of methods shows that the DCFRM reshapes the probabilistic analysis of multi-failure structures and improves computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for the reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.

  4. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  5. Model testing for reliability and validity of the Outcome Expectations for Exercise Scale.

    PubMed

    Resnick, B; Zimmerman, S; Orwig, D; Furstenberg, A L; Magaziner, J

    2001-01-01

    Development of a reliable and valid measure of outcome expectations for exercise appropriate for older adults will help establish the relationship between outcome expectations and exercise. Once established, this measure can be used to facilitate the development of interventions to strengthen outcome expectations and improve adherence to regular exercise in older adults. Building on initial psychometrics of the Outcome Expectations for Exercise (OEE) Scale, the purpose of the current study was to use structural equation modeling to provide additional support for the reliability and validity of this measure. The OEE scale is a 9-item measure specifically focusing on the perceived consequences of exercise for older adults. The OEE scale was given to 191 residents in a continuing care retirement community. The mean age of the participants was 85 ± 6.1 years, and the majority were female (76%), White (99%), and unmarried (76%). Using structural equation modeling, reliability was based on R² values, and validity was based on a confirmatory factor analysis and path coefficients. There was continued evidence for reliability of the OEE based on R² values ranging from .42 to .77, and validity with path coefficients ranging from .69 to .87, and evidence of model fit (χ² = 69, df = 27, p < .05, NFI = .98, RMSEA = .07). The evidence of reliability and validity of this measure has important implications for clinical work and research. The OEE scale can be used to identify older adults who have low outcome expectations for exercise, and interventions can then be implemented to strengthen these expectations and thereby improve exercise behavior.

  6. An Integrated Miniature Pulse Tube Cryocooler at 80K

    NASA Astrophysics Data System (ADS)

    Chen, H. L.; Yang, L. W.; Cai, J. H.; Liang, J. T.; Zhang, L.; Zhou, Y.

    2008-03-01

    Two integrated models of coaxial miniature pulse tube coolers based on an experimental model are manufactured. Performance of the integrated models is compared to that of the experimental model. Reliability and stability of an integrated model are tested and improved.

  7. Blockmodeling and the Estimation of Evolutionary Architectural Growth in Major Defense Acquisition Programs

    DTIC Science & Technology

    2016-04-30

    Dabkowski, and Dixit (2015), we demonstrate that the DoDAF models required pre–MS A map to 14 of the 18 parameters of the Constructive Systems...engineering effort in complex systems. Saarbrücken, Germany: VDM Verlag. Valerdi, R., Dabkowski, M., & Dixit , I. (2015). Reliability improvement of...R., Dabkowski, M., & Dixit , I. (2015). Reliability Improvement of Major Defense Acquisition Program Cost Estimates – Mapping DoDAF to COSYSMO

  8. A New Reliability Analysis Model of the Chegongzhuang Heat-Supplying Tunnel Structure Considering the Coupling of Pipeline Thrust and Thermal Effect

    PubMed Central

    Zhang, Jiawen; He, Shaohui; Wang, Dahai; Liu, Yangpeng; Yao, Wenbo; Liu, Xiabing

    2018-01-01

    Based on the operating Chegongzhuang heat-supplying tunnel in Beijing, the reliability of its lining structure under the action of large thrust and thermal effects is studied. According to the characteristics of heat-supplying tunnel service, a three-dimensional numerical analysis model was established based on mechanical tests of in-situ specimens. The stress and strain of the tunnel structure were obtained before and after the operation, and comparison with field monitoring data verified the rationality of the model. After extracting the internal force of the lining structure, an improved subset simulation method was proposed, with the internal force as the performance function, to calculate the reliability of the main control section of the tunnel. In contrast to the traditional calculation method, the analytic relationship between the sample numbers of the subset simulation method and the Monte Carlo method was given. The results indicate that the lining structure is greatly influenced by the coupling within six meters of the fixed brackets, especially the tunnel floor. The improved subset simulation method can greatly save computation time and improve computational efficiency while ensuring the accuracy of the calculation; it is suitable for reliability calculation in tunnel engineering because "the lower the probability, the more efficient the calculation." PMID:29401691
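Subset simulation estimates a small failure probability as a product of larger conditional probabilities, which is what makes it cheaper than crude Monte Carlo at low probabilities. A textbook sketch under assumed standard normal inputs and a toy limit-state function (not the paper's improved variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Toy limit-state function on standard normal inputs; failure when g < 0.
    return 3.5 - (x[:, 0] + x[:, 1]) / np.sqrt(2)

def subset_simulation(g, dim=2, n=1000, p0=0.1, max_levels=10):
    """Basic subset simulation with a component-wise Metropolis resampler.
    Estimates P(g(X) < 0) for standard normal X; a textbook sketch."""
    x = rng.standard_normal((n, dim))
    gx = g(x)
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(gx, p0)
        if thresh <= 0:                         # failure domain reached
            return prob * np.mean(gx < 0)
        prob *= p0                              # descend one conditional level
        seeds = x[gx <= thresh]
        chains = []
        per_seed = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur = s
            for _ in range(per_seed):
                cand = cur + 0.5 * rng.standard_normal(dim)
                # Metropolis accept/reject for the standard normal target,
                # restricted to the current intermediate domain g <= thresh.
                if rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand)):
                    if g(cand[None, :])[0] <= thresh:
                        cur = cand
                chains.append(cur.copy())
        x = np.asarray(chains[:n])
        gx = g(x)
    return prob * np.mean(gx < 0)

# Exact answer for this toy case: P(Z > 3.5), Z standard normal, ~2.3e-4.
# print(subset_simulation(g))
```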

  9. Interactive Reliability Model for Whisker-toughened Ceramics

    NASA Technical Reports Server (NTRS)

    Palko, Joseph L.

    1993-01-01

    Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.

  10. Feature reliability determines specificity and transfer of perceptual learning in orientation search.

    PubMed

    Yashar, Amit; Denison, Rachel N

    2017-12-01

    Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL's effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments.

  11. Feature reliability determines specificity and transfer of perceptual learning in orientation search

    PubMed Central

    2017-01-01

    Training can modify the visual system to produce a substantial improvement on perceptual tasks and therefore has applications for treating visual deficits. Visual perceptual learning (VPL) is often specific to the trained feature, which gives insight into processes underlying brain plasticity, but limits VPL’s effectiveness in rehabilitation. Under what circumstances VPL transfers to untrained stimuli is poorly understood. Here we report a qualitatively new phenomenon: intrinsic variation in the representation of features determines the transfer of VPL. Orientations around cardinal are represented more reliably than orientations around oblique in V1, which has been linked to behavioral consequences such as visual search asymmetries. We studied VPL for visual search of near-cardinal or oblique targets among distractors of the other orientation while controlling for other display and task attributes, including task precision, task difficulty, and stimulus exposure. Learning was the same in all training conditions; however, transfer depended on the orientation of the target, with full transfer of learning from near-cardinal to oblique targets but not the reverse. To evaluate the idea that representational reliability was the key difference between the orientations in determining VPL transfer, we created a model that combined orientation-dependent reliability, improvement of reliability with learning, and an optimal search strategy. Modeling suggested that not only search asymmetries but also the asymmetric transfer of VPL depended on preexisting differences between the reliability of near-cardinal and oblique representations. Transfer asymmetries in model behavior also depended on having different learning rates for targets and distractors, such that greater learning for low-reliability distractors facilitated transfer. These findings suggest that training on sensory features with intrinsically low reliability may maximize the generalizability of learning in complex visual environments. PMID:29240813

  12. A reliability analysis tool for SpaceWire network

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability of the network system can be deduced by integrating all the reliability indexes in the matrix. With this method, we develop a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and multi-path-task reliability are also implemented. Using this tool, we analyze several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool has a direct influence on both task division and topology selection in the design phase of SpaceWire network systems.
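The task-based idea of combining link reliabilities along redundant routes can be illustrated with a generic series/parallel sketch; this is an assumption-level simplification, not the paper's reliability-matrix formulation:

```python
from math import prod

def path_reliability(link_rels):
    """Series combination: a route works only if every link on it works."""
    return prod(link_rels)

def task_reliability(paths):
    """Parallel combination over redundant routes serving one task:
    the task fails only if every route fails."""
    return 1.0 - prod(1.0 - path_reliability(p) for p in paths)

# Dual-redundant route vs a single route for the same task (values assumed):
single = task_reliability([[0.99, 0.98, 0.99]])        # ~0.9605
dual   = task_reliability([[0.99, 0.98, 0.99]] * 2)    # ~0.9984
```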

  13. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    NASA Astrophysics Data System (ADS)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    A reliability mathematical model for a high temperature and high pressure multi-stage decompression control valve (HMDCV) is established based on stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of key dangerous components and the fatigue sensitivity curve of each component are calculated and analyzed by combining the fatigue life analysis of the control valve with the reliability theory of the control valve model. The impact proportion of each component on control valve system fatigue failure was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts accords with the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
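A minimal sketch of the stress-strength interference calculation with a temperature correction applied to the strength distribution, assuming normally distributed stress and strength; the correction factor k_t and all values are hypothetical:

```python
from math import sqrt

from scipy.stats import norm

def ssi_reliability(mu_s, sd_s, mu_l, sd_l, k_t=1.0):
    """Stress-strength interference with normal strength S and load stress L:
    R = P(S > L) = Phi((k_t*mu_s - mu_l) / sqrt((k_t*sd_s)**2 + sd_l**2)).
    k_t is a hypothetical temperature correction factor that derates the
    fatigue-limit (strength) distribution, mirroring the idea above."""
    return norm.cdf((k_t * mu_s - mu_l) / sqrt((k_t * sd_s) ** 2 + sd_l ** 2))

# Derated strength at high temperature lowers the computed reliability:
# ssi_reliability(600, 40, 450, 30) vs ssi_reliability(600, 40, 450, 30, k_t=0.85)
```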

  14. The Challenges of Credible Thermal Protection System Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2013-01-01

    The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.

  15. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS System. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
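LOLP for a single period can be computed exactly by convolving two-state unit outage models into a capacity-outage probability table, which is the expensive step that motivates the simplified formulations discussed above. A textbook sketch with illustrative units:

```python
import numpy as np

def lolp(unit_caps, unit_fors, load):
    """Loss-of-load probability for one period via the capacity-outage
    probability table, built by convolving two-state unit models.
    unit_caps -- capacity of each unit (MW); unit_fors -- forced outage rates.
    A textbook computation, not the paper's simplified formulations."""
    total = int(sum(unit_caps))
    table = np.zeros(total + 1)
    table[0] = 1.0                                # start: zero capacity out
    for cap, q in zip(unit_caps, unit_fors):
        new = np.zeros_like(table)
        new += (1 - q) * table                    # unit available
        new[int(cap):] += q * table[: total + 1 - int(cap)]  # unit on outage
        table = new
    outage_mw = np.arange(total + 1)
    # Loss of load whenever available capacity (total - outage) < load.
    return table[(total - outage_mw) < load].sum()

# Example (values assumed): lolp([200, 200, 100], [0.05, 0.05, 0.08], load=420)
```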

  16. What do we gain with Probabilistic Flood Loss Models?

    NASA Astrophysics Data System (ADS)

    Schroeter, K.; Kreibich, H.; Vogel, K.; Merz, B.; Lüdtke, S.

    2015-12-01

    The reliability of flood loss models is a prerequisite for their practical usefulness. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks and traditional stage damage functions which are cast in a probabilistic framework. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005, 2006 and 2013 in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability which is represented by the proportion of the number of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.

  17. Confronting uncertainty in flood damage predictions

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno

    2015-04-01

    Reliable flood damage models are a prerequisite for the practical usefulness of the model results. Oftentimes, traditional uni-variate damage models as for instance depth-damage curves fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data which are available from computer aided telephone interviews that were respectively compiled after the floods in 2002, 2005 and 2006, in the Elbe and Danube catchments in Germany. We carry out a split sample test by sub-setting the damage records. One sub-set is used to derive the models and the remaining records are used to evaluate the predictive performance of the model. Further we stratify the sample according to catchments which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of the individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error) as well as in terms of reliability which is represented by the proportion of the number of observations that fall within the 95-quantile and 5-quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty which is crucial to assess the reliability of model predictions and improves the usefulness of model results.

  18. Improving the modelling of irradiation-induced brain activation for in vivo PET verification of proton therapy.

    PubMed

    Bauer, Julia; Chen, Wenjing; Nischwitz, Sebastian; Liebl, Jakob; Rieken, Stefan; Welzel, Thomas; Debus, Juergen; Parodi, Katia

    2018-04-24

    A reliable Monte Carlo prediction of proton-induced brain tissue activation used for comparison to particle therapy positron-emission-tomography (PT-PET) measurements is crucial for in vivo treatment verification. Major limitations of current approaches that need to be overcome include the CT-based patient model and the description of activity washout due to tissue perfusion. Two approaches were studied to improve the activity prediction for brain irradiation: (i) a refined patient model using tissue classification based on MR information and (ii) a PT-PET data-driven refinement of washout model parameters. Improvements of the activity predictions compared to post-treatment PT-PET measurements were assessed in terms of activity profile similarity for six patients treated with a single field or two almost parallel fields delivered by active proton beam scanning. The refined patient model yields a generally higher similarity for most of the patients, except in highly pathological areas leading to tissue misclassification. Using washout model parameters deduced from clinical patient data considerably improved the activity profile similarity for all patients. Current methods used to predict proton-induced brain tissue activation can be improved with MR-based tissue classification and data-driven washout parameters, thus providing a more reliable basis for PT-PET verification.

  19. Flow Channel Influence of a Collision-Based Piezoelectric Jetting Dispenser on Jet Performance

    PubMed Central

    Deng, Guiling; Li, Junhui; Duan, Ji’an

    2018-01-01

    To improve the jet performance of a bi-piezoelectric jet dispenser, mathematical and simulation models were established according to the operating principle. In order to improve the accuracy and reliability of the simulation calculation, a viscosity model of the fluid was fitted to a fifth-order function of shear rate based on rheological test data, and the needle displacement model was fitted to a ninth-order function of time based on real-time displacement test data. The results show that jet performance is related to the diameter of the nozzle outlet and the cone angle of the nozzle, and the impacts of the flow channel structure were confirmed. The numerical simulation approach is confirmed by test results for droplet volume. It will provide a reliable simulation platform for mechanical collision-based jet dispensing and a theoretical basis for micro jet valve design and improvement. PMID:29677140

  20. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli

    This report collects the effort performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the usage of surrogate models, by extending the current RAVEN capabilities to multi-physics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  2. Quantitative metal magnetic memory reliability modeling for welded joints

    NASA Astrophysics Data System (ADS)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments, with X-ray testing carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with the decreasing residual life ratio T, and that the maximal error between the predicted reliability degree R1 and the verified reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.

  3. Risk-adjusted hospital outcomes for children's surgery.

    PubMed

    Saito, Jacqueline M; Chen, Li Ern; Hall, Bruce L; Kraemer, Kari; Barnhart, Douglas C; Byrd, Claudia; Cohen, Mark E; Fei, Chunyuan; Heiss, Kurt F; Huffman, Kristopher; Ko, Clifford Y; Latus, Melissa; Meara, John G; Oldham, Keith T; Raval, Mehul V; Richards, Karen E; Shah, Rahul K; Sutton, Laura C; Vinocur, Charles D; Moss, R Lawrence

    2013-09-01

    The American College of Surgeons National Surgical Quality Improvement Program-Pediatric was initiated in 2008 to drive quality improvement in children's surgery. Low mortality and morbidity in previous analyses limited differentiation of hospital performance. Participating institutions included children's units within general hospitals and free-standing children's hospitals. Cases selected by Current Procedural Terminology codes encompassed procedures within pediatric general, otolaryngologic, orthopedic, urologic, plastic, neurologic, thoracic, and gynecologic surgery. Trained personnel abstracted demographic, surgical profile, preoperative, intraoperative, and postoperative variables. Incorporating procedure-specific risk, hierarchical models for 30-day mortality and morbidities were developed with significant predictors identified by stepwise logistic regression. Reliability was estimated to assess the balance of information versus error within models. In 2011, 46,281 patients from 43 hospitals were accrued; 1467 codes were aggregated into 226 groupings. Overall mortality was 0.3%, composite morbidity 5.8%, and surgical site infection (SSI) 1.8%. Hierarchical models revealed outlier hospitals with above or below expected performance for composite morbidity in the entire cohort, pediatric abdominal subgroup, and spine subgroup; SSI in the entire cohort and pediatric abdominal subgroup; and urinary tract infection in the entire cohort. Based on reliability estimates, mortality discriminates performance poorly due to the very low event rate; however, reliable model construction for composite morbidity and SSI that differentiates institutions is feasible. The National Surgical Quality Improvement Program-Pediatric expansion has yielded risk-adjusted models to differentiate hospital performance in composite and specific morbidities. However, mortality has low utility as a children's surgery performance indicator. Programmatic improvements have resulted in actionable data.

  4. Educational Management Organizations as High Reliability Organizations: A Study of Victory's Philadelphia High School Reform Work

    ERIC Educational Resources Information Center

    Thomas, David E.

    2013-01-01

    This executive position paper proposes recommendations for designing reform models between public and private sectors dedicated to improving school reform work in low performing urban high schools. It reviews scholarly research about for-profit educational management organizations, high reliability organizations, American high school reform, and…

  5. Reliability of MEG source imaging of anterior temporal spikes: analysis of an intracranially characterized spike focus.

    PubMed

    Wennberg, Richard; Cheyne, Douglas

    2014-05-01

    To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions.

  6. Separating predictable and unpredictable work to manage interruptions and promote safe and effective work flow.

    PubMed

    Kowinsky, Amy M; Shovel, Judith; McLaughlin, Maribeth; Vertacnik, Lisa; Greenhouse, Pamela K; Martin, Susan Christie; Minnier, Tamra E

    2012-01-01

    Predictable and unpredictable patient care tasks compete for caregiver time and attention, making it difficult for patient care staff to reliably and consistently meet patient needs. We have piloted a redesigned care model that separates the work of patient care technicians based on task predictability and creates role specificity. This care model shows promise in improving the ability of staff to reliably complete tasks in a more consistent and timely manner.

  7. Methodology and estimation of the welfare impact of energy reforms on households in Azerbaijan

    NASA Astrophysics Data System (ADS)

    Klytchnikova, Irina

    This dissertation develops a new approach that enables policy-makers to analyze welfare gains from improvements in the quality of infrastructure services in developing countries where data are limited and supply is subject to interruptions. An application of the proposed model in the former Soviet Republic of Azerbaijan demonstrates how this approach can be used in welfare assessment of energy sector reforms. The planned reforms in Azerbaijan include a set of measures that will result in a significant improvement in supply reliability, accompanied by a significant increase in the prices of energy services so that they reach the cost recovery level. Currently, households in rural areas receive electricity and gas for only a few hours a day because of a severe deterioration of the energy infrastructure following the collapse of the Soviet Union. The reforms that have recently been initiated will have far-reaching poverty and distributional consequences for the country as they result in an improvement in supply reliability and an increase in energy prices. The new model of intermittent supply developed in this dissertation is based on the household production function approach and draws on previous research in the energy reliability literature. Since modern energy sources (network gas and electricity) in Azerbaijan are cleaner and cheaper than the traditional fuels (fuel wood, etc.), households choose modern fuels whenever they are available. During outages, they rely on traditional fuels. Theoretical welfare measures are derived from a system of fuel demands that takes into account the intermittent availability of energy sources. The model is estimated with the data from the Azerbaijan Household Energy Survey, implemented by the World Bank in December 2003/January 2004. This survey includes an innovative contingent behavior module in which the respondents were asked about their energy consumption patterns in specified reform scenarios. Estimation results strongly indicate that households in the areas with poor supply quality have a high willingness to pay for reliability improvements. However, a relatively small group of households may incur substantial welfare losses from an electricity price increase even when it is combined with a partial reliability improvement. Unlike an earlier assessment of the same reforms in Azerbaijan, analysis in this dissertation clearly shows that targeted investments in improving service reliability may be the best way to mitigate adverse welfare consequences of electricity price increases. Hence, policymakers should focus their attention on ensuring that quality improvements are a central component of power sector reforms. Survey evidence also shows that, although households may incur sizable welfare losses from indoor air pollution when they rely on traditional fuels, they do not recognize indoor air pollution as a factor contributing to the high incidence of respiratory illness among fuel wood users. Therefore, benefits may be greater if policy interventions that improve the reliability of modern energy sources are combined with an information campaign about the adverse health effects of fuel wood use. (Abstract shortened by UMI.)

  8. Hydrologic Design in the Anthropocene

    NASA Astrophysics Data System (ADS)

    Vogel, R. M.; Farmer, W. H.; Read, L.

    2014-12-01

    In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWMs), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations that ignore model error produce output which cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
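
    The contrast between return-period statements and planning-horizon reliability can be made concrete with a short sketch: under stationarity a "100-year" event has a fixed annual exceedance probability, while under an assumed (purely illustrative) 2%-per-year drift the same design delivers much lower reliability over a 50-year horizon.

    ```python
    import numpy as np

    # Reliability over an n-year horizon: probability that no exceedance occurs.
    # Under stationarity p_t = 1/T for all t; under nonstationarity p_t drifts.
    def reliability(p):
        """Probability of no failure across the horizon, given per-year
        exceedance probabilities."""
        return np.prod(1.0 - np.asarray(p))

    horizon = 50
    T = 100                                   # nominal "100-year" design event
    p_stationary = np.full(horizon, 1.0 / T)
    # Hypothetical drift: exceedance probability grows 2% per year.
    p_nonstat = (1.0 / T) * 1.02 ** np.arange(horizon)

    print(f"stationary 50-yr reliability:    {reliability(p_stationary):.3f}")
    print(f"nonstationary 50-yr reliability: {reliability(p_nonstat):.3f}")
    ```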

  9. A simulated training model for laparoscopic pyloromyotomy: Is 3D printing the way of the future?

    PubMed

    Williams, Andrew; McWilliam, Morgan; Ahlin, James; Davidson, Jacob; Quantz, Mackenzie A; Bütter, Andreana

    2018-05-01

    Hypertrophic pyloric stenosis (HPS) is a common neonatal condition treated with open or laparoscopic pyloromyotomy. 3D-printed organs offer realistic simulations to practice surgical techniques. The purpose of this study was to validate a 3D HPS stomach model and assess model reliability and surgical realism. Medical students, general surgery residents, and adult and pediatric general surgeons were recruited from a single center. Participants were videotaped three times performing a laparoscopic pyloromyotomy using box trainers and 3D-printed stomachs. Attempts were graded independently by three reviewers using GOALS and Task Specific Assessments (TSA). Participants were surveyed using the Index of Agreement of Assertions on Model Accuracy (IAAMA). Participants reported their experience levels as novice (22%), inexperienced (26%), intermediate (19%), and experienced (33%). Interrater reliability was similar for overall average GOALS and TSA scores. There was a significant improvement in GOALS (p<0.0001) and TSA scores (p=0.03) between attempts and overall. Participants felt the model accurately simulated a laparoscopic pyloromyotomy (82%) and would be a useful tool for beginners (100%). A 3D-printed stomach model for simulated laparoscopic pyloromyotomy is a useful training tool for learners to improve laparoscopic skills. The GOALS and TSA provide reliable technical skills assessments. Level of Evidence: II. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach for almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for future work in this area of research are presented.
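
    A minimal sketch of the redundancy-versus-improvement trade-off is given below: a hypothetical four-component series system, a fixed budget, and a simulated annealing search over integer allocations. All reliabilities, costs, and the 20%-per-step improvement rule are invented for illustration and stand in for the life cycle models described in the abstract.

    ```python
    import math
    import random

    random.seed(1)

    # Hypothetical 4-component series system: base reliabilities and unit costs.
    base_r = [0.90, 0.95, 0.85, 0.92]
    cost_redundant = [4.0, 6.0, 3.0, 5.0]   # cost of one redundant unit
    cost_improve = [2.0, 3.0, 2.0, 2.5]     # cost of one "improvement step"
    BUDGET = 20.0

    def system_reliability(state):
        # state[i] = (n_redundant, n_improve_steps) for component i
        rel = 1.0
        for (n_red, n_imp), r0 in zip(state, base_r):
            r = 1.0 - (1.0 - r0) * 0.8 ** n_imp   # each step cuts failure prob 20%
            rel *= 1.0 - (1.0 - r) ** (1 + n_red) # parallel redundancy
        return rel

    def cost(state):
        return sum(n_red * cr + n_imp * ci
                   for (n_red, n_imp), cr, ci in zip(state, cost_redundant, cost_improve))

    def neighbour(state):
        s = [list(x) for x in state]
        i, j = random.randrange(len(s)), random.randrange(2)
        s[i][j] = max(0, s[i][j] + random.choice([-1, 1]))
        return [tuple(x) for x in s]

    state = best = [(0, 0)] * 4
    for k in range(5000):
        T = 1.0 * 0.999 ** k                      # geometric cooling schedule
        cand = neighbour(state)
        if cost(cand) > BUDGET:
            continue
        d = system_reliability(cand) - system_reliability(state)
        if d > 0 or random.random() < math.exp(d / T):
            state = cand
            if system_reliability(state) > system_reliability(best):
                best = state
    print(best, round(system_reliability(best), 4), round(cost(best), 1))
    ```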

  11. Hardware and software reliability estimation using simulations

    NASA Technical Reports Server (NTRS)

    Swern, Frederic L.

    1994-01-01

    The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.

  12. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    PubMed

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum rather than on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
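
    The linear-versus-non-linear comparison can be sketched as follows with scikit-learn, using PLS regression (a standard linear chemometric method for NIR spectra) against a kernel SVR as the non-linear alternative. The "spectra" here are synthetic random data rather than NIR measurements, so only the workflow, not the outcome, mirrors the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Synthetic stand-in for NIR spectra (samples x wavelengths) and BMP values.
    X = rng.normal(size=(120, 200))
    y = X[:, :5].sum(axis=1) + 2.0 * np.tanh(2 * X[:, 5]) \
        + rng.normal(scale=0.2, size=120)

    models = {
        "linear (PLS)": PLSRegression(n_components=10),
        "non-linear (SVR)": make_pipeline(StandardScaler(), SVR(C=10.0)),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
    ```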

  13. Refinement, Validation and Benchmarking of a Model for E-Government Service Quality

    NASA Astrophysics Data System (ADS)

    Magoutas, Babis; Mentzas, Gregoris

    This paper presents the refinement and validation of a model for the Quality of e-Government Services (QeGS). We built upon our previous work, in which a conceptualized model was identified, and focused on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which benchmarked very positively against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This forms the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.

  14. Reliability and Validity of the Sexual Pressure Scale for Women-Revised

    PubMed Central

    Jones, Rachel; Gulick, Elsie

    2008-01-01

    Sexual pressure among young urban women represents adherence to gender stereotypical expectations to engage in sex. Revision of the original 5-factor Sexual Pressure Scale was undertaken in two studies to improve reliabilities in two of the five factors. In Study 1 the reliability of the Sexual Pressure Scale for Women-Revised (SPSW-R) was tested, and principal components analysis was performed in a sample of 325 young, urban women. A parsimonious 18-item, 4-factor model explained 61% of the variance. In Study 2 the theory underlying sexual pressure was supported by confirmatory factor analysis using structural equation modeling in a sample of 181 women. Reliabilities of the SPSW-R total and subscales were very satisfactory, suggesting it may be used in intervention research. PMID:18666222
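
    A common way to quantify the internal-consistency reliabilities referred to above is Cronbach's alpha; the sketch below computes it from scratch on synthetic item responses driven by a single latent trait (sample sizes and noise levels are illustrative).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) matrix of scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    trait = rng.normal(size=300)                    # latent trait per respondent
    items = trait[:, None] + rng.normal(scale=0.8, size=(300, 6))
    print(f"alpha = {cronbach_alpha(items):.2f}")   # high internal consistency
    ```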

  15. The transparency, reliability and utility of tropical rainforest land-use and land-cover change models.

    PubMed

    Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M

    2014-06-01

    Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models in the tropics and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios, but, to do that reliably may require further improvements in the discipline. © 2014 John Wiley & Sons Ltd.

  16. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    PubMed Central

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-01-01

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites’ relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474

  17. One-year test-retest reliability of intrinsic connectivity network fMRI in older adults

    PubMed Central

    Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.

    2014-01-01

    “Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
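
    For the intraclass coefficients used as the reliability criterion above, a minimal from-scratch computation of ICC(2,1) (two-way random effects, absolute agreement, single measurement) on synthetic two-session data might look like this; the subject count mirrors the study but the data are simulated.

    ```python
    import numpy as np

    def icc_2_1(scores):
        """Two-way random, absolute-agreement, single-measure ICC(2,1).
        scores: (n_subjects, k_sessions)."""
        s = np.asarray(scores, dtype=float)
        n, k = s.shape
        grand = s.mean()
        ms_r = k * ((s.mean(1) - grand) ** 2).sum() / (n - 1)   # subjects
        ms_c = n * ((s.mean(0) - grand) ** 2).sum() / (k - 1)   # sessions
        sse = ((s - s.mean(1, keepdims=True) - s.mean(0) + grand) ** 2).sum()
        ms_e = sse / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    rng = np.random.default_rng(0)
    trait = rng.normal(size=24)                    # per-subject connectivity
    scans = trait[:, None] + rng.normal(scale=0.7, size=(24, 2))  # 2 sessions
    print(f"ICC(2,1) = {icc_2_1(scans):.2f}")      # > 0.4 counts as reliable here
    ```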

  18. Benchmark analysis of forecasted seasonal temperature over different climatic areas

    NASA Astrophysics Data System (ADS)

    Giunta, G.; Salerno, R.; Ceppi, A.; Ercolani, G.; Mancini, M.

    2015-12-01

    From a long-term perspective, an improvement of seasonal forecasting, which is often based exclusively on climatology, could provide a new capability for the management of energy resources on a time scale of just a few months. This paper presents a benchmark analysis of long-term temperature forecasts over Italy for the year 2010, comparing the eni-kassandra meteo forecast (e-kmf®) model, the Climate Forecast System-National Centers for Environmental Prediction (CFS-NCEP) model, and the climatological reference (based on 25-year data) with observations. Statistical indexes are used to assess the reliability of the prediction of 2-m monthly air temperatures up to 12 weeks ahead. The results show that the best performance is achieved by the e-kmf® system, which improves the reliability of long-term forecasts compared to climatology and the CFS-NCEP model. By using a reliable high-performance forecast system, it is possible to optimize the natural gas portfolio and management operations, thereby obtaining a competitive advantage in the European energy market.

  19. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before a fault tolerance scheme is employed in it. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance, all of which interact through a centered graphical user interface.

  1. Using multivariate regression modeling for sampling and predicting chemical characteristics of mixed waste in old landfills.

    PubMed

    Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann

    2014-12-01

    Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills which lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, the combination of waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the trade-off between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables in order to predict a larger set of variables. To this end, we introduce a multivariate linear regression model and test its applicability using two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and twelve variables, with four predictor variables. The case study results are twofold: first, a reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). Second, for Landfill B, only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables entail comparably low analytical costs in comparison to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability were tested in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
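
    A bare-bones version of the prediction scheme, fitting one multivariate linear model that maps a handful of cheap predictors to a larger block of response variables, can be written with ordinary least squares; the dimensions loosely follow the case studies but the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50                                    # waste samples from "Landfill A"
    X = rng.normal(size=(n, 4))               # cheap predictors (LOI, EC, pH, Cl)
    B_true = rng.normal(size=(4, 8))          # 8 remaining response variables
    Y = X @ B_true + rng.normal(scale=0.3, size=(n, 8))

    # Fit one multivariate linear model: add intercept, solve least squares.
    Xd = np.column_stack([np.ones(n), X])
    B_hat, *_ = np.linalg.lstsq(Xd, Y, rcond=None)

    # "Landfill B": predict all responses from the 4 cheap measurements only.
    X_new = rng.normal(size=(10, 4))
    Y_pred = np.column_stack([np.ones(10), X_new]) @ B_hat
    print(Y_pred.shape)   # (10, 8): full chemical profile from 4 predictors
    ```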

  2. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
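
    A standard mathematical reliability growth model of the kind referred to here is the Crow-AMSAA (power-law NHPP) model, in which the growth rate is estimated directly from the failure log; the sketch below uses the time-truncated maximum-likelihood estimator with an invented failure history.

    ```python
    import numpy as np

    def crow_amsaa_fit(failure_times, T):
        """MLE for the Crow-AMSAA (power-law NHPP) model, time-truncated at T.
        Expected cumulative failures N(t) = lam * t**beta; beta < 1 means the
        failure intensity is falling, i.e. reliability growth."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        beta = n / np.sum(np.log(T / t))
        lam = n / T ** beta
        return lam, beta

    # Hypothetical failure log (hours) from a system receiving design upgrades:
    times = [40, 110, 320, 700, 1500, 2900, 5200, 8700]
    lam, beta = crow_amsaa_fit(times, T=10_000)
    print(f"beta = {beta:.2f} -> {'growth' if beta < 1 else 'wear-out/constant'}")
    ```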

  3. Design of high reliability organizations in health care.

    PubMed

    Carroll, J S; Rudolph, J W

    2006-12-01

    To improve safety performance, many healthcare organizations have sought to emulate high reliability organizations from industries such as nuclear power, chemical processing, and military operations. We outline high reliability design principles for healthcare organizations including both the formal structures and the informal practices that complement those structures. A stage model of organizational structures and practices, moving from local autonomy to formal controls to open inquiry to deep self-understanding, is used to illustrate typical challenges and design possibilities at each stage. We suggest how organizations can use the concepts and examples presented to increase their capacity to self-design for safety and reliability.

  4. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    PubMed

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution, with the most improvement occurring in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
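
    The rating-adjustment step can be illustrated with a small simulation: examinee scores contaminated by standardized-patient severity effects are adjusted by removing OLS-estimated rater effects. The assignment scheme, effect sizes and sample sizes are all invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_examinees, n_sps = 300, 12
    true_skill = rng.normal(size=n_examinees)
    severity = rng.normal(scale=0.5, size=n_sps)        # SP leniency/severity
    sp = rng.integers(0, n_sps, size=n_examinees)       # who rated whom
    rating = true_skill + severity[sp] + rng.normal(scale=0.3, size=n_examinees)

    # OLS adjustment: estimate SP effects with dummy regression, remove them.
    X = np.zeros((n_examinees, n_sps))
    X[np.arange(n_examinees), sp] = 1.0
    beta, *_ = np.linalg.lstsq(X, rating, rcond=None)   # per-SP mean rating
    adjusted = rating - (beta[sp] - beta.mean())        # recentre on grand mean

    for name, s in [("observed", rating), ("adjusted", adjusted)]:
        print(f"{name}: corr with true skill = "
              f"{np.corrcoef(s, true_skill)[0, 1]:.3f}")
    ```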

  5. Seals Research at AlliedSignal

    NASA Technical Reports Server (NTRS)

    Ullah, M. Rifat

    1996-01-01

    A consortium has been formed to address seal problems in the Aerospace sector of Allied Signal, Inc. The consortium is represented by makers of Propulsion Engines, Auxiliary Power Units, Gas Turbine Starters, etc. The goal is to improve Face Seal reliability, since Face Seals have become reliability drivers in many of our product lines. Several research programs are being implemented simultaneously this year. They include: Face Seal Modeling and Analysis Methodology; Oil Cooling of Seals; Seal Tracking Dynamics; Coking Formation & Prevention; and Seal Reliability Methods.

  6. Reliability of four models for clinical gait analysis.

    PubMed

    Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P

    2017-05-01

    Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates, and allows additional musculoskeletal analyses of surgically adjustable parameters, e.g. muscle-tendon lengths; it is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.

  7. Ensemble-Based Parameter Estimation in a Coupled General Circulation Model

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-09-10

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.

  8. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 1. Theory

    USGS Publications Warehouse

    Cooley, Richard L.

    1982-01-01

    Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
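
    For prior information of known reliability, the penalized least-squares form of the idea, observation equations augmented by weighted prior equations on the parameters, can be sketched as follows; the weights and data are illustrative and do not reproduce the paper's auxiliary-parameter optimization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy groundwater-style inverse problem: few, noisy observations of a
    # linear model, plus independent prior estimates of the parameters.
    X = rng.normal(size=(15, 4))
    beta_true = np.array([2.0, -1.0, 0.5, 3.0])
    y = X @ beta_true + rng.normal(scale=0.5, size=15)
    beta_prior = beta_true + rng.normal(scale=0.3, size=4)  # "best available"
    W = np.eye(4) / 0.3 ** 2          # prior weight = inverse prior variance

    # Penalized normal equations: (X'X + W) b = X'y + W b_prior.
    beta_hat = np.linalg.solve(X.T @ X + W, X.T @ y + W @ beta_prior)
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    print("OLS error:        ", np.abs(beta_ols - beta_true).mean().round(3))
    print("prior-aided error:", np.abs(beta_hat - beta_true).mean().round(3))
    ```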

  9. Moon Trek: NASA's New Online Portal for Lunar Mapping and Modeling

    NASA Astrophysics Data System (ADS)

    Day, B. H.; Law, E. S.

    2016-11-01

    This presentation introduces Moon Trek, a new name for a major new release of NASA's Lunar Mapping and Modeling Portal (LMMP). The new Trek interface provides greatly improved navigation, 3D visualization, performance, and reliability.

  10. Reliability of risk-adjusted outcomes for profiling hospital surgical quality.

    PubMed

    Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B

    2014-05-01

    Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
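
    The caseload and event-rate dependence of reliability described above follows from treating reliability as signal variance over signal-plus-noise variance; the sketch below reproduces the qualitative AAA-versus-colectomy contrast using the article's caseloads and event rates but an assumed between-hospital variance.

    ```python
    import numpy as np

    def outcome_reliability(var_between, event_rate, caseload):
        """Reliability of a hospital's observed rate: signal variance over
        signal plus noise, with binomial noise shrinking as caseload grows."""
        var_within = event_rate * (1 - event_rate) / caseload
        return var_between / (var_between + var_within)

    var_b = 0.0015   # assumed between-hospital variance on the rate scale
    print(f"AAA repair (25 cases/yr, 18.3% rate): "
          f"{outcome_reliability(var_b, 0.183, 25):.2f}")
    print(f"colectomy (114 cases/yr, 26.8% rate): "
          f"{outcome_reliability(var_b, 0.268, 114):.2f}")
    ```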

  11. Probabilistic versus deterministic skill in predicting the western North Pacific-East Asian summer monsoon variability with multimodel ensembles

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin

    2017-04-01

    Based on historical forecasts of three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of the coupled MME over contributing single-model ensembles (SMEs) and over the uncoupled atmospheric MME in predicting the Western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by the Brier skill score (BSS) and anomaly correlation (AC), respectively. A forecast-format dependent MME superiority over SMEs is found: the probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, the seasonal predictions with the coupled MME are found to be more skillful than those with the uncoupled atmospheric MME forced by persisting sea surface temperature (SST) anomalies, since the coupled MME better predicts the SST anomaly evolution in three key regions.
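
    The reliability/resolution attributes discussed here come from the Murphy decomposition of the Brier score; a compact binned implementation, with a Brier skill score against climatology, is sketched below on synthetic, well-calibrated forecasts.

    ```python
    import numpy as np

    def brier_decomposition(p, o, n_bins=10):
        """Murphy decomposition BS = REL - RES + UNC for binary outcomes o and
        forecast probabilities p, using fixed-width probability bins.
        (Lower REL is better; higher RES is better.)"""
        p, o = np.asarray(p), np.asarray(o)
        obar = o.mean()
        rel = res = 0.0
        bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
        for k in range(n_bins):
            m = bins == k
            if m.any():
                w = m.mean()
                rel += w * (p[m].mean() - o[m].mean()) ** 2
                res += w * (o[m].mean() - obar) ** 2
        unc = obar * (1 - obar)
        return rel, res, unc, rel - res + unc

    rng = np.random.default_rng(0)
    p = rng.uniform(size=2000)
    o = (rng.uniform(size=2000) < p).astype(int)   # well-calibrated toy forecasts
    rel, res, unc, bs = brier_decomposition(p, o)
    print(f"REL={rel:.3f} RES={res:.3f} UNC={unc:.3f} BS={bs:.3f}")
    print(f"BSS vs climatology = {1 - bs / unc:.3f}")
    ```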

  12. Estimating the impact on health of poor reliability of drinking water interventions in developing countries.

    PubMed

    Hunter, Paul R; Zmirou-Navier, Denis; Hartemann, Philippe

    2009-04-01

    Recent evidence suggests that many improved drinking water supplies suffer from poor reliability. This study investigates what impact poor reliability may have on achieving health improvement targets. A Quantitative Microbiological Risk Assessment was conducted of the impact of interruptions in water supplies that force people to revert to drinking raw water. Data from the literature were used to construct models for three waterborne pathogens common in Africa: Rotavirus, Cryptosporidium and enterotoxigenic E. coli. Risk of infection by the target pathogens is substantially greater on days when people revert to raw water consumption. Over the course of a few days of raw water consumption, almost all of the annual health benefits attributed to consumption of water from an improved supply are lost. Furthermore, the risk of illness on days of raw water consumption falls substantially on very young children, who have the highest risk of death following infection. Agencies responsible for implementing improved drinking water provision will not make meaningful contributions to public health targets if those systems are subject to poor reliability. Funders of water quality interventions in developing countries should put more effort into auditing whether interventions are sustainable and whether the health benefits are being achieved.
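
    The core arithmetic, that a few raw-water days can erase a year's benefit, can be reproduced by compounding daily infection risks over the year; the two daily risk values below are placeholders, not the paper's pathogen-specific estimates.

    ```python
    def annual_infection_risk(p_daily_improved, p_daily_raw, raw_days):
        """Annual risk when `raw_days` of supply interruption force raw-water
        use; the remaining days are served by the improved supply."""
        safe_days = 365 - raw_days
        return 1 - (1 - p_daily_improved) ** safe_days * (1 - p_daily_raw) ** raw_days

    # Hypothetical daily infection risks: treated supply vs untreated source.
    p_improved, p_raw = 1e-5, 5e-3

    for d in [0, 3, 7, 30]:
        print(f"{d:3d} raw-water days/yr -> annual risk "
              f"{annual_infection_risk(p_improved, p_raw, d):.4f}")
    ```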

  13. Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, H.; Keller, J.; Guo, Y.

    2013-04-01

    Gearboxes in wind turbines have not been achieving their expected design life even though they commonly meet or exceed the design criteria specified in current design standards. One of the basic premises of the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) is that the low gearbox reliability results from the absence of critical elements in the design process or insufficient design tools. Key goals of the GRC are to improve design approaches and analysis tools and to recommend practices and test methods resulting in improved design standards for wind turbine gearboxes that lower the cost of energy (COE) through improved reliability. The GRC uses a combined gearbox testing, modeling and analysis approach, along with a database of information from gearbox failures collected from overhauls and an investigation of gearbox condition monitoring techniques, to improve wind turbine operations and maintenance practices. This plan covers testing of Gearbox 2 (GB2) using the two-speed turbine controller that has been used in prior testing. The test series will investigate non-torque loads, high-speed shaft misalignment, and the reproduction of field conditions in the dynamometer. It will also include vibration testing using an eddy-current brake on the gearbox's high-speed shaft.

  14. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-dimensional shape/texture optimal synthetic representation, description and learning, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space are presented here, together with a demo application. Progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. Ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. Any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization for reliable automated object learning and discrimination can benefit from the GEOGINE progressive automated model generation computational kernel. The main operational advantages over previous, similar approaches are: 1) progressive automated invariant model generation, 2) an invariant minimal complete description set for computational efficiency, and 3) arbitrary model precision for robust object description and identification.

  15. Reviewing Reliability and Validity of Information for University Educational Evaluation

    NASA Astrophysics Data System (ADS)

    Otsuka, Yusaku

    To better utilize evaluations in higher education, it is necessary to share the methods of reviewing reliability and validity of examination scores and grades, and to accumulate and share data for confirming results. Before the GPA system is first introduced into a university or college, the reliability of examination scores and grades, especially for essay examinations, must be assured. Validity is a complicated concept, so should be assured in various ways, including using professional audits, theoretical models, and statistical data analysis. Because individual students and teachers are continually improving, using evaluations to appraise their progress is not always compatible with using evaluations in appraising the implementation of accountability in various departments or the university overall. To better utilize evaluations and improve higher education, evaluations should be integrated into the current system by sharing the vision of an academic learning community and promoting interaction between students and teachers based on sufficiently reliable and validated evaluation tools.

  16. The Real World Significance of Performance Prediction

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  17. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
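
    In architecture-based Markov models of this kind (e.g. in the spirit of Cheung's model), the tool's reliability is the probability that control flow reaches a "success" absorbing state, which reduces to solving a small linear system; the control-flow matrix and component reliabilities below are invented for illustration.

    ```python
    import numpy as np

    # Transient states are components; transfer probabilities give the control
    # flow, and each component i succeeds with reliability R[i].
    P = np.array([[0.0, 0.7, 0.3],     # hypothetical control flow: component 0
                  [0.0, 0.0, 1.0],     # goes to 1 (70%) or 2 (30%), etc.
                  [0.0, 0.0, 0.0]])    # component 2 exits the tool
    R = np.array([0.99, 0.97, 0.95])   # per-component reliabilities
    exit_to_success = np.array([0.0, 0.0, 1.0])

    Q = R[:, None] * P                 # move on only if the component worked
    b = R * exit_to_success            # finish successfully from terminal comps
    # Probability of reaching "success" from each starting component:
    rel = np.linalg.solve(np.eye(3) - Q, b)
    print(f"system reliability from entry component 0: {rel[0]:.4f}")
    ```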

  18. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea of DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results reveal that the optimal static blade-tip clearance of the HPT is obtained for the BTRRC design, improving the performance and reliability of the aeroengine. The comparison of methods shows that the DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.
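
    The surrogate-plus-sampling pattern behind such probabilistic designs can be sketched in a few lines: train an SVR on a small design of experiments against a stand-in clearance function, then run cheap Monte Carlo on the surrogate. The limit state, input distributions and 0.5 mm allowable are illustrative assumptions, not the paper's turbine model.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)

    # Stand-in limit state: clearance = f(rotor speed, gas temperature, load);
    # the true response would come from expensive FE/thermal analyses.
    def clearance(x):
        return 2.0 - 0.8 * x[:, 0] ** 2 - 0.5 * x[:, 1] + 0.3 * x[:, 0] * x[:, 2]

    # 1) Small design of experiments to train the SVR surrogate.
    X_doe = rng.normal(size=(80, 3))
    surrogate = SVR(C=100.0, epsilon=0.01).fit(X_doe, clearance(X_doe))

    # 2) Cheap Monte Carlo on the surrogate for the failure probability
    #    (failure: clearance below a hypothetical 0.5 mm allowable).
    X_mc = rng.normal(size=(50_000, 3))
    p_fail = np.mean(surrogate.predict(X_mc) < 0.5)
    print(f"estimated failure probability: {p_fail:.4f}")
    ```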

  19. An improved cellular automata model for train operation simulation with dynamic acceleration

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Nie, Lei

    2018-03-01

    Urban rail transit plays an important role in urban public transport because of its advantages of high speed, large transport capacity, high safety, reliability and low pollution. This study proposes an improved cellular automaton (CA) model that accounts for the dynamic characteristics of train acceleration in order to analyze energy consumption and train running time. Constructing an effective model for calculating energy consumption is the basis for studying and analyzing energy-saving measures for urban rail transit system operation.
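
    A minimal single-train CA update with a speed-dependent (rather than fixed) acceleration term might look like the following; the track length, speed cap and traction parameters are invented, and the "energy" is only a traction-work proxy.

    ```python
    import numpy as np

    # Single train on a 1D cellular track. Unlike the classic fixed "+1 cell per
    # step" acceleration rule, the acceleration term here decays with speed (a
    # crude dynamic-acceleration characteristic); braking is enforced near the
    # terminal station.
    L_TRACK, V_MAX = 2000, 20          # track length and speed cap, in cells
    A0, K = 3.0, 0.10                  # hypothetical traction parameters

    pos, v, t, energy = 0, 0, 0, 0.0
    while pos < L_TRACK:
        a = max(1, round(A0 * np.exp(-K * v)))               # dynamic accel.
        v = min(v + a, V_MAX, max(1, (L_TRACK - pos) // 8))  # cap / braking
        pos, t = pos + v, t + 1
        energy += a * v                                      # traction proxy
    print(f"run time: {t} steps, traction-energy proxy: {energy:.0f}")
    ```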

  20. Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations

    PubMed Central

    Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo

    2016-01-01

    In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of postoperative facial distortion. However, it is difficult to simulate the soft tissue behaviors induced by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the estimation of the elastic parameters included in the RB model. The soft facial tissue deformation can therefore be predicted from biomechanical properties combined with the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. It also demonstrates that the more accurate biomechanical information the model has, the better prediction performance it can achieve. PMID:27717593
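
    The GRNN step is essentially Nadaraya-Watson kernel regression with one pattern unit per training sample; a from-scratch sketch on stand-in stress/displacement data is shown below (the bandwidth sigma and the data are illustrative).

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """Generalized Regression Neural Network: kernel regression in which
        each training sample acts as a pattern unit."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))      # pattern-layer activations
        return (w @ y_train) / w.sum(axis=1)    # summation/output layers

    rng = np.random.default_rng(0)
    # Stand-in data: FEM nodal stress features -> observed tissue displacement.
    X = rng.normal(size=(200, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
    Xq = rng.normal(size=(5, 3))
    print(grnn_predict(X, y, Xq))
    ```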

  1. Performance of a system of reservoirs on futuristic front

    NASA Astrophysics Data System (ADS)

    Saha, Satabdi; Roy, Debasri; Mazumdar, Asis

    2017-10-01

    Application of the simulation model HEC-5 to analyze the performance of the DVC reservoir system (a multipurpose system with a network of five reservoirs and one barrage) on the river Damodar in Eastern India, in meeting projected future demand as well as controlling floods under a synthetically generated future scenario, is addressed here with a view to developing an appropriate operating strategy. The Thomas-Fiering model (based on a Markov autoregressive model) has been adopted for generation of the synthetic scenario (monthly streamflow series), and the modeled monthly streamflows were subsequently downscaled to daily values. The performance of the system (analysed on a seasonal basis) in terms of performance indices (viz., both quantity-based and time-based reliability, mean daily deficit, average failure period, resilience and maximum vulnerability) for the projected scenario with enhanced demand turned out to be poor compared to that for the historical scenario. However, judicious adoption of resource enhancement (marginal reallocation of reservoir storage capacity) and a demand management strategy (curtailment of projected high water requirements and trading off between demands) was found to be a viable option for appreciably improving the performance of the reservoir system [the improvement being (1-51 %), (2-35 %), (16-96 %), (25-50 %), (8-36 %) and (12-30 %) for quantity-based reliability, time-based reliability, mean daily deficit, average failure period, resilience and maximum vulnerability, respectively] compared to that with normal storage and projected demand. Again, 100 % reliability of flood control was noted for the current as well as the future synthetically generated scenarios. The results of the study would assist the concerned authorities in the successful operation of the reservoirs in the context of growing demand and dwindling resources.
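
    A compact version of the Thomas-Fiering generator used for the synthetic scenario is sketched below; it preserves monthly means, standard deviations and lag-one correlations of a (here simulated) historical record. Pairing December with the same year's January is a simplification of the usual cross-year lag.

    ```python
    import numpy as np

    def thomas_fiering(monthly_q, n_years, rng):
        """Synthetic monthly flows via Thomas-Fiering:
        q[t+1] = mean[j+1] + b[j]*(q[t]-mean[j]) + e*std[j+1]*sqrt(1-r[j]^2),
        with j the calendar month and b[j] = r[j]*std[j+1]/std[j]."""
        q = np.asarray(monthly_q, dtype=float)       # (years, 12) historical
        mean, std = q.mean(0), q.std(0, ddof=1)
        r = np.array([np.corrcoef(q[:, j], q[:, (j + 1) % 12])[0, 1]
                      for j in range(12)])           # lag-1 month correlations
        out = np.empty(n_years * 12)
        out[0] = mean[0]
        for t in range(1, len(out)):
            j, jn = (t - 1) % 12, t % 12
            b = r[j] * std[jn] / std[j]
            out[t] = (mean[jn] + b * (out[t - 1] - mean[j])
                      + rng.normal() * std[jn] * np.sqrt(max(0.0, 1 - r[j] ** 2)))
        return out.clip(min=0.0)                     # no negative flows

    rng = np.random.default_rng(7)
    hist = rng.gamma(shape=2.0, scale=50.0, size=(30, 12))  # stand-in record
    synthetic = thomas_fiering(hist, n_years=50, rng=rng)
    print(synthetic[:12].round(1))
    ```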

  2. Cotton irrigation scheduling using a crop growth model and FAO-56 methods: Field and simulation studies

    USDA-ARS?s Scientific Manuscript database

    Crop growth simulation models can address a variety of agricultural problems, but their use to directly assist in-season irrigation management decisions is less common. Confidence in model reliability can be increased if models are shown to provide improved in-season management recommendations, whi...

  3. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially through new materials and processes. In high-quality capacitors, the onset of intrinsic wear-out failures now lies far beyond the mission duration of most high-reliability applications. However, in capacitors with defects, degradation processes can accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to the reduction of breakdown voltages and decreasing times to failure is suggested. The effect of defect size is analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) for predicting reliability at normal operating conditions, and the limitations of voltage acceleration, are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.
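
    A toy power-balance version of the thermal runaway argument, Arrhenius-like Joule heating in a defect against linear conductive dissipation, shows the qualitative defect-size effect: larger defects run away at lower voltages. Every parameter value below is illustrative and not from the paper.

    ```python
    import numpy as np

    # Toy power balance for a defective MLCC: runaway occurs when there is no
    # temperature at which dissipation catches up with heating.
    T_amb, Ea, kB = 300.0, 0.6, 8.617e-5     # K, eV, eV/K (assumed values)
    k_diss = 2e-3                            # W/K, conductive dissipation
    T = np.arange(T_amb + 1.0, 600.0)        # candidate temperatures
    boost = np.exp((Ea / kB) * (1.0 / T_amb - 1.0 / T))  # Arrhenius heating

    def runaway_voltage(r_um):
        """Smallest voltage with no stable operating point (hypothetical
        scaling: defect conductance proportional to defect size)."""
        G0 = 1e-6 * r_um
        for V in np.arange(1.0, 300.0, 0.5):
            if np.all(G0 * V ** 2 * boost > k_diss * (T - T_amb)):
                return V
        return float("inf")

    for r in [1.0, 5.0, 20.0]:               # defect size, micrometres
        print(f"defect ~{r:4.1f} um -> runaway above ~{runaway_voltage(r):.0f} V")
    ```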

  4. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic 7.0M earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.
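
    The "multiple imperfect experts" idea can be sketched with an EM-style loop that alternates between a consensus estimate and per-volunteer reliability weights (a simplified cousin of the Dawid-Skene approach); volunteer counts and accuracies are simulated stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy setting: volunteers label image tiles as damaged(1)/intact(0).
    n_tiles, n_vols = 500, 8
    truth = rng.integers(0, 2, n_tiles)
    skill = rng.uniform(0.55, 0.95, n_vols)         # per-volunteer accuracy
    labels = np.where(rng.uniform(size=(n_tiles, n_vols)) < skill[None, :],
                      truth[:, None], 1 - truth[:, None])

    # EM-style loop: estimate consensus, then volunteer reliability, repeat.
    est = labels.mean(1).round()                    # start from majority vote
    for _ in range(10):
        rel = (labels == est[:, None]).mean(0)      # agreement with consensus
        w = np.log(np.clip(rel, 1e-3, 1 - 1e-3) / np.clip(1 - rel, 1e-3, 1))
        est = ((labels * 2 - 1) @ w > 0).astype(int)  # reliability-weighted vote

    print(f"majority-vote accuracy: {(labels.mean(1).round() == truth).mean():.3f}")
    print(f"weighted-vote accuracy: {(est == truth).mean():.3f}")
    ```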

  5. Engine System Model Development for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Nelson, Karl W.; Simpson, Steven P.

    2006-01-01

    In order to design, analyze, and evaluate conceptual Nuclear Thermal Propulsion (NTP) engine systems, an improved NTP design and analysis tool has been developed. The NTP tool utilizes the Rocket Engine Transient Simulation (ROCETS) system tool and many of the routines from the Enabler reactor model found in Nuclear Engine System Simulation (NESS). Improved non-nuclear component models and an external shield model were added to the tool. With the addition of a nearly complete system reliability model, the tool will provide performance, sizing, and reliability data for NERVA-Derived NTP engine systems. A new detailed reactor model is also being developed and will replace Enabler. The new model will allow more flexibility in reactor geometry and include detailed thermal hydraulics and neutronics models. A description of the reactor, component, and reliability models is provided. Another key feature of the modeling process is the use of comprehensive spreadsheets for each engine case. The spreadsheets include individual worksheets for each subsystem with data, plots, and scaled figures, making the output very useful to each engineering discipline. Sample performance and sizing results with the Enabler reactor model are provided including sensitivities. Before selecting an engine design, all figures of merit must be considered including the overall impacts on the vehicle and mission. Evaluations based on key figures of merit of these results and results with the new reactor model will be performed. The impacts of clustering and external shielding will also be addressed. Over time, the reactor model will be upgraded to design and analyze other NTP concepts with CERMET and carbide fuel cores.

  6. Flood loss model transfer: on the value of additional data

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno

    2017-04-01

    The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage influencing factors is expensive and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013 mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model approach using Random Forests and (ii) a uni-variable stage damage function. Both model approaches are embedded in a bootstrapping process which allows evaluating the uncertainty of model predictions. Predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions; mean squared error and sharpness about precision and hit rate is an indicator for model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
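
    The value-of-additional-data question can be mocked up as a learning curve: for increasing numbers of (synthetic) loss records, a multi-variable Random Forest and a uni-variable stage-damage polynomial are refit repeatedly and scored on held-out data. The damage-generating function and its variables are invented stand-ins for the survey data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)

    # Stand-in loss data: damage ratio driven by water depth plus other factors.
    def draw(n):
        depth = rng.uniform(0, 3, n)                # m above ground
        area, precaution = rng.uniform(80, 250, n), rng.integers(0, 2, n)
        loss = np.clip(0.25 * depth - 0.1 * precaution
                       + 0.0005 * area + rng.normal(0, 0.05, n), 0, 1)
        return np.column_stack([depth, area, precaution]), loss

    X_test, y_test = draw(2000)
    for n in [50, 200, 800]:                        # value of additional data
        errs_rf, errs_sdf = [], []
        for _ in range(20):                         # bootstrap-style repeats
            X, y = draw(n)
            rf = RandomForestRegressor(n_estimators=100).fit(X, y)
            errs_rf.append(np.abs(rf.predict(X_test) - y_test).mean())
            coef = np.polyfit(X[:, 0], y, deg=2)    # uni-variable stage-damage
            errs_sdf.append(np.abs(np.polyval(coef, X_test[:, 0]) - y_test).mean())
        print(f"n={n:4d}  RF MAE={np.mean(errs_rf):.3f}  "
              f"SDF MAE={np.mean(errs_sdf):.3f}")
    ```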

  7. A CONSISTENT APPROACH FOR THE APPLICATION OF PHARMACOKINETIC MODELING IN CANCER RISK ASSESSMENT

    EPA Science Inventory

    Physiologically based pharmacokinetic (PBPK) modeling provides important capabilities for improving the reliability of the extrapolations across dose, species, and exposure route that are generally required in chemical risk assessment regardless of the toxic endpoint being consid...

  8. A pragmatic decision model for inventory management with heterogeneous suppliers

    NASA Astrophysics Data System (ADS)

    Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa

    2018-05-01

    For enterprises, it is imperative that the trade-off between the cost of inventory and its risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead time of the unreliable supplier extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of the inventory system.

  9. What Is the Right RFID for Your Process?

    DTIC Science & Technology

    2006-01-30

    Support Model for Valuing Proposed Improvements in Component Reliability. June 2005. NPS-PM-05-007 Dillard, John T., and Mark E. Nissen...Arlington, VA. 2005. Kang, Keebom, Ken Doerr, Uday Apte, and Michael Boudreau. “Decision Support Models for Valuing Improvements in Component...courses in the Executive and Full-time MBA programs. Areas of Uday’s research interests include managing service operations, supply chain

  10. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    USDA-ARS?s Scientific Manuscript database

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  11. Sleep versus wake classification from heart rate variability using computational intelligence: consideration of rejection in classification models.

    PubMed

    Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie

    2008-01-01

    Reliability of classification performance is important for many biomedical applications. A classification model that considers reliability during model development, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG)-determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural networks, multilayer perceptron (MLP) neural networks, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
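
    The rejection idea generalizes beyond ECG features; a minimal sketch (synthetic data, threshold value assumed) rejects test segments whose top-class posterior falls below a confidence threshold and scores accuracy on the remainder:

    ```python
    # Classification with rejection: segments whose top-class probability
    # falls below a threshold are rejected rather than scored.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    clf = SVC(probability=True, random_state=0).fit(Xtr, ytr)
    conf = clf.predict_proba(Xte).max(axis=1)    # confidence of the top class

    threshold = 0.8                              # assumed rejection threshold
    keep = conf >= threshold
    acc = (clf.predict(Xte[keep]) == yte[keep]).mean()
    print(f"rejected {1 - keep.mean():.0%} of segments, accuracy on kept: {acc:.1%}")
    ```

    Sweeping the threshold traces out the accuracy-versus-rejection-rate trade-off reported in the paper.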

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Liu, Z.; Zhang, S.

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
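
    The ensemble approach can be illustrated with a toy, single-parameter analogue: the parameter ensemble is nudged by the covariance between the parameter and the simulated observation, in the style of an ensemble Kalman update. This is a sketch under strong simplifications, not the study's assimilation system:

    ```python
    # Toy ensemble parameter estimation: the parameter is updated from the
    # ensemble covariance between parameter and observed variable.
    import numpy as np

    rng = np.random.default_rng(1)
    true_param = 0.8                 # stand-in for e.g. solar penetration depth
    n_ens, obs_err = 50, 0.1

    params = rng.normal(0.5, 0.2, n_ens)        # initial parameter ensemble
    for step in range(200):
        forcing = np.sin(0.1 * step)
        sim = params * forcing                  # each member's predicted obs
        obs = true_param * forcing + rng.normal(0, obs_err)
        cov_py = np.cov(params, sim)[0, 1]      # parameter-observation covariance
        gain = cov_py / (sim.var() + obs_err**2)
        # Perturbed-observation update pulls the ensemble toward the obs:
        params = params + gain * (obs + rng.normal(0, obs_err, n_ens) - sim)

    print(f"estimated parameter: {params.mean():.3f} (truth {true_param})")
    ```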

  13. An improved classification tree analysis of high cost modules based upon an axiomatic definition of complexity

    NASA Technical Reports Server (NTRS)

    Tian, Jianhui; Porter, Adam; Zelkowitz, Marvin V.

    1992-01-01

    Identification of high cost modules has been viewed as one mechanism to improve overall system reliability, since such modules tend to produce more than their share of problems. A decision tree model was used to identify such modules. In the current paper, a previously developed axiomatic model of program complexity is merged with the decision tree process, improving the ability to identify such modules. This improvement was tested using data from the NASA Software Engineering Laboratory.

  14. Design of high reliability organizations in health care

    PubMed Central

    Carroll, J S; Rudolph, J W

    2006-01-01

    To improve safety performance, many healthcare organizations have sought to emulate high reliability organizations from industries such as nuclear power, chemical processing, and military operations. We outline high reliability design principles for healthcare organizations including both the formal structures and the informal practices that complement those structures. A stage model of organizational structures and practices, moving from local autonomy to formal controls to open inquiry to deep self‐understanding, is used to illustrate typical challenges and design possibilities at each stage. We suggest how organizations can use the concepts and examples presented to increase their capacity to self‐design for safety and reliability. PMID:17142607

  15. Assessing the Culture and Climate for Quality Improvement in the Work Environment. AIR 1994 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Cameron, Kim; And Others

    This study attempted to develop a reliable and valid instrument for assessing the work environment and continuous quality improvement efforts in the non-academic sectors of colleges and universities, particularly those institutions that have adopted Total Quality Management programs. A model of a work environment for continuous quality improvement was…

  16. Reliability Growth of Tactical Coolers at CMC Electronics Cincinnati: 1/5-Watt Cooler Test Report

    NASA Astrophysics Data System (ADS)

    Kuo, D. T.; Lody, T. D.

    2004-06-01

    CMC Electronics Cincinnati (CMC) is conducting a reliability growth program to extend the life of tactical Stirling-cycle cryocoolers. The continuous product improvement process consists of testing production coolers to failure, determining the root cause, incorporating improvements, and verifying them. The most recent life data for the 1/5-Watt Cooler (Model B512B) are presented with a discussion of leading root causes and potential improvements. The mean time to failure (MTTF) of the coolers was found to be 22,552 hours, with the root causes of failure attributed to the accumulation of methane and carbon dioxide in the cooler and the wear of the piston.
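
    For context, a minimal sketch of how an MTTF point estimate and confidence bounds follow from failure hours under an exponential failure-time assumption (the listed hours are illustrative, not CMC's data):

    ```python
    # MTTF point estimate and 90% confidence bounds for a failure-truncated
    # test, assuming exponentially distributed failure times.
    import numpy as np
    from scipy import stats

    failure_hours = np.array([18500, 21300, 24800, 19900, 28260])  # hypothetical
    n = len(failure_hours)
    total_time = failure_hours.sum()

    mttf = total_time / n                       # maximum-likelihood estimate
    lower = 2 * total_time / stats.chi2.ppf(0.95, 2 * n)
    upper = 2 * total_time / stats.chi2.ppf(0.05, 2 * n)
    print(f"MTTF = {mttf:.0f} h, 90% CI ({lower:.0f}, {upper:.0f}) h")
    ```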

  17. Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Bai, P.

    2017-12-01

    Hydrological model parameters are commonly calibrated against observed streamflow data. This calibration strategy is questionable when the modeled hydrological variables of interest are not limited to streamflow: well-performing streamflow simulations do not guarantee the reliable reproduction of other hydrological variables, partly because the model parameters are not reasonably identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models, and compared it with the traditional single-objective calibration scheme based only on streamflow observations. Two monthly hydrological models were applied to 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration provided more reliable TWSC and ET simulations than the single-objective calibration, without significant deterioration in the accuracy of streamflow simulations. In addition, the improvements in TWSC and ET simulations were more pronounced in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in the parameter estimation to improve the performance of hydrological models.
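
    A minimal sketch of the multi-objective idea, with a toy bucket model standing in for the monthly hydrological models and a weighted sum of Nash-Sutcliffe efficiencies as the combined objective (all data, parameters, and weights are synthetic assumptions):

    ```python
    # Two-objective calibration sketch: streamflow and storage-change errors
    # combined into one weighted objective (toy monthly bucket model).
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(2)
    precip = rng.gamma(2.0, 20.0, 120)          # monthly precipitation [mm]

    def bucket(params, precip):
        """Toy monthly bucket model: returns (runoff, storage change)."""
        k_runoff, capacity = params
        storage, q, ds = 50.0, [], []
        for p in precip:
            s0 = storage
            storage = min(storage + p, capacity)
            out = k_runoff * storage
            storage -= out
            q.append(out)
            ds.append(storage - s0)
        return np.array(q), np.array(ds)

    q_obs, ds_obs = bucket([0.3, 200.0], precip)    # synthetic "observations"

    def objective(params, w=0.5):
        q, ds = bucket(params, precip)
        nse_q = 1 - np.sum((q - q_obs)**2) / np.sum((q_obs - q_obs.mean())**2)
        nse_s = 1 - np.sum((ds - ds_obs)**2) / np.sum((ds_obs - ds_obs.mean())**2)
        return -(w * nse_q + (1 - w) * nse_s)       # maximize weighted NSE

    res = differential_evolution(objective, bounds=[(0.01, 0.9), (50, 400)], seed=0)
    print("calibrated parameters:", res.x)
    ```

    With w = 1 the scheme collapses to the traditional single-objective (streamflow-only) calibration, which is the comparison made in the study.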

  18. A multicriteria decision making approach based on fuzzy theory and credibility mechanism for logistics center location selection.

    PubMed

    Wang, Bowen; Xiong, Haitao; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve the reliability and suitability of the selection with respect to the impacts of both qualitative and quantitative factors. However, such methods do not consider the consistency and the historical assessment accuracy of experts in earlier decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a priority mechanism for consistency and historical assessment accuracy into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on the credibility mechanism can improve the reliability and suitability of site selection for the logistics center.

  19. A Multicriteria Decision Making Approach Based on Fuzzy Theory and Credibility Mechanism for Logistics Center Location Selection

    PubMed Central

    Wang, Bowen; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve the reliability and suitability of the selection with respect to the impacts of both qualitative and quantitative factors. However, such methods do not consider the consistency and the historical assessment accuracy of experts in earlier decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a priority mechanism for consistency and historical assessment accuracy into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on the credibility mechanism can improve the reliability and suitability of site selection for the logistics center. PMID:25215319

  20. Power degradation and reliability study of high-power laser bars at quasi-CW operation

    NASA Astrophysics Data System (ADS)

    Zhang, Haoyu; Fan, Yong; Liu, Hui; Wang, Jingwei; Zah, Chungen; Liu, Xingsheng

    2017-02-01

    Solid-state lasers rely on laser diode (LD) pump arrays. For high peak power quasi-CW (QCW) operation, both the energy output per pulse and long-term reliability are critical. With improved bonding techniques, especially indium-free bonded diode laser bars, most device failures are caused by failures within the laser diode itself (wearout failures), induced by dark line defects (DLDs), bulk failures, point defect generation, and facet mirror damage. Measuring the reliability of LDs under QCW conditions takes a rather long time; alternatively, an accelerating model can provide a quicker estimate of LD lifetime under QCW operation. In this report, diode laser bars were mounted on micro-channel coolers (MCCs) and operated under QCW conditions with different current densities and junction temperatures (Tj). The junction temperature is varied by modulating pulse width and repetition frequency. The major concern here is power degradation due to facet failure. Reliability models of QCW operation and its corresponding failure modes are studied. In conclusion, a QCW accelerated lifetime model with a few variable parameters is discussed, and the model is compared with the CW model to establish their relationship.
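
    Accelerated life models of this kind are often written as an inverse power law in the stress variable multiplied by an Arrhenius temperature term; a sketch with assumed constants (not the paper's fitted values):

    ```python
    # Accelerated life sketch: inverse power law in current density combined
    # with an Arrhenius junction-temperature term (illustrative constants).
    import numpy as np

    K_B = 8.617e-5          # Boltzmann constant [eV/K]

    def lifetime(j, t_junction_c, a=1e9, n=2.0, ea=0.45):
        """Median life [h] ~ A * J**(-n) * exp(Ea / (kB * Tj)); all constants assumed."""
        tj_k = t_junction_c + 273.15
        return a * j**(-n) * np.exp(ea / (K_B * tj_k))

    # Acceleration factor of a stress test relative to use conditions:
    af = lifetime(j=1.0, t_junction_c=40.0) / lifetime(j=1.5, t_junction_c=70.0)
    print(f"acceleration factor: {af:.1f}")
    ```

    Fitting the exponent and activation energy from stress tests at several (current density, Tj) pairs is what lets a short test predict life under QCW use conditions.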

  1. Thermal Management and Reliability of Automotive Power Electronics and Electric Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narumanchi, Sreekant V; Bennion, Kevin S; Cousineau, Justine E

    Low-cost, high-performance thermal management technologies are helping meet aggressive power density, specific power, cost, and reliability targets for power electronics and electric machines. The National Renewable Energy Laboratory is working closely with numerous industry and research partners to help influence the development of components that meet aggressive performance and cost targets through the development and characterization of cooling technologies, and through thermal characterization and improvement of passive stack materials and interfaces. Thermomechanical reliability and lifetime estimation models are important enablers for industry in cost- and time-effective design.

  2. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, which employs the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle comprising SO, reliability assessment, and constraint updates is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.

  3. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a proton exchange membrane fuel cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and the contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping processes. We investigate the influence of the parameter variation coefficients on the probability distribution of the contact stress using an equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained from a stress-strength interference model. To achieve this optimal contact stress between the contact components, the component thickness and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structures for highly reliable fuel cell stacks.
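
    The stress-strength interference calculation is compact when both contact stress and component strength are taken as normally distributed; a sketch with assumed moments:

    ```python
    # Stress-strength interference for normal stress and strength:
    # reliability R = P(strength > stress), using illustrative moments.
    from math import sqrt
    from scipy.stats import norm

    mu_strength, sd_strength = 5.0, 0.5     # MPa, assumed component strength
    mu_stress, sd_stress = 3.5, 0.6         # MPa, from clamping-force scatter

    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    reliability = norm.cdf(z)
    print(f"component reliability: {reliability:.4f}")
    ```

    The design task then becomes choosing the clamping force and thickness difference so that the mean contact stress sits where this reliability is maximized.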

  4. Analysis Testing of Sociocultural Factors Influence on Human Reliability within Sociotechnical Systems: The Algerian Oil Companies.

    PubMed

    Laidoune, Abdelbaki; Rahal Gharbi, Med El Hadi

    2016-09-01

    The influence of sociocultural factors on human reliability within open sociotechnical systems is highlighted. The design of such systems is enhanced by experience feedback. The study was focused on a survey involving the observation of working cases, the processing of incident/accident statistics, and semistructured interviews for the qualitative part. In order to consolidate the study approach, we used a questionnaire schedule for standard statistical measurements. We tried to be unbiased by covering an exhaustive list of all worker categories, including age, sex, educational level, prescribed task, accountability level, etc. The survey was reinforced by a schedule distributed to 300 workers belonging to two oil companies. This schedule comprises 30 items related to six main factors that influence human reliability. Qualitative observations and schedule data processing showed that sociocultural factors can influence operator behaviors both negatively and positively, and that they influence human reliability in both qualitative and quantitative ways. The proposed model shows how reliability can be enhanced by measures such as experience feedback based on, for example, safety improvements, training, and information, together with continuous system improvements that strengthen the sociocultural context and reduce negative behaviors.

  5. Evaluation of Commercial Automotive-Grade BME Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    Three Ni-BaTiO3 ceramic capacitor lots with the same specification (chip size, capacitance, and rated voltage) and the same reliability level, made by three different manufacturers, were degraded using highly accelerated life stress testing (HALST) with the same temperature and applied voltage conditions. The reliability, as characterized by mean time to failure (MTTF), differed by more than one order of magnitude among the capacitor lots. A theoretical model based on the existence of depletion layers at grain boundaries and the entrapment of oxygen vacancies has been proposed to explain the MTTF difference among these BME capacitors. It is the conclusion of this model that reliability will not be improved simply by increasing the insulation resistance of a BME capacitor. Indeed, Ni-BaTiO3 ceramic capacitors with a smaller degradation rate constant K will always give rise to a longer reliability life.

  6. Evaluation of Commercial Automotive-Grade BME Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    Three Ni-BaTiO3 ceramic capacitor lots with the same specification (chip size, capacitance, and rated voltage) and the same reliability level, made by three different manufacturers, were degraded using highly accelerated life stress testing (HALST) with the same temperature and applied voltage conditions. The reliability, as characterized by mean time to failure (MTTF), differed by more than one order of magnitude among the capacitor lots. A theoretical model based on the existence of depletion layers at grain boundaries and the entrapment of oxygen vacancies has been proposed to explain the MTTF difference among these BME capacitors. It is the conclusion of this model that reliability will not be improved simply by increasing the insulation resistance of a BME capacitor. Indeed, Ni-BaTiO3 ceramic capacitors with a smaller degradation rate constant K will always give rise to a longer reliability life.

  7. APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS

    NASA Astrophysics Data System (ADS)

    Mehran, Babak; Nakamura, Hideki

    Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to the implementation of congestion relief schemes, based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For the subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying a Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is consequently estimated as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
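
    A compressed sketch of the Monte Carlo portion: random demand and capacity per interval yield a travel time distribution, from which the buffer time index follows. A BPR-style delay curve is used here purely as a stand-in for the paper's shockwave analysis, and all distribution parameters are assumed:

    ```python
    # Monte Carlo travel time reliability: sample demand and capacity per
    # interval, convert to travel time, then compute a buffer time index.
    import numpy as np

    rng = np.random.default_rng(3)
    n_intervals = 10000
    free_flow_min = 10.0

    demand = rng.normal(1800, 300, n_intervals).clip(min=0)    # veh/h
    capacity = rng.normal(2000, 150, n_intervals).clip(min=1)  # veh/h, degraded by
                                                               # weather/accidents
    # BPR-style delay as volume/capacity ratio rises (stand-in for shockwaves):
    vc = demand / capacity
    travel_time = free_flow_min * (1 + 0.15 * vc**4)

    mean_tt = travel_time.mean()
    tt95 = np.percentile(travel_time, 95)
    bti = (tt95 - mean_tt) / mean_tt                 # buffer time index
    print(f"mean {mean_tt:.1f} min, 95th pct {tt95:.1f} min, BTI = {bti:.2f}")
    ```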

  8. Reliability and Maintainability Model (RAM): User and Maintenance Manual. Part 2; Improved Supportability Analysis

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1996-01-01

    This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.

  9. Improved model for detection of homogeneous production batches of electronic components

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L. A.; Orlov, V. I.; Stashkov, D. V.; Antamoshkin, A. N.; Masich, I. S.

    2017-10-01

    Supplying the electronic units of complex technical systems with electronic devices of proper quality is one of the most important problems in increasing whole-system reliability. Moreover, to reach the highest reliability of an electronic unit, the electronic devices of the same type must have matched characteristics that assure their coherent operation. The highest homogeneity of characteristics is reached if the electronic devices are manufactured as a single production batch. Moreover, each production batch must be made from homogeneous raw materials. In this paper, we propose an improved model for detecting the homogeneous production batches within a shipped lot of electronic components, based on applying a kurtosis criterion to the results of the non-destructive testing performed for each lot of electronic devices used in the space industry.
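
    The intuition behind a kurtosis criterion is that a shipped lot mixing two production batches with shifted means is bimodal, which shows up as strongly negative excess kurtosis; a synthetic sketch:

    ```python
    # Kurtosis screen sketch: a mixed shipment (two batches with shifted
    # means) has excess kurtosis far from that of a single batch.
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(4)
    single = rng.normal(100.0, 1.0, 500)                 # one homogeneous batch
    mixed = np.concatenate([rng.normal(98.5, 1.0, 250),  # two batches mixed
                            rng.normal(101.5, 1.0, 250)])

    for name, x in [("single", single), ("mixed", mixed)]:
        print(f"{name}: excess kurtosis = {kurtosis(x):+.2f}")
    # Strongly negative excess kurtosis hints at a bimodal, i.e. mixed, lot.
    ```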

  10. Modeling human disease using organotypic cultures.

    PubMed

    Schweiger, Pawel J; Jensen, Kim B

    2016-12-01

    Reliable disease models are needed in order to improve the quality of healthcare. This includes gaining a better understanding of disease mechanisms, developing new therapeutic interventions and personalizing treatment. To date, the majority of our knowledge about disease states comes from in vivo animal models and in vitro cell culture systems. However, it has been exceedingly difficult to model disease at the tissue level. Recently, the gap between cell line studies and in vivo modeling has been narrowing thanks to progress in biomaterials and stem cell research. The development of reliable 3D culture systems has enabled a rapid expansion of sophisticated in vitro models. Here we focus on some of the latest advances and future perspectives in 3D organoids for human disease modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Experimental infection of calves by two genetically-distinct strains of rift valley fever virus

    USDA-ARS?s Scientific Manuscript database

    Recent outbreaks of Rift Valley fever in ruminant livestock, characterized by mass abortion and high mortality rates in neonates, have raised international interest in improving vaccine control strategies. Previously we developed a reliable challenge model for sheep that improves the evaluation of ...

  12. Reliability analysis method of a solar array by using fault tree analysis and fuzzy reasoning Petri net

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang

    2011-12-01

    To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes reliability to find the mechanisms of solar array faults. The indices final truth degree (FTD) and cosine matching function (CMF) are employed to evaluate the importance and influence of different faults, and an improved reliability analysis method is developed by means of the sorting of FTD and CMF. An example is analyzed using the proposed method. The analysis results show that the harsh thermal environment and impacts caused by particles in space are the most important causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful for spacecraft designers, particularly in the process of redesigning the solar array and scheduling its reliability growth plan.

  13. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    NASA Astrophysics Data System (ADS)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many other benefits. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate the costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as the two primary drivers of market growth: providing energy services (similar to an electric utility) and providing reliability service to customers within the microgrid. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability, as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. Results indicate that there are, at present, co-benefits for emissions reductions when customers adopt and operate microgrids for private benefit, though future analysis is needed as the bulk grid continues to transition toward a less carbon-intensive system.

  14. Village-level supply reliability of surface water irrigation in rural China: effects of climate change

    NASA Astrophysics Data System (ADS)

    Li, Yanrong; Wang, Jinxia

    2018-06-01

    Surface water, as the largest part of water resources, plays an important role in China's agricultural production and food security, and it is vulnerable to climate change. This paper examines the status of the supply reliability of surface water irrigation and discusses how it is affected by climate change in rural China. The field data used in this study were collected from a nine-province field survey during 2012 and 2013. Climate data were provided by China's National Meteorological Information Center and contain temperature and precipitation records for the past 30 years. A Tobit model (or censored regression model) was used to estimate the influence of climate change on the supply reliability of surface water irrigation. Descriptive results showed that surface water supply reliability was 74% over the past three years. Econometric results revealed that climate variables significantly influenced the supply reliability of surface water irrigation. Specifically, temperature is negatively related to supply reliability, while precipitation influences it positively; moreover, the climate influence differs by season. In summary, this paper improves our understanding of the impact of climate change on agricultural irrigation and water supply reliability at the micro scale, and provides a scientific basis for relevant policy making.
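
    A Tobit fit can be sketched directly as censored maximum likelihood; here the response (supply reliability) is right-censored at 1.0, and all data and coefficients are synthetic assumptions, not the paper's estimates:

    ```python
    # Tobit (censored regression) sketch by maximum likelihood:
    # the latent reliability is observed only up to the censoring point 1.0.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    n = 400
    temp = rng.normal(0, 1, n)                  # standardized temperature
    precip = rng.normal(0, 1, n)                # standardized precipitation
    latent = 0.8 - 0.10 * temp + 0.15 * precip + rng.normal(0, 0.1, n)
    y = np.minimum(latent, 1.0)                 # observed, censored at 1

    X = np.column_stack([np.ones(n), temp, precip])

    def neg_loglik(theta):
        beta, log_sigma = theta[:-1], theta[-1]
        sigma = np.exp(log_sigma)
        mu = X @ beta
        cens = y >= 1.0
        ll_unc = norm.logpdf(y[~cens], mu[~cens], sigma)   # uncensored part
        ll_cen = norm.logsf(1.0, mu[cens], sigma)          # P(latent >= 1)
        return -(ll_unc.sum() + ll_cen.sum())

    res = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
    print("beta:", res.x[:-1], "sigma:", np.exp(res.x[-1]))
    ```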

  15. Projecting technology change to improve space technology planning and systems management

    NASA Astrophysics Data System (ADS)

    Walk, Steven Robert

    2011-04-01

    Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.

  16. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication, by a different author, of an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit and reimplemented in NetLogo, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  17. Do downscaled general circulation models reliably simulate historical climatic conditions?

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
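
    The screening step reduces to a two-sample KS test per variable and location; a sketch with synthetic monthly precipitation (distribution parameters assumed):

    ```python
    # Two-sample Kolmogorov-Smirnov check of whether a downscaled simulation
    # reproduces the observed (gridded station) distribution of monthly PPT.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(6)
    ppt_observed = rng.gamma(2.0, 30.0, 672)        # 1950-2005 monthly values
    ppt_downscaled = rng.gamma(2.1, 29.0, 672)      # one SD GCM, hypothetical

    stat, p = ks_2samp(ppt_observed, ppt_downscaled)
    verdict = "retain" if p >= 0.05 else "reject"
    print(f"KS D = {stat:.3f}, p = {p:.3f} -> {verdict} this GCM for PPT")
    ```

    Repeating the test for PPT, TAVE, and RUN at each grid cell yields the per-location pass/fail maps used to screen GCMs.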

  18. Enhancing recovery rates: lessons from year one of IAPT.

    PubMed

    Gyani, Alex; Shafran, Roz; Layard, Richard; Clark, David M

    2013-09-01

    The English Improving Access to Psychological Therapies (IAPT) initiative aims to make evidence-based psychological therapies for depression and anxiety disorders more widely available in the National Health Service (NHS). Thirty-two IAPT services based on a stepped care model were established in the first year of the programme. We report on the reliable recovery rates achieved by patients treated in the services and identify predictors of recovery at patient level, at service level, and as a function of compliance with National Institute for Health and Care Excellence (NICE) treatment guidelines. Data from 19,395 patients who were clinical cases at intake, attended at least two sessions, had at least two outcome scores and had completed their treatment during the period were analysed. Outcome was assessed with the patient health questionnaire depression scale (PHQ-9) and the anxiety scale (GAD-7). Data completeness was high for a routine cohort study: over 91% of treated patients had paired (pre-post) outcome scores. Overall, 40.3% of patients were reliably recovered at post-treatment, 63.7% showed reliable improvement and 6.6% showed reliable deterioration. Most patients received treatments that were recommended by NICE; when a treatment not recommended by NICE was provided, recovery rates were reduced. Service characteristics that predicted higher reliable recovery rates were: a high average number of therapy sessions; higher step-up rates among individuals who started with low intensity treatment; larger services; and a larger proportion of experienced staff. Compliance with the IAPT clinical model is associated with enhanced rates of reliable recovery. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
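
    "Reliable improvement" is typically operationalized with a Jacobson-Truax style reliable change index; a sketch with assumed scale statistics (not IAPT's exact constants):

    ```python
    # Reliable change sketch: the pre-post difference must exceed the
    # measurement error of the scale to count as reliable improvement.
    import math

    def reliable_change(pre, post, sd_baseline, reliability):
        """Jacobson-Truax style reliable change index (inputs assumed)."""
        se_meas = sd_baseline * math.sqrt(1 - reliability)
        se_diff = math.sqrt(2 * se_meas**2)
        return (pre - post) / se_diff            # > 1.96 => reliable improvement

    # Hypothetical PHQ-9 example: SD and test-retest reliability are assumed.
    rci = reliable_change(pre=18, post=9, sd_baseline=5.5, reliability=0.88)
    print(f"RCI = {rci:.2f} -> reliable improvement: {rci > 1.96}")
    ```

    Reliable recovery additionally requires the post-treatment score to cross the clinical caseness cutoff of the scale.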

  19. Cost-effective solutions to maintaining smart grid reliability

    NASA Astrophysics Data System (ADS)

    Qin, Qiu

    As aging power systems increasingly work closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulted mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with the criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For distribution systems, a hierarchical framework is presented, including a high-level recloser allocation scheme and a low-level recloser placement scheme. The impacts of recloser placement on reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of the reliability indices.

  20. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha-particle hits, and other instability and leakage mechanisms. Improving long-term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. The results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability that can be accomplished during the early stages of product development.

  1. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour); observational data are thus nearly useless for runoff predictions, and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions to be carried out, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network providing the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model reliability for longer-term predictions, as well as different time lags to see how past information can improve results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.

  2. LACIE: Wheat yield models for the USSR

    NASA Technical Reports Server (NTRS)

    Sakamoto, C. M.; Leduc, S. K.

    1977-01-01

    A quantitative model determining the relationship between weather conditions and wheat yield in the U.S.S.R. was studied to provide early, reliable forecasts of the size of the U.S.S.R. wheat harvest. Separate models are developed for spring wheat and for winter wheat. Differences in yield potential and in responses to stress conditions and cultural improvements necessitate models for each class.

  3. Reliability of infarct volumetry: Its relevance and the improvement by a software-assisted approach.

    PubMed

    Friedländer, Felix; Bohmann, Ferdinand; Brunkhorst, Max; Chae, Ju-Hee; Devraj, Kavi; Köhler, Yvette; Kraft, Peter; Kuhn, Hannah; Lucaciu, Alexandra; Luger, Sebastian; Pfeilschifter, Waltraud; Sadler, Rebecca; Liesz, Arthur; Scholtyschik, Karolina; Stolz, Leonie; Vutukuri, Rajkumar; Brunkhorst, Robert

    2017-08-01

    Despite the efficacy of neuroprotective approaches in animal models of stroke, their translation from bench to bedside has so far failed. One reason is presumed to be the low quality of preclinical study design, leading to bias and low a priori power. In this study, we propose that the key read-out of experimental stroke studies, the volume of the ischemic damage as commonly measured by free-hand planimetry of TTC-stained brain sections, is subject to an unrecognized low inter-rater and test-retest reliability, with strong implications for statistical power and bias. As an alternative approach, we suggest a simple, open-source, software-assisted method that takes advantage of automatic thresholding techniques. The validity of the automated method for tMCAO infarct volumetry, and the improvement in reliability it brings, are demonstrated. In addition, we show the probable consequences of increased reliability for precision, p-values, effect inflation, and power calculation, exemplified by a systematic analysis of experimental stroke studies published in the year 2015. Our study reveals an underappreciated quality problem in translational stroke research and suggests that software-assisted infarct volumetry might help to improve reproducibility and therefore the robustness of bench-to-bedside translation.
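
    A minimal sketch of the software-assisted approach: an automatic (Otsu) threshold replaces free-hand outlining, making the volume estimate rater-independent. The image, intensity convention, and resolution below are all synthetic assumptions:

    ```python
    # Software-assisted infarct volumetry sketch: an automatic (Otsu)
    # threshold separates pale infarct from stained tissue on a section.
    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(7)
    # Synthetic TTC section: stained tissue ~0.3, pale infarct patch ~0.8.
    section = rng.normal(0.3, 0.05, (256, 256))
    section[60:120, 80:160] = rng.normal(0.8, 0.05, (60, 80))

    thr = threshold_otsu(section)                  # reproducible, rater-free cut
    infarct_mask = section > thr

    pixel_area_mm2 = 0.01                          # assumed in-plane resolution
    slice_thickness_mm = 1.0                       # assumed section thickness
    volume_mm3 = infarct_mask.sum() * pixel_area_mm2 * slice_thickness_mm
    print(f"threshold {thr:.2f}, infarct volume ~ {volume_mm3:.1f} mm^3 per section")
    ```

    Summing the per-section volumes over all sections of a brain gives the total infarct volume; because the threshold is computed, not drawn, test-retest and inter-rater variability largely disappear.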

  4. Distributed collaborative response surface method for mechanical dynamic assembly reliability design

    NASA Astrophysics Data System (ADS)

    Bai, Guangchen; Fei, Chengwei

    2013-11-01

    Because of the randomness of the many factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to be accomplished from a probabilistic perspective. To improve the accuracy and efficiency of dynamic assembly relationship reliability analysis, a mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on quadratic response surface functions and verified by the assembly relationship reliability analysis of aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Comparison of the DCRSM, the traditional response surface method (RSM) and the Monte Carlo method (MCM) shows that the DCRSM is not only able to accomplish computational tasks that are infeasible for the other methods when the number of simulations exceeds 100,000, but its computational precision is also basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, the computational efficiency of the DCRSM is up to about 188 times that of the MCM and 55 times that of the RSM for 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. Thus, the proposed research provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.

  5. A reliability study on brain activation during active and passive arm movements supported by an MRI-compatible robot.

    PubMed

    Estévez, Natalia; Yu, Ningbo; Brügger, Mike; Villiger, Michael; Hepp-Reymond, Marie-Claude; Riener, Robert; Kollias, Spyros

    2014-11-01

    In neurorehabilitation, longitudinal assessment of arm movement related brain function in patients with motor disability is challenging due to variability in task performance. MRI-compatible robots monitor and control task performance, yielding more reliable evaluation of brain function over time. The main goals of the present study were first to define the brain network activated while performing active and passive elbow movements with an MRI-compatible arm robot (MaRIA) in healthy subjects, and second to test the reproducibility of this activation over time. For the fMRI analysis two models were compared. In model 1 movement onset and duration were included, whereas in model 2 force and range of motion were added to the analysis. Reliability of brain activation was tested with several statistical approaches applied on individual and group activation maps and on summary statistics. The activated network included mainly the primary motor cortex, primary and secondary somatosensory cortex, superior and inferior parietal cortex, medial and lateral premotor regions, and subcortical structures. Reliability analyses revealed robust activation for active movements with both fMRI models and all the statistical methods used. Imposed passive movements also elicited mainly robust brain activation for individual and group activation maps, and reliability was improved by including additional force and range of motion using model 2. These findings demonstrate that the use of robotic devices, such as MaRIA, can be useful to reliably assess arm movement related brain activation in longitudinal studies and may contribute in studies evaluating therapies and brain plasticity following injury in the nervous system.

  6. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address the reliability and technical problems of microgrids (MGs), based on designing a number of self-adequate autonomous sub-MGs via an MG clustering approach. A multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement and reliability enhancement are the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA is provided, based on the genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Improving Retention and Enrollment Forecasting in Part-Time Programs

    ERIC Educational Resources Information Center

    Shapiro, Joel; Bray, Christopher

    2011-01-01

    This article describes a model that can be used to analyze student enrollment data and can give insights for improving retention of part-time students and refining institutional budgeting and planning efforts. Adult higher-education programs are often challenged in that part-time students take courses less reliably than full-time students. For…

  8. 78 FR 21879 - Improving 9-1-1 Reliability; Reliability and Continuity of Communications Networks, Including...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... maps? What are the public safety and homeland security implications of public disclosure of key network... 13-33] Improving 9-1-1 Reliability; Reliability and Continuity of Communications Networks, Including... improve the reliability and resiliency of the Nation's 9-1-1 networks. The Notice of Proposed Rulemaking...

  9. Using meta-quality to assess the utility of volunteered geographic information for science.

    PubMed

    Langley, Shaun A; Messina, Joseph P; Moore, Nathan

    2017-11-06

    Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists collaborating with non-scientists. The abundance of mobile phones and other wireless forms of communication opens up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteered data are emerging as critical considerations for using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in the surveillance of tsetse flies, a public health concern as the primary vector of African trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed, but the quality of VGI will improve as more data are collected, and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and provides a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information, given that the data are reliable.

  10. The Effects of Q-Matrix Design on Classification Accuracy in the Log-Linear Cognitive Diagnosis Model.

    PubMed

    Madison, Matthew J; Bradshaw, Laine P

    2015-06-01

    Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.

  11. 75 FR 2523 - Office of Innovation and Improvement; Overview Information; Arts in Education Model Development...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...

  12. Can training improve the quality of inferences made by raters in competency modeling? A quasi-experiment.

    PubMed

    Lievens, Filip; Sanchez, Juan I

    2007-05-01

    A quasi-experiment was conducted to investigate the effects of frame-of-reference training on the quality of competency modeling ratings made by consultants. Human resources consultants from a large consulting firm were randomly assigned to either a training or a control condition. The discriminant validity, interrater reliability, and accuracy of the competency ratings were significantly higher in the training group than in the control group. Further, the discriminant validity and interrater reliability of competency inferences were highest among an additional group of trained consultants who also had competency modeling experience. Together, these results suggest that procedural interventions such as rater training can significantly enhance the quality of competency modeling. 2007 APA, all rights reserved

  13. Microgrid Analysis Tools Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Antonio; Haase, Scott G; Mathur, Shivani

    2018-03-05

    The over-arching goal of the Alaska Microgrid Partnership is to reduce the total imported fuel used to secure all energy services in Alaska's remote microgrid communities by at least 50%, without increasing system life-cycle costs, while also improving overall system reliability, security, and resilience. One goal of the Alaska Microgrid Partnership is to investigate whether a combination of energy efficiency and high-contribution (renewable energy) power systems can reduce total imported energy usage by 50% while reducing life-cycle costs and improving reliability and resiliency. This presentation provides an overview of four renewable energy optimization tools: the Distributed Energy Resources Customer Adoption Model (DER-CAM), the Microgrid Design Toolkit (MDT), the Renewable Energy Optimization (REopt) tool, and the Hybrid Optimization Model for Electric Renewables (HOMER). The information is drawn from the respective tool websites, from tool developers, and from the authors' experience.

  14. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation

    NASA Astrophysics Data System (ADS)

    Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.

    2012-07-01

    Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  15. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.

    PubMed

    Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F

    2012-07-07

    Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  16. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assessing the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines, such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data and can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches that rely on failure rates derived from similar equipment or on expert judgment.

  17. Critical analysis of 3-D organoid in vitro cell culture models for high-throughput drug candidate toxicity assessments.

    PubMed

    Astashkina, Anna; Grainger, David W

    2014-04-01

    Drug failure due to toxicity indicators remains among the primary reasons for staggering drug attrition rates during clinical studies and post-marketing surveillance. Broader validation and use of improved next-generation 3-D cell culture models are expected to improve the predictive power and effectiveness of drug toxicological predictions. However, after decades of promising research, significant gaps remain in our collective ability to extract quality human toxicity information from in vitro data using 3-D cell and tissue models. Issues, challenges, and future directions for the field to improve drug assay predictive power and the reliability of 3-D models are reviewed. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Improving 1D Site Specific Velocity Profiles for the Kik-Net Network

    NASA Astrophysics Data System (ADS)

    Holt, James; Edwards, Benjamin; Pilz, Marco; Fäh, Donat; Rietbrock, Andreas

    2017-04-01

    Ground motion prediction equations (GMPEs) form the cornerstone of modern seismic hazard assessments. When produced to a high standard they provide reliable estimates of ground motion/spectral acceleration for a given site and earthquake scenario. This information is crucial for engineers to optimise design and for regulators who enforce legal minimum safe design capacities. Classically, GMPEs were built on the assumption that variability around the median model could be treated as aleatory. As understanding improved, it was noted that the propagation could be segregated into the response of the average path from the source and the response of the site, because the heterogeneity of the near-surface lithology is significantly different from that of the bulk path. It was then suggested that a semi-ergodic approach could be taken if the site response could be determined, moving uncertainty from aleatory to epistemic. The determination of reliable site-specific response models is therefore becoming increasingly critical for ground motion models used in engineering practice. Today it is common practice to include proxies for site response, such as Vs30 or site classification, within the scope of a GMPE in an effort to reduce the overall uncertainty of the prediction at a given site. However, these proxies are not always reliable enough to give confident ground motion estimates, owing to the complexity of the near-surface. Other approaches to quantifying the response of the site include detailed numerical simulations (1D/2D/3D; linear, equivalent-linear, non-linear, etc.). To be reliable, however, these require highly detailed and accurate velocity models and, for non-linear analyses, material property models. It is possible to obtain this information through invasive methods, but this is expensive and not feasible for most projects. Here we propose an alternative method to derive reliable velocity profiles (and their uncertainty), calibrated using almost 20 years of recorded data from the Kik-Net network. First, using a reliable subset of sites, the empirical surface-to-borehole (S/B) ratio is calculated in the frequency domain using all events recorded at each site. In a subsequent step, we use numerical simulation to produce 1D SH transfer functions for a suite of stochastic velocity models. Comparing the resulting amplification with the empirical S/B ratio, we find optimal 1D velocity models and their uncertainty. The method will be tested to determine the level of initial information required to obtain a reliable Vs profile (e.g., a starting Vs model, only Vs30, site class, H/V ratio, etc.) and then applied and tested against data from other regions using site-to-reference or empirical spectral model amplification.
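
    The empirical surface-to-borehole ratio at the heart of the proposed calibration is straightforward to compute. The Python sketch below assumes the simplest form, a smoothed Fourier amplitude ratio for a single surface/borehole record pair on synthetic data; windowing, stacking over events, and the stochastic velocity-model search described above are omitted.

        import numpy as np

        def surface_borehole_ratio(surface, borehole, dt, smooth=9):
            """Smoothed Fourier amplitude of the surface record divided by
            that of the borehole record (empirical S/B ratio)."""
            freqs = np.fft.rfftfreq(len(surface), dt)
            amp_s = np.abs(np.fft.rfft(surface))
            amp_b = np.abs(np.fft.rfft(borehole))
            kernel = np.ones(smooth) / smooth       # moving-average smoothing
            amp_s = np.convolve(amp_s, kernel, mode="same")
            amp_b = np.convolve(amp_b, kernel, mode="same")
            return freqs, amp_s / np.maximum(amp_b, 1e-12)

        # Synthetic pair: the surface record amplifies the borehole record near 2 Hz
        dt = 0.01
        t = np.arange(0, 20.48, dt)
        rng = np.random.default_rng(7)
        borehole = rng.standard_normal(t.size)
        surface = borehole + 3.0 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.1 * t)
        freqs, ratio = surface_borehole_ratio(surface, borehole, dt)
        print(f"peak amplification near {freqs[np.argmax(ratio)]:.2f} Hz")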

  19. Reliability analysis of the solar array based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Jianing, Wu; Shaoze, Yan

    2011-07-01

    The solar array is an important device used on spacecraft, influencing the quality of in-orbit operation and even the success of the launch. This paper analyzes the reliability of the mechanical system and identifies its most vital subsystem. A fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant to preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for redesigning the solar array and for reliability growth planning.
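
    The record gives the logic of the quantification but not the tree itself, so the sketch below invents basic events and minimal cut sets for a hinge line. It shows the standard step the abstract describes: once Boolean reduction yields minimal cut sets, the top-event probability follows by inclusion-exclusion over independent basic events.

        from itertools import combinations

        # Hypothetical basic-event probabilities for a deployment hinge
        p = {"seal": 1e-3, "spring_torque": 5e-4, "thermal": 2e-4, "friction": 8e-4}

        # Hypothetical minimal cut sets: the top event occurs if every
        # basic event in at least one cut set occurs.
        cut_sets = [{"seal"}, {"spring_torque", "friction"}, {"thermal", "friction"}]

        def event_product(events):
            prob = 1.0
            for e in events:
                prob *= p[e]
            return prob

        def top_event_probability(cut_sets):
            """Exact union probability by inclusion-exclusion."""
            total = 0.0
            for k in range(1, len(cut_sets) + 1):
                for combo in combinations(cut_sets, k):
                    total += (-1) ** (k + 1) * event_product(set().union(*combo))
            return total

        print(f"P(top event) = {top_event_probability(cut_sets):.3e}")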

  20. Development and Validation of the Primary Care Team Dynamics Survey

    PubMed Central

    Song, Hummy; Chien, Alyna T; Fisher, Josephine; Martin, Julia; Peters, Antoinette S; Hacker, Karen; Rosenthal, Meredith B; Singer, Sara J

    2015-01-01

    Objective To develop and validate a survey instrument designed to measure team dynamics in primary care. Data Sources/Study Setting We studied 1,080 physician and nonphysician health care professionals working at 18 primary care practices participating in a learning collaborative aimed at improving team-based care. Study Design We developed a conceptual model and administered a cross-sectional survey addressing team dynamics, and we assessed reliability and discriminant validity of survey factors and the overall survey's goodness-of-fit using structural equation modeling. Data Collection We administered the survey between September 2012 and March 2013. Principal Findings Overall response rate was 68 percent (732 respondents). Results support a seven-factor model of team dynamics, suggesting that conditions for team effectiveness, shared understanding, and three supportive processes are associated with acting and feeling like a team and, in turn, perceived team effectiveness. This model demonstrated adequate fit (goodness-of-fit index: 0.91), scale reliability (Cronbach's alphas: 0.71–0.91), and discriminant validity (average factor correlations: 0.49). Conclusions It is possible to measure primary care team dynamics reliably using a 29-item survey. This survey may be used in ambulatory settings to study teamwork and explore the effect of efforts to improve team-based care. Future studies should demonstrate the importance of team dynamics for markers of team effectiveness (e.g., work satisfaction, care quality, clinical outcomes). PMID:25423886

  1. Development and validation of the primary care team dynamics survey.

    PubMed

    Song, Hummy; Chien, Alyna T; Fisher, Josephine; Martin, Julia; Peters, Antoinette S; Hacker, Karen; Rosenthal, Meredith B; Singer, Sara J

    2015-06-01

    To develop and validate a survey instrument designed to measure team dynamics in primary care. We studied 1,080 physician and nonphysician health care professionals working at 18 primary care practices participating in a learning collaborative aimed at improving team-based care. We developed a conceptual model and administered a cross-sectional survey addressing team dynamics, and we assessed reliability and discriminant validity of survey factors and the overall survey's goodness-of-fit using structural equation modeling. We administered the survey between September 2012 and March 2013. Overall response rate was 68 percent (732 respondents). Results support a seven-factor model of team dynamics, suggesting that conditions for team effectiveness, shared understanding, and three supportive processes are associated with acting and feeling like a team and, in turn, perceived team effectiveness. This model demonstrated adequate fit (goodness-of-fit index: 0.91), scale reliability (Cronbach's alphas: 0.71-0.91), and discriminant validity (average factor correlations: 0.49). It is possible to measure primary care team dynamics reliably using a 29-item survey. This survey may be used in ambulatory settings to study teamwork and explore the effect of efforts to improve team-based care. Future studies should demonstrate the importance of team dynamics for markers of team effectiveness (e.g., work satisfaction, care quality, clinical outcomes). © Health Research and Educational Trust.
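
    The scale reliabilities reported above (Cronbach's alphas of 0.71-0.91) follow from the standard item-variance formula, sketched below in Python; the ratings are made up for illustration.

        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items matrix of scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        # Invented 5-point ratings from six respondents on a 4-item factor
        scores = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
                  [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
        print(f"alpha = {cronbach_alpha(scores):.2f}")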

  2. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    NASA Astrophysics Data System (ADS)

    He, Qing; Li, Hong

    The belt conveyor is one of the most important devices for transporting bulk-solid material over long distances. Dynamic analysis is key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible, and it is essential for studying dynamic properties, improving efficiency and productivity, and guaranteeing safe, reliable and stable running. The dynamic research on, and applications of, large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work should focus on dynamic analysis, modeling and simulation of the main components and the whole system; nonlinear modeling and simulation; and vibration analysis of large-scale conveyor systems.

  3. Standardizing an approach to the evaluation of implementation science proposals.

    PubMed

    Crable, Erika L; Biancarelli, Dea; Walkey, Allan J; Allen, Caitlin G; Proctor, Enola K; Drainoni, Mari-Lynn

    2018-05-29

    The fields of implementation and improvement sciences have experienced rapid growth in recent years. However, research that seeks to inform health care change may have difficulty translating core components of implementation and improvement sciences within the traditional paradigms used to evaluate efficacy and effectiveness research. A review of implementation and improvement sciences grant proposals within an academic medical center using a traditional National Institutes of Health framework highlighted the need for tools that could assist investigators and reviewers in describing and evaluating proposed implementation and improvement sciences research. We operationalized existing recommendations for writing implementation science proposals as the ImplemeNtation and Improvement Science Proposals Evaluation CriTeria (INSPECT) scoring system. The resulting system was applied to pilot grants submitted to a call for implementation and improvement science proposals at an academic medical center. We evaluated the reliability of the INSPECT system using Krippendorff's alpha coefficients and explored its utility for characterizing common deficiencies in implementation research proposals. We scored 30 research proposals using the INSPECT system. Proposals received a median cumulative score of 7 out of a possible 30. Across the individual elements of INSPECT, proposals scored highest on the criterion rating evidence of a care or quality gap. Proposals generally performed poorly on all other criteria. Most proposals received scores of 0 for the criteria identifying an evidence-based practice or treatment (50%), conceptual model and theoretical justification (70%), setting's readiness to adopt new services/treatments/programs (54%), implementation strategy/process (67%), and measurement and analysis (70%). Inter-coder reliability testing showed excellent reliability overall (Krippendorff's alpha coefficient 0.88), with scores ranging from 0.77 to 0.99 for individual elements. The INSPECT system thus provides new scoring criteria with a high degree of inter-rater reliability and utility for evaluating the quality of implementation and improvement sciences grant proposals.
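
    The study reports Krippendorff's alpha, which handles many coders, missing data, and different measurement levels. As a simpler stand-in for the two-coder nominal case, the sketch below computes Cohen's kappa, a closely related chance-corrected agreement statistic; the example ratings are invented.

        from collections import Counter

        def cohens_kappa(r1, r2):
            """Chance-corrected agreement for two raters on nominal codes."""
            n = len(r1)
            observed = sum(a == b for a, b in zip(r1, r2)) / n
            c1, c2 = Counter(r1), Counter(r2)
            expected = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / n**2
            return (observed - expected) / (1 - expected)

        # Two coders scoring ten proposals on one INSPECT element (0-2 scale)
        coder_a = [0, 2, 1, 0, 0, 2, 1, 0, 2, 1]
        coder_b = [0, 2, 1, 0, 1, 2, 1, 0, 2, 1]
        print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")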

  4. Effects of DEM source and resolution on WEPP hydrologic and erosion simulation: A case study of two forest watersheds in northern Idaho

    Treesearch

    J. X. Zhang; J. Q. Wu; K. Chang; W. J. Elliot; S. Dun

    2009-01-01

    The recent modification of the Water Erosion Prediction Project (WEPP) model has improved its applicability to hydrology and erosion modeling in forest watersheds. To generate reliable topographic and hydrologic inputs for the WEPP model, carefully selecting digital elevation models (DEMs) with appropriate resolution and accuracy is essential because topography is a...

  5. Identification of reliable gridded reference data for statistical downscaling methods in Alberta

    NASA Astrophysics Data System (ADS)

    Eum, H. I.; Gupta, A.

    2017-12-01

    Climate models provide essential information for assessing the impacts of climate change at regional and global scales. However, statistical downscaling methods are applied to prepare climate model data for various applications, such as hydrologic and ecologic modelling at the watershed scale. As the reliability and the spatial and temporal resolution of statistically downscaled climate data depend mainly on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for the key climate variables that are the main inputs to regional modelling systems. However, inconsistencies among these climate products, for example different combinations of climate variables, varying data domains and data lengths, and data accuracy that varies with the physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for various environmental studies and models. Employing several observation-based daily gridded climate products available in the public domain, namely thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis), and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparison with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations of precipitation and minimum and maximum temperature over the province of Alberta. Based on the performance of the climate products at the AHCCD stations, we ranked the reliability of these publicly available climate products for station elevations discretized into several classes. From the rank of the climate products in each elevation class, we identified the most reliable climate products as a function of the elevation of a target point. A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of the grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections can be significantly improved.

  6. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGES

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...

    2016-02-26

    A series of analyses, along with field validations, that evaluate the efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design through real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration-for-restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration-for-restoration calculations are used. The evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  7. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat

    A series of analyses, along with field validations, that evaluate the efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design through real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration-for-restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration-for-restoration calculations are used. The evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
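
    As a toy illustration of the storm-condition evaluation mentioned in the two records above, the Python sketch below runs a Monte Carlo simulation in which automation only shortens switching time; it is not the authors' reconfiguration-for-restoration model, and every rate and time in it is invented.

        import random

        def storm_sim(n_trials=20000, n_feeders=14, p_fault=0.15, automated=False):
            """Toy Monte Carlo: average interruption minutes per storm.
            Load on a faulted feeder section waits for repair; transferable
            load waits only for switching, which automation speeds up."""
            switch = 5.0 if automated else 60.0     # minutes to reconfigure
            total = 0.0
            for _ in range(n_trials):
                for _ in range(n_feeders):
                    if random.random() < p_fault:
                        repair = random.uniform(60.0, 240.0)   # crew repair, minutes
                        total += 0.4 * repair + 0.6 * switch   # 40% not transferable
            return total / n_trials

        random.seed(1)
        print(f"manual switching:    {storm_sim():.0f} min/storm")
        print(f"automated switching: {storm_sim(automated=True):.0f} min/storm")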

  8. Description of Data Acquisition Efforts

    DOT National Transportation Integrated Search

    1999-09-01

    As part of the overall strategy of refining and improving the existing transportation and air-quality modeling framework, the current project focuses extensively on acquiring disaggregate and reliable data for analysis. In this report, we discuss the...

  9. Impact of Market Behavior, Fleet Composition, and Ancillary Services on Revenue Sufficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany

    This presentation provides an overview of new and ongoing NREL research that aims to improve our understanding of reliability and revenue sufficiency challenges through modeling tools within a markets framework.

  10. Towards improving software security by using simulation to inform requirements and conceptual design

    DOE PAGES

    Nutaro, James J.; Allgood, Glenn O.; Kuruganti, Teja

    2015-06-17

    We illustrate the use of modeling and simulation early in the system life cycle to improve security and reduce costs. The models that we develop for this illustration are inspired by problems in reliability analysis and supervisory control, for which similar models are used to quantify failure probabilities and rates. In the context of security, we propose that models of this general type can be used to understand trades between risk and cost while writing system requirements and during conceptual design, and thereby significantly reduce the need for expensive security corrections after a system enters operation.

  11. Improving Learner Handovers in Medical Education.

    PubMed

    Warm, Eric J; Englander, Robert; Pereira, Anne; Barach, Paul

    2017-07-01

    Multiple studies have demonstrated that the information included in the Medical Student Performance Evaluation fails to reliably predict medical students' future performance. This faulty transfer of information can lead to harm when poorly prepared students fail out of residency or, worse, are shuttled through the medical education system without an honest accounting of their performance. Such poor learner handovers likely arise from two root causes: (1) the absence of agreed-on outcomes of training and/or accepted assessments of those outcomes, and (2) the lack of standardized ways to communicate the results of those assessments. To improve the current learner handover situation, an authentic, shared mental model of competency is needed; high-quality tools to assess that competency must be developed and tested; and transparent, reliable, and safe ways to communicate this information must be created. To achieve these goals, the authors propose using a learner handover process modeled after a patient handover process. The CLASS model includes a description of the learner's Competency attainment, a summary of the Learner's performance, an Action list and statement of Situational awareness, and Synthesis by the receiving program. This model also includes coaching oriented towards improvement along the continuum of education and care. Just as studies have evaluated patient handover models using metrics that matter most to patients, studies must evaluate this learner handover model using metrics that matter most to providers, patients, and learners.

  12. Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary.

    PubMed

    Petscher, Yaacov; Mitchell, Alison M; Foorman, Barbara R

    2015-01-01

    A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared with a conditional item response model that includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5% when using the conditional item response model, with greater improvements for those who were of average or high ability. Implications for measurement models of speeded assessments are discussed.

  13. Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary

    PubMed Central

    Petscher, Yaacov; Mitchell, Alison M.; Foorman, Barbara R.

    2016-01-01

    A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared with a conditional item response model that includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5% when using the conditional item response model, with greater improvements for those who were of average or high ability. Implications for measurement models of speeded assessments are discussed. PMID:27721568
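
    The precision gain is easiest to see through the usual IRT bookkeeping: the standard error of an ability estimate is the inverse square root of the test information. The Python sketch below shows that bookkeeping for a plain 2PL model with invented item parameters; the conditional model in the paper additionally lets response time sharpen the information, which is not reproduced here.

        import numpy as np

        def p_correct(theta, a, b):
            """2PL item response function."""
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        def standard_error(theta, a, b):
            """SE(theta) from the Fisher test information I = sum a^2 * P * Q."""
            prob = p_correct(theta, a, b)
            info = np.sum(a**2 * prob * (1 - prob))
            return 1.0 / np.sqrt(info)

        a = np.array([1.2, 0.9, 1.5, 1.1, 0.8])    # invented discriminations
        b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # invented difficulties
        for theta in (-1.0, 0.0, 1.0):
            print(f"theta={theta:+.1f}  SE={standard_error(theta, a, b):.2f}")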

  14. Quantifying the Economic and Grid Reliability Impacts of Improved Wind Power Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Martinez-Anido, Carlo Brancucci; Wu, Hongyu

    Wind power forecasting is an important tool in power system operations to address variability and uncertainty. Forecasting accurately is important for reducing the occurrence and length of curtailment, enhancing market efficiency, and improving the operational reliability of the bulk power system. This research quantifies the value of wind power forecasting improvements in the IEEE 118-bus test system as modified to emulate the generation mixes of the Midcontinent, California, and New England independent system operator balancing authority areas. To measure the economic value, a commercially available production cost modeling tool was used to simulate the multi-timescale unit commitment (UC) and economic dispatch process for calculating the cost savings and curtailment reductions. To measure the reliability improvements, an in-house tool, FESTIV, was used to calculate the system's area control error and the North American Electric Reliability Corporation Control Performance Standard 2. The approach allowed scientific reproducibility of results and cross-validation of the tools. A total of 270 scenarios were evaluated to accommodate the variation of three factors: generation mix, wind penetration level, and wind forecasting improvements. The modified IEEE 118-bus systems utilized 1 year of data at multiple timescales, including day-ahead UC, 4-hour-ahead UC, and 5-min real-time dispatch. The value of improved wind power forecasting was found to be strongly tied to the conventional generation mix, the existence of energy storage devices, and the penetration level of wind energy. The simulation results demonstrate that wind power forecasting brings clear benefits to power system operations.

  15. Temporal similarity perfusion mapping: A standardized and model-free method for detecting perfusion deficits in stroke

    PubMed Central

    Song, Sunbin; Luby, Marie; Edwardson, Matthew A.; Brown, Tyler; Shah, Shreyansh; Cox, Robert W.; Saad, Ziad S.; Reynolds, Richard C.; Glen, Daniel R.; Cohen, Leonardo G.; Latour, Lawrence L.

    2017-01-01

    Introduction Interpretation of the extent of perfusion deficits in stroke MRI is highly dependent on the method used for analyzing the perfusion-weighted signal intensity time-series after gadolinium injection. In this study, we introduce a new model-free standardized method of temporal similarity perfusion (TSP) mapping for perfusion deficit detection and test its ability and reliability in acute ischemia. Materials and methods Forty patients with an ischemic stroke or transient ischemic attack were included. Two blinded readers compared real-time generated interactive maps and automatically generated TSP maps to traditional TTP/MTT maps for the presence of perfusion deficits. Lesion volumes were compared for volumetric inter-rater reliability, spatial concordance between perfusion deficits and healthy tissue, and contrast-to-noise ratio (CNR). Results Perfusion deficits were correctly detected in all patients with acute ischemia. Inter-rater reliability was higher for TSP than for TTP/MTT maps, and lesion volumes depicted on TSP and on TTP/MTT were highly correlated (r(18) = 0.73, p<0.0003); however, the effective CNR was greater for TSP than for TTP (352.3 vs. 283.5, t(19) = 2.6, p<0.03) and MTT (228.3, t(19) = 2.8, p<0.03). Discussion TSP maps provide a reliable and robust model-free method for accurate perfusion deficit detection and improve lesion delineation compared to traditional methods. This simple method is also computationally faster and more easily automated than model-based methods. It can potentially improve the speed and accuracy of perfusion deficit detection for acute stroke treatment and clinical trial inclusion decision-making. PMID:28973000
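
    The record does not give the exact similarity metric, so the sketch below assumes the simplest model-free choice: the Pearson correlation of each voxel's signal-time course with a reference curve, with low similarity flagging delayed, possibly hypoperfused tissue. The function name and the synthetic data are illustrative only.

        import numpy as np

        def similarity_map(timeseries, reference):
            """Pearson correlation of each voxel's time course with a
            reference curve. timeseries: (n_voxels, n_timepoints)."""
            ts = timeseries - timeseries.mean(axis=1, keepdims=True)
            ref = reference - reference.mean()
            return (ts @ ref) / (np.linalg.norm(ts, axis=1) * np.linalg.norm(ref))

        rng = np.random.default_rng(0)
        ref = np.exp(-0.5 * ((np.arange(40) - 15) / 4.0) ** 2)  # bolus-like curve
        healthy = ref + 0.1 * rng.standard_normal((5, 40))
        delayed = np.roll(ref, 8) + 0.1 * rng.standard_normal((5, 40))
        r = similarity_map(np.vstack([healthy, delayed]), ref)
        print(np.round(r, 2))   # low values flag the delayed voxels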

  16. Can generic knee joint models improve the measurement of osteoarthritic knee kinematics during squatting activity?

    PubMed

    Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A

    2017-01-01

    Knee joint kinematics derived from multi-body optimisation (MBO) still requires evaluation. The objective of this study was to corroborate model-derived kinematics of osteoarthritic knees obtained using four generic knee joint models used in musculoskeletal modelling - spherical, hinge, degree-of-freedom coupling curves and parallel mechanism - against reference knee kinematics measured by stereo-radiography. Root mean square errors ranged from 0.7° to 23.4° for knee rotations and from 0.6 to 9.0 mm for knee displacements. Model-derived knee kinematics computed from generic knee joint models was inaccurate. Future developments and experiments should improve the reliability of osteoarthritic knee models in MBO and musculoskeletal modelling.

  17. Assuring Electronics Reliability: What Could and Should Be Done Differently

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    The following “ten commandments” for the predicted and quantified reliability of aerospace electronic and photonic products are addressed and discussed: 1) The best product is the best compromise between the needs for reliability, cost effectiveness and time-to-market; 2) Reliability cannot be low, need not be higher than necessary, but has to be adequate for a particular product; 3) When reliability is imperative, the ability to quantify it is a must, especially if optimization is considered; 4) One cannot design a product with quantified, optimized and assured reliability by limiting the effort to highly accelerated life testing (HALT), which does not quantify reliability; 5) Reliability is conceived at the design stage and should be taken care of, first of all, at this stage, when a “genetically healthy” product should be created; reliability evaluations and assurances cannot be delayed until the product is fabricated and shipped to the customer, i.e., cannot be left to the prognostics-and-health-monitoring/managing (PHM) stage; it is too late at this stage to change the design or the materials for improved reliability; that is why, when reliability is imperative, users re-qualify parts to assess their lifetime and use redundancy to build a highly reliable system out of insufficiently reliable components; 6) Design, fabrication, qualification and PHM efforts should consider and be specific for particular products and their most likely actual, or at least anticipated, application(s); 7) Probabilistic design for reliability (PDfR) is an effective means for improving the state of the art in the field: nothing is perfect, and the difference between an unreliable product and a robust one is “merely” the probability of failure (PoF); 8) Highly cost-effective and highly focused failure-oriented accelerated testing (FOAT), geared to a particular pre-determined reliability model and aimed at understanding the physics of the failure anticipated by this model, is an important constituent part of the PDfR effort; 9) Predictive modeling (PM) is another important constituent of the PDfR approach; in combination with FOAT, it is a powerful means to carry out sensitivity analyses (SA) and to quantify and nearly eliminate failures (the “principle of practical confidence”); 10) Consistent, comprehensive and physically meaningful PDfR can effectively contribute to the most feasible and effective qualification test (QT) methodologies, practices and specifications. The general concepts addressed in the paper are illustrated by numerical examples. It is concluded that although the suggested concept is promising and fruitful, further research, refinement and validation are needed before it becomes widely accepted by the engineering community and implemented in practice. It is important that this novel approach be introduced gradually, whenever feasible and appropriate, in addition to, and in some situations even instead of, the currently employed types and modifications of the forty-year-old HALT.
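
    Commandment 7's quantified probability of failure can be illustrated with the textbook stress-strength interference model for independent normal variables; the Python sketch below is that classical formula with invented numbers, not the author's FOAT-calibrated models.

        from math import erf, sqrt

        def probability_of_failure(mu_stress, sd_stress, mu_strength, sd_strength):
            """Stress-strength interference for independent normals:
            PoF = P(strength < stress) = Phi(-beta)."""
            beta = (mu_strength - mu_stress) / sqrt(sd_stress**2 + sd_strength**2)
            phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
            return beta, phi(-beta)

        beta, pof = probability_of_failure(400.0, 40.0, 600.0, 30.0)
        print(f"safety index beta = {beta:.2f}, PoF = {pof:.2e}")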

  18. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
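
    A minimal Monte Carlo version of such an assessment might look like the Python sketch below, with a toy linear stand-in for the report's second-order response surface; the distributions, units, and coefficients are invented, not the report's data.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000

        # Hypothetical random variables
        load = rng.normal(80.0, 10.0, n)          # applied axial load
        e_fiber = rng.normal(18.5e3, 1.2e3, n)    # fiber-direction modulus
        knockdown = rng.uniform(0.85, 1.0, n)     # imperfection knockdown factor

        # Toy response-surface stand-in: buckling capacity grows with modulus
        capacity = knockdown * (120.0 + 0.004 * (e_fiber - 18.5e3))

        pof = np.mean(capacity <= load)
        print(f"P(failure) ~ {pof:.4f} (Monte Carlo, n = {n})")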

  19. Identification of Extracellular Segments by Mass Spectrometry Improves Topology Prediction of Transmembrane Proteins.

    PubMed

    Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E

    2017-02-13

    Transmembrane proteins play a crucial role in signaling, ion transport, and nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environments of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions and interactions. Here we report a method for the high-throughput determination of the extracellular segments of transmembrane proteins, based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.

  20. A road map for integrating eco-evolutionary processes into biodiversity models.

    PubMed

    Thuiller, Wilfried; Münkemüller, Tamara; Lavergne, Sébastien; Mouillot, David; Mouquet, Nicolas; Schiffers, Katja; Gravel, Dominique

    2013-05-01

    The demand for projections of the future distribution of biodiversity has triggered an upsurge in modelling at the crossroads between ecology and evolution. Despite the enthusiasm around these so-called biodiversity models, most approaches are still criticised for not integrating key processes known to shape species ranges and community structure. Developing an integrative modelling framework for biodiversity distribution promises to improve the reliability of predictions and to give a better understanding of the eco-evolutionary dynamics of species and communities under changing environments. In this article, we briefly review some eco-evolutionary processes and interplays among them, which are essential to provide reliable projections of species distributions and community structure. We identify gaps in theory, quantitative knowledge and data availability hampering the development of an integrated modelling framework. We argue that model development relying on a strong theoretical foundation is essential to inspire new models, manage complexity and maintain tractability. We support our argument with an example of a novel integrated model for species distribution modelling, derived from metapopulation theory, which accounts for abiotic constraints, dispersal, biotic interactions and evolution under changing environmental conditions. We hope such a perspective will motivate exciting and novel research, and challenge others to improve on our proposed approach. © 2013 John Wiley & Sons Ltd/CNRS.

  1. AN IMPROVED MODEL FOR ESTIMATING EMISSIONS OF VOLATILE ORGANIC COMPOUNDS FROM FORESTS IN THE EASTERN UNITED STATES (Journal)

    EPA Science Inventory

    Regional estimates of biogenic volatile organic compound (BVOC) emissions are important inputs for models of atmospheric chemistry and carbon budgets. Since forests are the primary emitters of BVOCs, it is important to develop reliable estimates of their areal coverage and BVOC e...

  2. Will building new reservoirs always help increase the water supply reliability? - insight from a modeling-based global study

    NASA Astrophysics Data System (ADS)

    Zhuang, Y.; Tian, F.; Yigzaw, W.; Hejazi, M. I.; Li, H. Y.; Turner, S. W. D.; Vernon, C. R.

    2017-12-01

    More and more reservoirs are being built or planned to help meet increasing water demand all over the world. However, is building new reservoirs always helpful to water supply? To address this question, the river routing module of the Global Change Assessment Model (GCAM) has been extended with a simple yet physically based reservoir scheme accounting for irrigation, flood control and hydropower operations at each individual reservoir. The new GCAM river routing model has been applied over the global domain with runoff inputs from the Variable Infiltration Capacity model. The simulated streamflow is validated at 150 global river basins where observed streamflow data are available. Model performance improved significantly at 77 basins and worsened at 35. To facilitate the analysis of additional reservoir storage impacts at the basin level, a lumped version of the GCAM reservoir model has been developed, representing a single reservoir at each river basin with the regulation capacity of all reservoirs combined. A Sequent Peak Analysis is used to estimate how much additional reservoir storage is required to satisfy current water demand. For basins with a water deficit, water supply reliability can be improved with additional storage. However, there is a threshold storage value at each basin beyond which reliability stops increasing, suggesting that building new reservoirs beyond this point will not further relieve water stress. These findings can inform the future planning and management of new reservoirs.
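
    Sequent Peak Analysis itself is a short recursion over the inflow record: accumulate the deficit between demand and inflow, never letting it go negative, and the largest accumulated deficit is the required storage. A minimal Python sketch on invented monthly inflows:

        def required_storage(inflows, demand):
            """Sequent Peak Analysis: smallest reservoir storage that
            meets a constant demand for the given inflow sequence."""
            k = k_max = 0.0
            for q in inflows:
                k = max(0.0, k + demand - q)   # running accumulated deficit
                k_max = max(k_max, k)
            return k_max

        # Hypothetical monthly inflows (volume units) against a constant demand
        inflows = [12, 10, 6, 3, 2, 1, 1, 2, 5, 9, 11, 13]
        print(required_storage(inflows, demand=6.0))   # -> 22.0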

  3. Multi-gauge Calibration for modeling the Semi-Arid Santa Cruz Watershed in Arizona-Mexico Border Area Using SWAT

    USGS Publications Warehouse

    Niraula, Rewati; Norman, Laura A.; Meixner, Thomas; Callegary, James B.

    2012-01-01

    In most watershed-modeling studies, flow is calibrated at one monitoring site, usually at the watershed outlet. Like many arid and semi-arid watersheds, the main reach of the Santa Cruz watershed, located on the Arizona-Mexico border, is discontinuous for most of the year except during large flood events, and therefore the flow characteristics at the outlet do not represent the entire watershed. Calibration at multiple locations along the Santa Cruz River is required to improve model reliability. The objective of this study was to best portray surface water flow in this semiarid watershed and to evaluate the effect of multi-gauge calibration on flow predictions. In this study, the Soil and Water Assessment Tool (SWAT) was calibrated at seven monitoring stations, which improved model performance and increased the reliability of simulated flow in the Santa Cruz watershed. The parameters to which flow was most sensitive were the curve number (CN2), the soil evaporation compensation coefficient (ESCO), the threshold water depth in the shallow aquifer for return flow to occur (GWQMN), the base flow alpha factor (Alpha_Bf), and the effective hydraulic conductivity of the soil layer (Ch_K2). In comparison, when the model was calibrated only at the watershed outlet, flow predictions at the other monitoring gauges were inaccurate. This study emphasizes the importance of multi-gauge calibration for developing reliable watershed models in arid and semiarid environments. The developed model, with further calibration of water quality parameters, will be an integral part of the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM), an online decision support tool, to assess the impacts of climate change and urban growth in the Santa Cruz watershed.
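
    The record does not state the calibration objective, so the Python sketch below assumes one common choice: a weighted average of Nash-Sutcliffe efficiencies (NSE) across gauges, which a calibration routine would maximize by adjusting parameters such as CN2 and ESCO. The flows and weights are illustrative.

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency at one gauge (1 is a perfect fit)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def multi_gauge_objective(obs_by_gauge, sim_by_gauge, weights=None):
            """Aggregate calibration objective across several gauges."""
            scores = [nse(o, s) for o, s in zip(obs_by_gauge, sim_by_gauge)]
            weights = weights or [1.0 / len(scores)] * len(scores)
            return sum(w * s for w, s in zip(weights, scores)), scores

        obs = [[1.0, 3.2, 0.4, 0.0, 5.1], [0.2, 1.1, 0.0, 0.0, 2.3]]
        sim = [[0.8, 2.9, 0.6, 0.1, 4.5], [0.1, 1.4, 0.1, 0.0, 2.0]]
        overall, per_gauge = multi_gauge_objective(obs, sim)
        print([round(s, 2) for s in per_gauge], round(overall, 2))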

  4. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    NASA Astrophysics Data System (ADS)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression (EPR) data-modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to strike a balance between the computational load and the reliability of the three models' estimates. Indeed, although it is easy to imagine that prediction improves as model complexity grows, sometimes a "slight" worsening of the estimates can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed slightly better estimates from the polynomial model than from the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a shorter run time. Finally, the most reliable evolutionary polynomial regression model was used to examine how uncertainty grows as the extrapolation time of the estimate is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position is a good index of the weakness in producing reliable estimates when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector near Torre Colimena in the Apulia region, southern Italy.
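
    numpy's ordinary polynomial least squares is not the evolutionary search EPR performs, but it reproduces the linear-versus-polynomial trade-off the study weighs; the shoreline-position series below is synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(40, dtype=float)     # survey epochs
        shoreline = 50 + 0.8 * t - 0.015 * t**2 + rng.normal(0, 1.5, t.size)

        def fit_predict(x, y, degree):
            """Least-squares polynomial fit; degree=1 is the linear model."""
            return np.polyval(np.polyfit(x, y, degree), x)

        for degree in (1, 2):
            resid = shoreline - fit_predict(t, shoreline, degree)
            print(f"degree {degree}: RMSE = {np.sqrt(np.mean(resid**2)):.2f} m")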

  5. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) within a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiency across the assurance functions. The MBSE environment supports not only system architecture development but also Systems Safety, Reliability, and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities for supporting failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for a comprehensive viewpoint from which to support Model Based Mission Assurance (MBMA).

  6. The reliability of in-training assessment when performance improvement is taken into account.

    PubMed

    van Lohuizen, Mirjam T; Kuks, Jan B M; van Hell, Elisabeth A; Raat, A N; Stewart, Roy E; Cohen-Schotanus, Janke

    2010-12-01

    During in-training assessment, students are frequently assessed over a longer period of time, and it can therefore be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained for 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for a reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly, from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When performance improvement was taken into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying the reliability of in-training assessment, performance improvement should be considered.
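
    The "number of assessments needed" computation follows the Spearman-Brown prophecy formula. The Python sketch below inverts it for a target reliability of 0.80; the single-assessment reliabilities are back-calculated to reproduce the 17 and 11 above, since the study's variance components are not given in this record.

        from math import ceil

        def assessments_needed(single_rel, target=0.80):
            """Invert Spearman-Brown: n = target(1 - r) / (r(1 - target))."""
            return ceil(target * (1 - single_rel) / (single_rel * (1 - target)))

        # Hypothetical single-assessment reliabilities, chosen to match
        # the reported 17 (no improvement) and 11 (improvement modeled)
        print(assessments_needed(0.1905))  # -> 17
        print(assessments_needed(0.267))   # -> 11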

  7. The Importance of Human Reliability Analysis in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.

    2010-01-01

    HRA is a method used to describe, qualitatively and quantitatively, the occurrence of human failures that affect availability and reliability in the operation of complex systems. Modeling human actions and their corresponding failures in a PRA (Probabilistic Risk Assessment) provides a more complete picture of the risk and its contributors. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, equipment design, and the need for automation.

  8. IGA resistance of TT Alloy 690 and concentration behavior of Broached Egg Crate tube support configuration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, S.; Kusakabe, T.; Yamamoto, H.

    1992-12-31

    In order to improve the reliability of the steam generator (SG), TT Alloy 690 and the BEC (Broached Egg Crate) type tube support plate have been developed. Tests were carried out to further establish the reliability of these improvements, with the following results: (1) SERT (Slow Extension Rate Test) testing made clear that TT690 has lower IGA susceptibility than MA600. (2) The alkaline susceptibility to IGA/SCC of TT690 and MA600 obtained by SERT corresponds to that obtained by model boiler tests. (3) Model boiler tests showed superior concentration behavior for the BEC type tube support plate configuration in comparison with the drilled type. These results were obtained through joint research by five utilities (Kansai Epco, Hokkaido Epco, Shikoku Epco, Kyushu Epco, JAPCO) and MHI.

  9. Need for improved methods to collect and present spatial epidemiologic data for vectorborne diseases.

    PubMed

    Eisen, Lars; Eisen, Rebecca J

    2007-12-01

    Improved methods for the collection and presentation of spatial epidemiologic data are needed for vectorborne diseases in the United States. Lack of reliable data on probable pathogen exposure sites has emerged as a major obstacle to the development of predictive spatial risk models. Although plague case investigations can serve as a model for how to generate the needed information, this comprehensive approach is cost-prohibitive for more common and less severe diseases. New methods are urgently needed for determining probable pathogen exposure sites that will yield reliable results while taking into account the economic and time constraints of the public health system and attending physicians. Recent data demonstrate the need to move from the county spatial unit for presenting the incidence of vectorborne diseases to more precise ZIP code or census tract scales. Such fine-scale spatial risk patterns can be communicated to the public and the medical community through Web-mapping approaches.

  10. Electron Transport Modeling of Molecular Nanoscale Bridges Used in Energy Conversion Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunietz, Barry D

    2016-08-09

    The goal of the research program is to reliably describe electron transport and transfer processes at the molecular level. Such insight is essential for improving molecular applications of solar and thermal energy conversion. We develop electronic structure models to study (1) photoinduced electron transfer and transport processes in organic semiconducting materials, and (2) charge and heat transport through molecular bridges. We seek a fundamental understanding of key processes, which leads to the design of new experiments and ultimately to systems with improved properties.

  11. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305

  12. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. © 2016 John Wiley & Sons Ltd.

  13. Design of Oil-Lubricated Machine for Life and Reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.

    2007-01-01

    In the post-World War II era, the major technology drivers for improving the life, reliability, and performance of rolling-element bearings and gears have been the jet engine and the helicopter. By the late 1950s, most of the materials used for bearings and gears in the aerospace industry had been introduced into use. By the early 1960s, the life of most steels was increased over that experienced in the early 1940s, primarily by the introduction of vacuum degassing and vacuum melting processes in the late 1950s. The development of elastohydrodynamic (EHD) theory showed that most rolling bearings and gears have a thin film separating the contacting bodies during motion and it is that film which affects their lives. Computer programs modeling bearing and gear dynamics that incorporate probabilistic life prediction methods and EHD theory enable optimization of rotating machinery based on life and reliability. With improved manufacturing and processing, the potential improvement in bearing and gear life can be as much as 80 times that attainable in the early 1950s. The work presented summarizes the use of laboratory fatigue data for bearings and gears coupled with probabilistic life prediction and EHD theories to predict the life and reliability of a commercial turboprop gearbox. The resulting predictions are compared with field data.

  14. A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty

    PubMed Central

    Xu, Ye; Huang, Guohe; Xu, Ling

    2014-01-01

    In this study, a fuzzy robust optimization (FRO) model was developed to support municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a demonstration case. Compared with traditional fuzzy models, the FRO model improves on them by taking as its objective function the minimization of a weighted sum of the expected objective value, the difference between the two extreme possible objective values, and a penalty on constraint violation, instead of relying purely on minimization of the expected value. This improvement enhances system reliability, and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model: (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration of both. The balanced-scenario solution was recommended for decision makers, since it respects both system economy and reliability. The model proved valuable in providing a comprehensive profile of the studied system and helping decision makers gain in-depth insight into system complexity and select cost-effective management strategies. PMID:25317037

  15. A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty.

    PubMed

    Xu, Ye; Huang, Guohe; Xu, Ling

    2014-10-01

    In this study, a fuzzy robust optimization (FRO) model was developed to support municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a demonstration case. Compared with traditional fuzzy models, the FRO model improves on them by taking as its objective function the minimization of a weighted sum of the expected objective value, the difference between the two extreme possible objective values, and a penalty on constraint violation, instead of relying purely on minimization of the expected value. This improvement enhances system reliability, and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model: (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration of both. The balanced-scenario solution was recommended for decision makers, since it respects both system economy and reliability. The model proved valuable in providing a comprehensive profile of the studied system and helping decision makers gain in-depth insight into system complexity and select cost-effective management strategies.
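
    The objective function described in these two records lends itself to a compact sketch. The Python snippet below shows one plausible reading, with triangular fuzzy cost coefficients; the weights, costs and capacity are illustrative, not the paper's values.

      # Sketch of a fuzzy robust objective: minimise a weighted sum of
      # (i) the expected cost, (ii) the spread between the two extreme
      # possible costs, and (iii) a penalty on constraint violation.
      # Cost coefficients are triangular fuzzy numbers (low, mode, high).

      def fro_objective(x, cost_tfn, capacity,
                        w_expect=1.0, w_spread=0.5, w_pen=10.0):
          low, mode, high = cost_tfn
          expected = x * (low + 2 * mode + high) / 4.0   # expected cost
          spread = x * (high - low)                      # worst-vs-best gap
          violation = max(0.0, x - capacity)             # constraint breach
          return w_expect * expected + w_spread * spread + w_pen * violation

      # A 120 t/day waste flow against a 100 t/day facility capacity.
      print(fro_objective(120.0, cost_tfn=(40.0, 50.0, 65.0), capacity=100.0))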

  16. Impact of distributed power electronics on the lifetime and reliability of PV systems: Impact of distributed power electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olalla, Carlos; Maksimovic, Dragan; Deline, Chris

    Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affect the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.

  17. Impact of distributed power electronics on the lifetime and reliability of PV systems: Impact of distributed power electronics

    DOE PAGES

    Olalla, Carlos; Maksimovic, Dragan; Deline, Chris; ...

    2017-04-26

    Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affect the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.

  18. Validating the European Health Literacy Survey Questionnaire in people with type 2 diabetes: Latent trait analyses applying multidimensional Rasch modelling and confirmatory factor analysis.

    PubMed

    Finbråten, Hanne Søberg; Pettersen, Kjell Sverre; Wilde-Larsson, Bodil; Nordström, Gun; Trollvik, Anne; Guttersrud, Øystein

    2017-11-01

    To validate the European Health Literacy Survey Questionnaire (HLS-EU-Q47) in people with type 2 diabetes mellitus. The HLS-EU-Q47 latent variable is outlined in a framework with four cognitive domains integrated in three health domains, implying 12 theoretically defined subscales. Valid and reliable health literacy measurers are crucial to effectively adapt health communication and education to individuals and groups of patients. Cross-sectional study applying confirmatory latent trait analyses. Using a paper-and-pencil self-administered approach, 388 adults responded in March 2015. The data were analysed using the Rasch methodology and confirmatory factor analysis. Response violation (response dependency) and trait violation (multidimensionality) of local independence were identified. Fitting the "multidimensional random coefficients multinomial logit" model, 1-, 3- and 12-dimensional Rasch models were applied and compared. Poor model fit and differential item functioning were present in some items, and several subscales suffered from poor targeting and low reliability. Despite multidimensional data, we did not observe any unordered response categories. Interpreting the domains as distinct but related latent dimensions, the data fit a 12-dimensional Rasch model and a 12-factor confirmatory factor model best. Therefore, the analyses did not support the estimation of one overall "health literacy score." To support the plausibility of claims based on the HLS-EU score(s), we suggest: removing the health care aspect to reduce the magnitude of multidimensionality; rejecting redundant items to avoid response dependency; adding "harder" items and applying a six-point rating scale to improve subscale targeting and reliability; and revising items to improve model fit and avoid bias owing to person factors. © 2017 John Wiley & Sons Ltd.

  19. Waves at Navigation Structures

    DTIC Science & Technology

    2015-10-30

    upgrades the Coastal Modeling System (CMS) wave models CMS-Wave, a phase-averaged spectral wave model, and BOUSS-2D, a Boussinesq-type nonlinear wave...developing WaveNet and TideNet, two Web-based tool systems for wind and wave data access and processing, which provide critical data for USACE project...practical applications, resulting in optimization of navigation system to improve safety, reliability and operations with innovative infrastructures

  20. [Optimization of the parameters of microcirculatory structural adaptation model based on improved quantum-behaved particle swarm optimization algorithm].

    PubMed

    Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping

    2017-08-01

    The vessels in the microcirculation continually adjust their structure to meet the functional requirements of different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation and thereby support studies of microcirculatory physiology. Until now, however, the model has lacked appropriate methods for setting its parameter values, which has limited further applications. This study proposes an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values in this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to standard particle swarm optimization, standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to better agreement between mathematical simulation and animal experiment, rendering the model more reliable for future physiological studies.
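
    For readers unfamiliar with QPSO, the standard update rule (the baseline that the paper improves upon) is compact enough to sketch in Python; the objective below is a toy stand-in for the adaptation model's fitting error.

      import math, random

      # Minimal sketch of standard quantum-behaved PSO (QPSO); the
      # paper's improved variant modifies this baseline.

      def qpso(objective, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0):
          x = [[random.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_particles)]
          pbest = [xi[:] for xi in x]
          gbest = min(pbest, key=objective)
          for t in range(iters):
              beta = 1.0 - 0.5 * t / iters      # contraction-expansion coeff.
              mbest = [sum(p[d] for p in pbest) / n_particles
                       for d in range(dim)]
              for i in range(n_particles):
                  for d in range(dim):
                      phi = random.random()
                      u = 1.0 - random.random()          # u in (0, 1]
                      p = phi * pbest[i][d] + (1 - phi) * gbest[d]  # attractor
                      step = beta * abs(mbest[d] - x[i][d]) * math.log(1.0 / u)
                      x[i][d] = p + step if random.random() < 0.5 else p - step
                  if objective(x[i]) < objective(pbest[i]):
                      pbest[i] = x[i][:]
              gbest = min(pbest, key=objective)
          return gbest

      # Toy usage: minimise a sphere function in three dimensions.
      print(qpso(lambda v: sum(c * c for c in v), dim=3))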

  1. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    NASA Astrophysics Data System (ADS)

    William, R. K.; Stillwell, A. S.

    2015-12-01

    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We then used the 2D distributed surface water-groundwater coupled model MIKE SHE to model the impact that a real green roof might have on runoff in different storm events. We then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was then used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design code can provide insights into green roof weaknesses and areas for improvement. It also supports the design of code that is more resilient than current standards and is easily testable for failure. Finally, the understanding of reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
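
    The final failure-probability step can be illustrated with a crude Monte Carlo stand-in for FERUM's structural reliability solvers; the demand-model coefficients, error term and allowable runoff below are hypothetical placeholders for the fitted regression and the design threshold.

      import random

      # Sketch: P(failure) = P(runoff demand exceeds allowable runoff)
      # for one storm scenario, i.e. one point on a fragility curve.

      def runoff_demand(rain_depth_mm, eps):
          return 0.25 * rain_depth_mm + eps     # illustrative fitted model

      def prob_failure(rain_depth_mm, allowable_mm, n=100_000):
          fails = 0
          for _ in range(n):
              eps = random.gauss(0.0, 2.0)      # regression error term
              if runoff_demand(rain_depth_mm, eps) > allowable_mm:
                  fails += 1
          return fails / n

      # A 50 mm storm against a 15 mm allowable-runoff limit.
      print(prob_failure(50.0, 15.0))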

  2. GalaxyTBM: template-based modeling by building a reliable core and refining unreliable local regions.

    PubMed

    Ko, Junsu; Park, Hahnbeom; Seok, Chaok

    2012-08-10

    Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.

  3. Evaluating Cellular Instrumentation on Rural Handpumps to Improve Service Delivery-A Longitudinal Study in Rural Rwanda.

    PubMed

    Nagel, Corey; Beach, Jack; Iribagiza, Chantal; Thomas, Evan A

    2015-12-15

    In rural sub-Saharan Africa, where handpumps are common, 10-67% are nonfunctional at any one time, and many never get repaired. Increased reliability requires improved monitoring and responsiveness of maintenance providers. In 2014, 181 cellular enabled water pump use sensors were installed in three provinces of Rwanda. In three arms, the nominal maintenance model was compared against a "best practice" circuit rider model, and an "ambulance" service model. In only the ambulance model was the sensor data available to the implementer, and used to dispatch technicians. The study ran for seven months in 2014-2015. In the study period, the nominal maintenance group had a median time to successful repair of approximately 152 days, with a mean per-pump functionality of about 68%. In the circuit rider group, the median time to successful repair was nearly 57 days, with a per-pump functionality mean of nearly 73%. In the ambulance service group, the successful repair interval was nearly 21 days with a functionality mean of nearly 91%. An indicative cost analysis suggests that the cost per functional pump per year is approximately similar between the three models. However, the benefits of reliable water service may justify greater focus on servicing models over installation models.

  4. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    NASA Astrophysics Data System (ADS)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs with respect to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately to achieve a correct apportioning of the inputs' influence and a more reliable interpretation of the model results.
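
    As a reference point for the integrals involved, a first-order Sobol index can be estimated with the classic pick-freeze Monte Carlo scheme. The sketch below uses plain pseudo-random sampling; the paper's contribution lies in replacing such sampling with Sobol sequences and lattice rules.

      import random

      # Pick-freeze estimate of the first-order Sobol index S_idx.

      def sobol_first_order(model, dim, idx, n=50_000):
          A = [[random.random() for _ in range(dim)] for _ in range(n)]
          B = [[random.random() for _ in range(dim)] for _ in range(n)]
          yA = [model(r) for r in A]
          yB = [model(r) for r in B]
          # A with column `idx` taken from B (the pick-freeze matrix).
          yAB = [model([B[j][d] if d == idx else A[j][d] for d in range(dim)])
                 for j in range(n)]
          mean = sum(yA + yB) / (2 * n)
          var = sum((y - mean) ** 2 for y in yA + yB) / (2 * n)
          v_i = sum(yB[j] * (yAB[j] - yA[j]) for j in range(n)) / n
          return v_i / var

      # Toy usage: for f(x) = x0 + 2*x1 on [0,1]^2, S_1 is 0.8.
      print(sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2, idx=1))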

  5. Exploring the validity and reliability of a questionnaire for evaluating veterinary clinical teachers' supervisory skills during clinical rotations.

    PubMed

    Boerboom, T B B; Dolmans, D H J M; Jaarsma, A D C; Muijtjens, A M M; Van Beukelen, P; Scherpbier, A J J A

    2011-01-01

    Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources. We examined the validity and reliability of the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers during short clinical rotations in veterinary education. We examined four sources of validity evidence: (1) Content was examined based on theory of effective learning. (2) Response process was explored in a pilot study. (3) Internal structure was assessed by confirmatory factor analysis using 1086 student evaluations and reliability was examined utilizing generalizability analysis. (4) Relations with other relevant variables were examined by comparing factor scores with other outcomes. Content validity was supported by theory underlying the cognitive apprenticeship model on which the instrument is based. The pilot study resulted in an additional question about supervision time. A five-factor model showed a good fit with the data. Acceptable reliability was achievable with 10-12 questionnaires per teacher. Correlations between the factors and overall teacher judgement were strong. The MCTQ appears to be a valid and reliable instrument to evaluate clinical teachers' performance during short rotations.

  6. Physics Based Modeling in Design and Development for U.S. Defense Held in Denver, Colorado on November 14-17, 2011. Volume 2: Audio and Movie Files

    DTIC Science & Technology

    2011-11-17

    Mr. Frank Salvatore, High Performance Technologies FIXED AND ROTARY WING AIRCRAFT 13274 - "CREATE-AV DaVinci: Model-Based Engineering for Systems... Tools for Reliability Improvement and Addressing Modularity Issues in Evaluation and Physical Testing", Dr. Richard Heine, Army Materiel Systems

  7. Two Models of Raters in a Structured Oral Examination: Does It Make a Difference?

    ERIC Educational Resources Information Center

    Touchie, Claire; Humphrey-Murto, Susan; Ainslie, Martha; Myers, Kathryn; Wood, Timothy J.

    2010-01-01

    Oral examinations have become more standardized over recent years. Traditionally a small number of raters were used for this type of examination. Past studies suggested that more raters should improve reliability. We compared the results of a multi-station structured oral examination using two different rater models, those based in a station,…

  8. Assimilating a synthetic Kalman filter leaf area index series into the WOFOST model to improve regional winter wheat yield estimation

    USDA-ARS?s Scientific Manuscript database

    The scale mismatch between remotely sensed observations and the state variables simulated by crop growth models decreases the reliability of crop yield estimates. To overcome this problem, we used a two-phase data assimilation approach: first, we generated a complete leaf area index (LAI) time series by combin...
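
    Although the record is truncated, the assimilation step it describes rests on the standard Kalman update, which is easy to sketch for a scalar LAI state; the variances below are illustrative, not those of the WOFOST study.

      # Scalar Kalman update: blend a simulated LAI value with a
      # remotely sensed retrieval, weighting by their variances.

      def kalman_update(lai_model, var_model, lai_obs, var_obs):
          gain = var_model / (var_model + var_obs)          # Kalman gain
          lai_analysis = lai_model + gain * (lai_obs - lai_model)
          var_analysis = (1.0 - gain) * var_model
          return lai_analysis, var_analysis

      # Model LAI 3.2 (var 0.4) vs. satellite LAI 2.6 (var 0.2).
      print(kalman_update(3.2, 0.4, 2.6, 0.2))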

  9. Study on Distribution Reliability with Parallel and On-site Distributed Generation Considering Protection Miscoordination and Tie Line

    NASA Astrophysics Data System (ADS)

    Chaitusaney, Surachai; Yokoyama, Akihiko

    In distribution systems, Distributed Generation (DG) is expected to improve system reliability by serving as backup generation. However, DG's contribution to fault current can cause the loss of existing protection coordination, e.g. recloser-fuse coordination and breaker-breaker coordination. This problem can drastically degrade system reliability, and it becomes more serious and complicated when several DG sources are present. This conflict therefore requires detailed investigation before DG is installed or expanded. A model of composite DG fault current is proposed to find the threshold beyond which existing protection coordination is lost. Cases of protection miscoordination are described, together with their consequences. Since a distribution system may be tied to another system, the issues of tie lines and on-site DG are integrated into this study. Reliability indices are evaluated and compared on the distribution reliability test system RBTS Bus 2.

  10. Methodology to improve design of accelerated life tests in civil engineering projects.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing, an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Unlike conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional function model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. We input these energies into planning an accelerated life test, a Multi Environment Over Stress Test, whose objective is to discover weak links and interactions among the system and the energies to which it is exposed, and to design them out. As an example, the methods are applied to pipe in a subsea pipeline, but they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods.

  11. Ensemble Flow Forecasts for Risk Based Reservoir Operations of Lake Mendocino in Mendocino County, California: A Framework for Objectively Leveraging Weather and Climate Forecasts in a Decision Support Environment

    NASA Astrophysics Data System (ADS)

    Delaney, C.; Hartman, R. K.; Mendoza, J.; Whitin, B.

    2017-12-01

    Forecast informed reservoir operations (FIRO) is a methodology that incorporates short- to mid-range precipitation and flow forecasts to inform the flood operations of reservoirs. The Ensemble Forecast Operations (EFO) alternative is a probabilistic approach to FIRO that incorporates ensemble streamflow predictions (ESPs) made by NOAA's California-Nevada River Forecast Center (CNRFC). With the EFO approach, release decisions are made to manage the forecasted risk of reaching critical operational thresholds. A water management model was developed for Lake Mendocino, a 111,000 acre-foot reservoir located near Ukiah, California, to evaluate whether the EFO alternative can improve water supply reliability without increasing downstream flood risk. Lake Mendocino is a dual-use reservoir, owned and operated for flood control by the United States Army Corps of Engineers and operated for water supply by the Sonoma County Water Agency. Due to recent changes in the operations of an upstream hydroelectric facility, this reservoir has suffered from water supply reliability issues since 2007. The EFO alternative was simulated using a 26-year (1985-2010) ESP hindcast generated by the CNRFC. The ESP hindcast was developed using Global Ensemble Forecast System version 10 precipitation reforecasts processed with the Hydrologic Ensemble Forecast System to generate daily reforecasts of 61 flow ensemble members over a 15-day forecast horizon. Model simulation results demonstrate that the EFO alternative may improve water supply reliability for Lake Mendocino without increasing flood risk for downstream areas. The operations framework can directly leverage improved skill in the second week of the forecast, and it is extendable into the subseasonal-to-seasonal (S2S) time domain, provided that improved skill is demonstrated through a reliable reforecast of adequate historical duration that is consistent with operationally available numerical weather predictions.
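
    The core EFO decision rule, releasing water only when the forecasted risk of reaching a critical threshold exceeds a tolerance, can be sketched directly from the ensemble; the flood-pool level, risk tolerance and synthetic 61-member forecast below are illustrative.

      import random

      # Risk-based release decision from an ensemble streamflow forecast.

      def forecasted_risk(ensemble_peaks, flood_pool):
          """Fraction of ensemble members reaching the flood pool."""
          hits = sum(1 for peak in ensemble_peaks if peak >= flood_pool)
          return hits / len(ensemble_peaks)

      def release_decision(ensemble_peaks, flood_pool, tolerable_risk=0.10):
          risk = forecasted_risk(ensemble_peaks, flood_pool)
          return "pre-release" if risk > tolerable_risk else "hold"

      # Synthetic forecast of peak storage (thousand acre-feet, TAF)
      # against an illustrative 116 TAF flood-pool threshold.
      peaks = [random.gauss(105.0, 8.0) for _ in range(61)]
      print(release_decision(peaks, flood_pool=116.0))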

  12. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.

  13. Reliability analysis of a robotic system using hybridized technique

    NASA Astrophysics Data System (ADS)

    Kumar, Naveen; Komal; Lather, J. S.

    2017-09-01

    In this manuscript, the reliability of a robotic system is analyzed using the available data (containing vagueness, uncertainty, etc.). The uncertainties involved are quantified by fuzzifying the data using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions; the decision-maker therefore cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique in which fuzzy set theory quantifies uncertainties, a fault tree models the system, the lambda-tau method formulates mathematical expressions for the failure/repair rates of the system, and a genetic algorithm solves the resulting nonlinear programming problem. Different reliability parameters of a robotic system are computed and the results are compared with the existing technique. The components of the robotic system follow an exponential distribution, i.e., constant failure rates. Sensitivity analysis is also performed, and the impact on the system mean time between failures (MTBF) of varying other reliability parameters is addressed. Based on the analysis, some influential suggestions are given to improve the system performance.
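
    The fuzzification step can be made concrete with triangular fuzzy numbers and the lambda-tau expression for an OR gate (series logic), where fuzzy failure rates add interval-wise; the component rates and the 15% spread below are illustrative.

      # Triangular fuzzy failure rates combined through an OR gate.

      def tfn(center, spread):
          """Triangular fuzzy number with a known relative spread."""
          return (center * (1 - spread), center, center * (1 + spread))

      def or_gate_lambda(rates):
          """Fuzzy system failure rate for an OR gate: sum of fuzzy rates."""
          return tuple(sum(r[k] for r in rates) for k in range(3))

      components = [tfn(2e-4, 0.15), tfn(5e-5, 0.15), tfn(1.2e-4, 0.15)]
      lo, mid, hi = or_gate_lambda(components)
      print(f"system lambda in [{lo:.2e}, {hi:.2e}], most likely {mid:.2e}")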

  14. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability alongside factory information obtained from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that the model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.

  15. A Fresh Start for Flood Estimation in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Woods, R. A.

    2017-12-01

    The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers is active in developing or applying the derived distribution approach, and as a result slower progress is being made. A change is needed: surely we have learned enough about hydrology in the last 40 years to make a practical hydrological advance on our methods for flood estimation! A shift to new methods for flood estimation will not be taken lightly by practitioners. However, the standard for change is clear: can we develop new methods that give significant improvements in reliability over those existing methods which are demonstrably unsatisfactory?

  16. Reliability of quadriceps surface electromyography measurements is improved by two vs. single site recordings.

    PubMed

    Balshaw, T G; Fry, A; Maden-Wilkinson, T M; Kong, P W; Folland, J P

    2017-06-01

    The reliability of surface electromyography (sEMG) is typically modest even with rigorous methods, and further improvements in sEMG reliability are therefore desirable. This study compared the between-session reliability (both within-participant absolute reliability and between-participant relative reliability) of sEMG amplitude from a single vs. the average of two distinct recording sites, for individual muscle (IM) and whole quadriceps (WQ) measures during voluntary and evoked contractions. Healthy males (n = 20) performed unilateral isometric knee extension contractions, voluntary maximum and submaximum (60%) as well as evoked twitch contractions, on two separate days. sEMG was recorded from two distinct sites on each superficial quadriceps muscle. Averaging two recording sites vs. using single-site measures improved reliability for IM and WQ measurements during voluntary (16-26% reduction in the within-participant coefficient of variation, CVw) and evoked contractions (40-56% reduction in CVw). For sEMG measurements from large muscles, averaging the recordings of two distinct sites is recommended as it improves within-participant reliability. This improved sensitivity has application to clinical and research measurement of sEMG amplitude.
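
    The reliability statistic at stake, the within-participant coefficient of variation across the two sessions, is simple to compute; the amplitude values below are invented to show how averaging two sites can shrink CVw.

      import statistics

      # Root-mean-square within-participant CV (%) across two sessions.

      def cv_within(day1, day2):
          cvs = []
          for a, b in zip(day1, day2):
              mean = (a + b) / 2
              sd = statistics.stdev([a, b])
              cvs.append((sd / mean) ** 2)
          return 100 * (sum(cvs) / len(cvs)) ** 0.5

      # Invented amplitudes (mV) for one site and for a two-site average.
      site1_d1, site1_d2 = [0.41, 0.35, 0.52], [0.35, 0.40, 0.45]
      site2_d1, site2_d2 = [0.44, 0.33, 0.50], [0.38, 0.37, 0.47]
      avg_d1 = [(a + b) / 2 for a, b in zip(site1_d1, site2_d1)]
      avg_d2 = [(a + b) / 2 for a, b in zip(site1_d2, site2_d2)]
      print(cv_within(site1_d1, site1_d2), cv_within(avg_d1, avg_d2))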

  17. Robust hard-solder packaging of conduction cooled laser diode bars

    NASA Astrophysics Data System (ADS)

    Schleuning, David; Griffin, Mike; James, Phillip; McNulty, John; Mendoza, Dan; Morales, John; Nabors, David; Peters, Mike; Zhou, Hailong; Reed, Murray

    2007-02-01

    We present the reliability of high-power laser diodes utilizing hard solder (AuSn) on a conduction-cooled package (HCCP). We present results of 50 W hard-pulse operation at 8xx nm and demonstrate a reliability of MTTF > 27 khrs (90% CL), which is an order of magnitude improvement over traditional packaging. We also present results at 9xx nm with a reliability of MTTF >17 khrs (90% CL) at 75 W. We discuss finite element analysis (FEA) modeling and time dependent temperature measurements combined with experimental life-test data to quantify true hard-pulse operation. We also discuss FEA and measured stress profiles across laser bars comparing soft and hard solder packaging.

  18. Improving of local ozone forecasting by integrated models.

    PubMed

    Gradišar, Dejan; Grašič, Boštjan; Božnar, Marija Zlata; Mlakar, Primož; Kocijan, Juš

    2016-09-01

    This paper discusses the problem of forecasting maximum ozone concentrations at urban microlocations, where reliable alerting of the local population when thresholds are surpassed is necessary. To improve the forecast, a methodology of integrated models is proposed. The model is based on multilayer perceptron neural networks that use as inputs all available information from the QualeAria air-quality model, the WRF numerical weather prediction model, and on-site measurements of meteorology and air pollution. While air-quality and meteorological models cover a large 3-dimensional geographical space, their local resolution is often not satisfactory. On the other hand, empirical methods have the advantage of good local forecasts. In this paper, integrated models are used for improved 1-day-ahead forecasting of the maximum hourly ozone value within each day for representative locations in Slovenia. The WRF meteorological model is used for forecasting meteorological variables and the QualeAria air-quality model for gas concentrations. Their predictions, together with measurements from ground stations, are used as inputs to a neural network. The model validation results show that integrated models noticeably improve ozone forecasts and provide better alert systems.

  19. A numerical insight into elastomer normally closed micro valve actuation with cohesive interfacial cracking modelling

    NASA Astrophysics Data System (ADS)

    Wang, Dongyang; Ba, Dechun; Hao, Ming; Duan, Qihui; Liu, Kun; Mei, Qi

    2018-05-01

    Pneumatic normally closed (NC) valves are widely used in high-density microfluidic systems. To improve actuation reliability, the actuation pressure needs to be reduced. In this work, we use 3D finite element method (FEM) modelling to gain numerical insight into the valve actuation process. Specifically, the progressive debonding process at the elastomer interface is simulated with the cohesive zone model (CZM) method. To minimize the actuation pressure, a V-shape design has been investigated and compared with a normal straight design, and the geometrical effects of valve shape on actuation pressure have been elaborated. Based on our simulation results, we formulate the main concerns for micro valve design and fabrication, which is significant for minimizing actuation pressures and ensuring reliable operation.

  20. Testing of the SEE and OEE post-hip fracture.

    PubMed

    Resnick, Barbara; Orwig, Denise; Zimmerman, Sheryl; Hawkes, William; Golden, Justine; Werner-Bronzert, Michelle; Magaziner, Jay

    2006-08-01

    The purpose of this study was to test the reliability and validity of the Self-Efficacy for Exercise (SEE) and the Outcome Expectations for Exercise (OEE) scales in a sample of 166 older women post-hip fracture. There was some evidence of validity of the SEE and OEE based on confirmatory factor analysis and Rasch model testing, criterion-based and convergent validity, and evidence of internal consistency based on alpha coefficients and separation indices, and of reliability based on R2 estimates. Rasch model testing demonstrated that some items had high variability. Based on these findings, suggestions are made for how items could be revised and the scales improved for future use.

  1. A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.

    PubMed

    Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing

    2018-04-10

    To improve the accuracy and reliability of micropipetting, a detection and calibration method was proposed that combines dynamic pressure monitoring during the pipetting process with image-based quantification of the aspirated volume. First, a normalized pressure model for the pipetting process was established from a kinematic model of the pipetting operation and corrected experimentally. Real-time monitoring of the pipetting pressure and its first derivative, together with a segmented double-threshold criterion for fault evaluation and Kalman filtering of the pressure sensor data, improves the accuracy of fault diagnosis. When a fault occurs, a camera captures an image of the pipette tip, the boundary of the liquid region is extracted by background contrast, and the liquid volume in the tip is obtained from the geometric characteristics of the tip. The volume deviation is fed back to the automatic pipetting module for correction. Titration tests show that the segmented pipetting kinematic model combined with double-threshold pressure monitoring can effectively detect and classify pipetting faults in real time, and that closed-loop adjustment of the pipetting volume effectively improves the accuracy and reliability of the pipetting system.
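
    The signal chain described, Kalman filtering of the pressure trace followed by a double-threshold test on the filtered pressure and its first derivative, can be sketched as follows; the noise settings, pressure band and derivative limit are illustrative.

      # Scalar Kalman filter plus double-threshold fault flagging.

      def kalman_1d(zs, q=1e-4, r=4e-2):
          x, p, out = zs[0], 1.0, []
          for z in zs:
              p += q                          # predict
              k = p / (p + r)                 # Kalman gain
              x += k * (z - x)                # update
              p *= (1 - k)
              out.append(x)
          return out

      def fault_flags(pressure, p_band=(9.0, 11.0), dp_limit=0.8):
          smoothed = kalman_1d(pressure)
          flags = []
          for i in range(1, len(smoothed)):
              dp = smoothed[i] - smoothed[i - 1]
              in_band = p_band[0] <= smoothed[i] <= p_band[1]
              flags.append((not in_band) or abs(dp) > dp_limit)
          return flags

      # A pressure trace (kPa) with a mid-run drop that should be flagged.
      print(fault_flags([10.1, 10.0, 9.9, 8.2, 7.5, 10.2]))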

  2. SPIPS: Spectro-Photo-Interferometry of Pulsating Stars

    NASA Astrophysics Data System (ADS)

    Mérand, Antoine

    2017-10-01

    SPIPS (Spectro-Photo-Interferometry of Pulsating Stars) combines radial velocimetry, interferometry, and photometry to estimate the physical parameters of pulsating stars, including the presence of infrared excess, color excess, Teff, and the distance/p-factor ratio. This global model-based parallax-of-pulsation method is implemented in Python. The derived parameters have a high level of confidence: statistical precision is improved (compared with other methods) by the large amount of data taken into account, accuracy is improved by consistent physical modeling, and the reliability of the derived parameters is strengthened by redundancy in the data.

  3. IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Adam H.; Margot, Jean-Luc

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  4. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe, Jeffrey C.; Boring, Ronald L.

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are (1) distinct from individual human errors, and (2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research with the objective of modeling and quantifying team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA and, in turn, the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  5. Multiscale decoding for reliable brain-machine interface performance over time.

    PubMed

    Hsieh, Han-Lin; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M

    2017-07-01

    Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
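
    A heavily simplified multirate sketch conveys the two time-scales involved; real multiscale decoders model spikes as point processes, whereas the stand-in below treats both modalities as Gaussian observations of one latent kinematic state, with all settings illustrative.

      # One latent state, fast spike-derived updates every step and a
      # cleaner LFP-derived update folded in every `lfp_every` steps.

      def multirate_filter(spike_obs, lfp_obs, lfp_every=50,
                           q=1e-3, r_spk=0.5, r_lfp=0.1):
          x, p, xs = 0.0, 1.0, []
          for t, z in enumerate(spike_obs):
              p += q
              k = p / (p + r_spk)             # fast, noisy spike update
              x, p = x + k * (z - x), (1 - k) * p
              if t % lfp_every == 0 and t // lfp_every < len(lfp_obs):
                  zl = lfp_obs[t // lfp_every]
                  k = p / (p + r_lfp)         # slower, cleaner LFP update
                  x, p = x + k * (zl - x), (1 - k) * p
              xs.append(x)
          return xs

      # Track a slow ramp observed through both modalities.
      spk = [0.01 * t for t in range(200)]
      lfp = [0.01 * t for t in range(0, 200, 50)]
      print(multirate_filter(spk, lfp)[-1])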

  6. Safety and reliability analysis in a polyvinyl chloride batch process using dynamic simulator-case study: Loss of containment incident.

    PubMed

    Rizal, Datu; Tani, Shinichi; Nishiyama, Kimitoshi; Suzuki, Kazuhiko

    2006-10-11

    In this paper, a novel methodology for batch plant safety and reliability analysis using a dynamic simulator is proposed. A batch process involves several safety objects (e.g. sensors, controllers, valves) that are activated during the operational stage. The performance of the safety objects is evaluated by dynamic simulation and a fault propagation model is generated. Using the fault propagation model, an improved fault tree analysis (FTA) method based on switching signal mode (SSM) is developed for estimating the probability of failures. Time-dependent failures can be treated as unavailability of safety objects, which can cause accidents in a plant. Finally, the ranking of safety objects is formulated as a performance index (PI) that can be estimated using importance measures. The PI gives the prioritization of safety objects that should be investigated in a safety improvement program for the plant, and the output of this method can be used for optimal policy in safety object improvement and maintenance. The dynamic simulator was constructed using Visual Modeler (VM, the plant simulator developed by Omega Simulation Corp., Japan). A case study focuses on a loss of containment (LOC) incident in a polyvinyl chloride (PVC) batch process, which consumes the hazardous material vinyl chloride monomer (VCM).
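
    The quantification step of any such FTA reduces to propagating basic-event probabilities through AND/OR gates; the miniature tree and unavailabilities below are invented, not the paper's PVC-process model.

      # Top-event probability from independent basic events.

      def and_gate(probs):
          out = 1.0
          for p in probs:
              out *= p
          return out

      def or_gate(probs):
          out = 1.0
          for p in probs:
              out *= (1.0 - p)
          return 1.0 - out

      sensor_fail, controller_fail, valve_stuck = 1e-3, 5e-4, 2e-3
      safeguard_fails = or_gate([sensor_fail, controller_fail, valve_stuck])
      top_event = and_gate([0.01, safeguard_fails])  # demand AND safeguard loss
      print(f"P(loss of containment) ~ {top_event:.2e}")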

  7. High-Frequency Sound Interaction with Ocean Sediments and with Objects in the Vicinity of the Water/Sediment Interface and Mid-Frequency Shallow Water Propagation and Scattering

    DTIC Science & Technology

    2007-09-30

    combined with measured sediment properties, to test the validity of sediment acoustic models, and in particular the poroelastic (Biot) model. Addressing...TERM GOALS 1. Development of accurate models for acoustic scattering from, penetration into, and propagation within shallow water ocean sediments...2. Development of reliable methods for modeling acoustic detection of buried objects at subcritical grazing angles. 3. Improving our

  8. Research on Storm-Tide Disaster Losses in China Using a New Grey Relational Analysis Model with the Dispersion of Panel Data.

    PubMed

    Yin, Kedong; Zhang, Ya; Li, Xuemei

    2017-11-01

    Because current panel grey relational models are sensitive to the ordering of sequences and to the surface structure of the data, their results are not unique. In addition, individual measurement of indicators and objects and the subjectivity of combined weights significantly weaken the effective information in panel data and reduce the reliability and accuracy of the results. We therefore propose the concept and calculation method of the dispersion of panel data, establish a grey relational model based on this dispersion (DPGRA), and prove that DPGRA satisfies the properties of uniqueness, symmetry, and normality. To demonstrate its applicability, the proposed DPGRA model is applied to storm-tide disaster losses in China's coastal areas. Comparing the results of three models (DPGRA, the Euclidean-distance grey relational model, and the grey grid relational model) shows that DPGRA is more effective, feasible, and stable: it makes full use of the effective information in panel data, resolves the non-uniqueness of grey relational results, and improves the reliability and accuracy of the analysis. The results are of great significance for coastal areas in monitoring storm-tide hazards, strengthening protection measures against natural disasters, and improving disaster prevention and reduction capabilities.
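
    The DPGRA dispersion weighting itself is specific to the paper, but the grey relational core it builds on is standard and easy to sketch; the series below are illustrative normalised loss data.

      # Classic grey relational grade of a series against a reference.

      def grey_relational_grade(reference, series, rho=0.5):
          deltas = [abs(r - s) for r, s in zip(reference, series)]
          d_min, d_max = min(deltas), max(deltas)
          coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
          return sum(coeffs) / len(coeffs)

      # Normalised storm-tide loss series vs. an ideal reference series.
      print(grey_relational_grade([1.0, 1.0, 1.0], [0.8, 0.9, 0.7]))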

  9. Modified Y-TZP Core Design Improves All-ceramic Crown Reliability

    PubMed Central

    Silva, N.R.F.A.; Bonfante, E.A.; Rafferty, B.T.; Zavanelli, R.A.; Rekow, E.D.; Thompson, V.P.; Coelho, P.G.

    2011-01-01

    This study tested the hypothesis that all-ceramic core-veneer system crown reliability is improved by modification of the core design. We modeled a tooth preparation by reducing the height of proximal walls by 1.5 mm and the occlusal surface by 2.0 mm. The CAD-based tooth preparation was replicated and positioned in a dental articulator for core and veneer fabrication. Standard (0.5 mm uniform thickness) and modified (2.5 mm height lingual and proximal cervical areas) core designs were produced, followed by the application of veneer porcelain for a total thickness of 1.5 mm. The crowns were cemented to 30-day-aged composite dies and were either single-load-to-failure or step-stress-accelerated fatigue-tested. Use of level probability plots showed significantly higher reliability for the modified core design group. The fatigue fracture modes were veneer chipping not exposing the core for the standard group, and exposing the veneer core interface for the modified group. PMID:21057036

  10. Analyzing the effect of transmissivity uncertainty on the reliability of a model of the northwestern Sahara aquifer system

    NASA Astrophysics Data System (ADS)

    Zammouri, Mounira; Ribeiro, Luis

    2017-05-01

    A groundwater flow model of the transboundary Saharan aquifer system was developed in 2003 and is used for management and decision-making by Algeria, Tunisia and Libya. In decision-making processes, reliability plays a decisive role. This paper looks into the reliability assessment of the Saharan aquifers model and aims to detect the shortcomings of a model considered properly calibrated. After presenting the calibration results of the 2003 modelling effort, the uncertainty in the model arising from the scarcity of groundwater-level and transmissivity data is analyzed using kriging and a stochastic approach. A structural analysis of the steady-state piezometry and of the logarithms of transmissivity was carried out for the Continental Intercalaire (CI) and the Complexe Terminal (CT) aquifers. The available data (piezometry and transmissivity) were compared to the calculated values using a geostatistical approach. Using a stochastic approach, 2500 realizations of a log-normal random transmissivity field of the CI aquifer were generated to assess the errors in the model output due to the uncertainty in transmissivity. Two types of calibration shortcoming are identified. In some regions, calibration should be improved using the available data. In other areas, model refinement requires gathering new data to enhance knowledge of the aquifer system. The stochastic simulation results showed that the calculated drawdowns in 2050 could be higher than the values predicted by the calibrated model.
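
    A minimal sketch of the stochastic step described above, assuming a log-normal transmissivity and a simple steady-state (Thiem) drawdown relation as a stand-in for the full groundwater model; every parameter value here is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical log-transmissivity statistics (e.g., from kriging of
    # field data): mean and standard deviation of ln T, T in m^2/s.
    mu_lnT, sigma_lnT = np.log(5e-3), 0.8
    n_real = 2500                      # number of realizations, as in the study

    T = rng.lognormal(mean=mu_lnT, sigma=sigma_lnT, size=n_real)

    # Propagate each realization through a steady-state drawdown relation
    # (Thiem): s = Q / (2 pi T) * ln(R / r). Values are illustrative.
    Q, R, r = 0.05, 1000.0, 10.0       # pumping rate (m^3/s), radii (m)
    s = Q / (2.0 * np.pi * T) * np.log(R / r)

    print(f"median drawdown : {np.median(s):6.2f} m")
    print(f"95th percentile : {np.percentile(s, 95):6.2f} m")
    # A heavy upper tail warns that drawdowns may exceed the values
    # predicted by a single calibrated-T model run.
    ```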

  11. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.

  12. Reliability Assessment Approach for Stirling Convertors and Generators

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy

    2004-01-01

    Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion of some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties are discussed. The requirement for reliability also is discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency were maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.

  13. Sustained reductions in time to antibiotic delivery in febrile immunocompromised children: results of a quality improvement collaborative.

    PubMed

    Dandoy, Christopher E; Hariharan, Selena; Weiss, Brian; Demmel, Kathy; Timm, Nathan; Chiarenzelli, Janis; Dewald, Mary Katherine; Kennebeck, Stephanie; Langworthy, Shawna; Pomales, Jennifer; Rineair, Sylvia; Sandfoss, Erin; Volz-Noe, Pamela; Nagarajan, Rajaram; Alessandrini, Evaline

    2016-02-01

    Timely delivery of antibiotics to febrile immunocompromised (F&I) paediatric patients in the emergency department (ED) and outpatient clinic reduces morbidity and mortality. The aim of this quality improvement initiative was to increase the percentage of F&I patients who received antibiotics within the goal time in the clinic and ED from 25% to 90%. Using the Model for Improvement, we performed Plan-Do-Study-Act cycles to design, test and implement high-reliability interventions to decrease time to antibiotics. Pre-arrival interventions were tested and implemented, followed by post-arrival interventions in the ED. Many processes were spread successfully to the outpatient clinic. The Chronic Care Model was used, in addition to active family engagement, to inform and improve processes. The study period was from January 2010 to January 2015. Pre-arrival planning improved our F&I time to antibiotics in the ED from 137 to 88 min. This was sustained until October 2012, when further interventions including a pre-arrival huddle decreased the median time to <50 min. Implementation of the various processes in the clinic delivery system increased the mean percentage of patients receiving antibiotics within 60 min to >90%. In September 2014, we implemented a rapid response team to improve reliable venous access in the ED, which increased our mean percentage of patients receiving timely antibiotics to its highest rate (95%). This stepwise approach with pre-arrival planning using the Chronic Care Model, followed by standardisation of processes, created a sustainable improvement in timely antibiotic delivery to F&I patients.

  14. Assessing the Reliability of Material Flow Analysis Results: The Cases of Rhenium, Gallium, and Germanium in the United States Economy.

    PubMed

    Meylan, Grégoire; Reck, Barbara K; Rechberger, Helmut; Graedel, Thomas E; Schwab, Oliver

    2017-10-17

    Decision-makers traditionally expect "hard facts" from scientific inquiry, an expectation that the results of material flow analyses (MFAs) can hardly meet. MFA limitations are attributable to incompleteness of flowcharts, limited data quality, and model assumptions. Moreover, MFA results are, for the most part, based less on empirical observation than on social knowledge construction processes. Developing, applying, and improving the means of evaluating and communicating the reliability of MFA results is imperative. We apply two recently proposed approaches for making quantitative statements on MFA reliability to national minor metals systems: rhenium, gallium, and germanium in the United States in 2012. We discuss the reliability of results in policy and management contexts. The first approach consists of assessing data quality based on systematic characterization of MFA data and the associated meta-information and quantifying the "information content" of MFAs. The second is a quantification of data inconsistencies indicated by the "degree of data reconciliation" between the data and the model. A high information content and a low degree of reconciliation indicate reliable or certain MFA results. This article contributes to reliability and uncertainty discourses in MFA, exemplifying the usefulness of the approaches in policy and management, and to raw material supply discussions by providing country-level information on three important minor metals often considered critical.

  15. An experimental evaluation of software redundancy as a strategy for improving reliability

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.

    1990-01-01

    The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although it is generally accepted that the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
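
    The contrast between the dependent and independent estimates can be made concrete with a small Monte Carlo sketch in the spirit of intensity-based models of coincident failures (not the paper's exact estimator): if the probability theta(x) that a version fails varies across inputs, the chance that all N versions fail together is E[theta^N], which Jensen's inequality makes at least (E[theta])^N. The Beta distribution below is purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "Intensity" theta(x): probability that a randomly chosen version
    # fails on input x. Variation of theta across inputs models
    # correlated faults: hard inputs are hard for every version.
    theta = rng.beta(0.2, 40.0, size=200_000)   # hypothetical distribution

    N = 3                                       # N-version system
    p_dependent = np.mean(theta**N)             # E[theta^N]
    p_independent = np.mean(theta)**N           # (E[theta])^N

    print(f"P(all {N} fail), dependent model : {p_dependent:.3e}")
    print(f"P(all {N} fail), independence    : {p_independent:.3e}")
    # By Jensen's inequality, E[theta^N] >= (E[theta])^N, so assuming
    # independence can grossly overstate the reliability gain.
    ```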

  16. Real-Time GNSS-Based Attitude Determination in the Measurement Domain.

    PubMed

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-02-05

    A multi-antenna-based GNSS receiver is capable of providing a high-precision and drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve accuracy by at least 0.03° and enhance continuity by up to 18%. The kinematic results showed that the proposed method can obtain an optimal balance between accuracy and reliability performance.

  17. A Research Roadmap for Computation-Based Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  18. Water balance models in one-month-ahead streamflow forecasting

    USGS Publications Warehouse

    Alley, William M.

    1985-01-01

    Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
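
    The paper's water balance models are not reproduced here, but the autoregressive benchmark with a monthly bias correction can be sketched as follows; the lag-1 fit on log flows and the flow values are illustrative assumptions, and the bias terms would be estimated from hindcasts.

    ```python
    import numpy as np

    def ar1_forecast(flows, bias_by_month=None, month_next=None):
        """One-month-ahead forecast from a lag-1 autoregression fitted to
        the log flow series; optionally apply an additive monthly bias
        correction. `flows` is a 1-D array of monthly flows."""
        x = np.log(np.asarray(flows, dtype=float))
        x0, x1 = x[:-1], x[1:]
        # Least-squares fit of x_{t+1} = a + b * x_t.
        b, a = np.polyfit(x0, x1, 1)
        f = a + b * x[-1]
        if bias_by_month is not None and month_next is not None:
            f += bias_by_month[month_next]   # correction estimated offline
        return float(np.exp(f))

    # Two years of hypothetical monthly flows (arbitrary units).
    flows = [120, 95, 80, 60, 55, 70, 110, 150, 180, 160, 140, 130,
             118, 90, 82, 65, 52, 75, 105, 155, 175, 158, 138, 128]
    print(ar1_forecast(flows))
    ```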

  19. 2017 NREL Photovoltaic Reliability Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah

    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  20. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will stop if the system and network reliability is unstable. The ZigBee standard defines three kinds of networks: star, tree and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star and tree networks, the reliability problem can be solved with a series system and the reliability block diagram (RBD) technique. For mesh networks, whose complexity is higher, a division technique is applied instead: a mesh network is decomposed into several non-reducible series systems and edge-parallel systems, so its reliability can be computed as a series-parallel system under the proposed scheme. The numerical results demonstrate that mesh network reliability increases as the number of edges in the parallel systems grows, while reliability in all three network types drops quickly as the numbers of edges and nodes increase. Greater resource usage, network complexity and complex object relationships are further factors that lower network reliability.
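
    A minimal sketch of the series/parallel reliability block diagram arithmetic the record relies on, with hypothetical per-layer reliabilities; decomposing a real mesh into non-reducible series and edge-parallel systems is more involved than this toy combination.

    ```python
    import numpy as np

    def series(reliabilities):
        """Series system: all blocks must work."""
        return float(np.prod(reliabilities))

    def parallel(reliabilities):
        """Parallel (redundant) system: at least one block must work."""
        r = np.asarray(reliabilities, dtype=float)
        return float(1.0 - np.prod(1.0 - r))

    # Hypothetical per-layer reliabilities for one ZigBee device
    # (physical, MAC, network, application), combined in series:
    device = series([0.999, 0.995, 0.990, 0.985])

    # A mesh segment with three redundant edges, each carried by such a
    # device chain, then placed in series with a coordinator:
    segment = series([parallel([device] * 3), 0.999])
    print(f"device {device:.4f}  segment {segment:.4f}")
    # Adding parallel edges raises reliability; lengthening the series
    # chain (more nodes/layers) lowers it -- matching the paper's trends.
    ```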

  1. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
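
    The Schaake Shuffle step can be illustrated compactly: each day's calibrated ensemble is reordered so that its ranks match those of a set of historical observations, which restores realistic correlation across lead times (and sites). The sketch below assumes one historical value per ensemble member; values are illustrative.

    ```python
    import numpy as np

    def schaake_shuffle(ensemble, historical):
        """Reorder one day's calibrated ensemble so its rank structure
        matches a set of historical observations for that day (one
        historical value per member). Applying this day by day with the
        same historical years links members across lead times."""
        ensemble = np.asarray(ensemble, dtype=float)
        historical = np.asarray(historical, dtype=float)
        ranks = np.argsort(np.argsort(historical))  # rank of each value
        return np.sort(ensemble)[ranks]

    # Toy example: 5 ensemble members, 5 historical analogue years.
    ens = np.array([3.1, 0.0, 7.4, 1.2, 0.4])       # mm/day, illustrative
    hist = np.array([0.0, 2.0, 9.0, 0.5, 4.0])
    print(schaake_shuffle(ens, hist))
    # The largest shuffled member lands where the historical record was
    # largest, so day-to-day trajectories become physically plausible.
    ```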

  2. Learned helplessness in the rat: improvements in validity and reliability.

    PubMed

    Vollmayr, B; Henn, F A

    2001-08-01

    Major depression has a high prevalence and a high mortality. Despite many years of research, little is known about the pathophysiologic events leading to depression or about the causative molecular mechanisms of antidepressant treatment leading to remission and prevention of relapse. Animal models of depression are urgently needed to investigate new hypotheses. The learned helplessness paradigm initially described by Overmier and Seligman [J. Comp. Physiol. Psychol. 63 (1967) 28] is the most widely studied animal model of depression. Animals are exposed to inescapable shock and subsequently tested for a deficit in acquiring an avoidance task. Despite its excellent validity concerning the construct of etiology, symptomatology and prediction of treatment response [Clin. Neurosci. 1 (1993) 152; Trends Pharmacol. Sci. 12 (1991) 131], there has been little use of the model for the investigation of recent theories on the pathogenesis of depression. This may be due to reported difficulties in the reliability of the paradigm [Animal Learn. Behav. 4 (1976) 401; Pharmacol. Biochem. Behav. 36 (1990) 739]. The aim of the current study was therefore to improve parameters for inescapable shock and learned helplessness testing to minimize artifacts and random error and yield a reliable fraction of helpless animals after shock exposure. The protocol uses mild current, which induces helplessness in only some of the animals, thereby modeling the hypothesis of variable predisposition for depression in different subjects [Psychopharmacol. Bull. 21 (1985) 443; Neurosci. Res. 38 (2000) 193]. This allows us to use animals which are not helpless after inescapable shock as a stressed control, but the sensitivity, specificity and variability of test results have to be reassessed.

  3. EM calibration based on Post OPC layout analysis

    NASA Astrophysics Data System (ADS)

    Sreedhar, Aswin; Kundu, Sandip

    2010-03-01

    Design for Manufacturability (DFM) involves changes to the design and CAD tools to help increase pattern printability and improve process control. Design for Reliability (DFR) performs the same to improve the reliability of devices against failures such as electromigration (EM), gate-oxide breakdown, hot carrier injection (HCI), Negative Bias Temperature Instability (NBTI) and mechanical stress effects. Electromigration occurs due to migration or displacement of atoms as a result of the movement of electrons through a conducting medium. The rate of migration determines the Mean Time to Failure (MTTF), which is modeled as a function of temperature and current density. The model itself is calibrated through failure analysis (FA) of parts deemed to have failed due to EM against design parameters such as linewidth. Reliability Verification (RV) of a design involves verifying that every conducting line in a design meets a certain MTTF threshold. In order to perform RV, the current density for each wire must be computed. Current itself is a function of the parasitics that are determined through RC extraction. The standard practice is to perform the RC extraction and current density calculation on drawn, pre-OPC layouts. If a wire fails to meet the MTTF threshold, it may be resized. Subsequently, mask preparation steps such as OPC and PSM introduce extra features such as SRAFs, jogs, hammerheads and serifs that change the resistance, capacitance and current density values. Hence, calibrating the EM model based on pre-OPC layouts will lead to different results compared to post-OPC layouts. In this work, we compare EM model calibration and reliability checks based on the drawn layout versus the predicted layout, where the drawn layout is the pre-OPC layout and the predicted layout is based on litho simulation of the post-OPC layout. Results show significant divergence between these two approaches, making a case for a methodology based on the predicted layout.
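
    The temperature/current-density MTTF model referred to above is commonly written as Black's equation. The sketch below uses it to show why post-OPC geometry matters: a narrower printed line raises the current density and lowers the predicted MTTF. The prefactor, current-density exponent, activation energy and geometries are all illustrative, not calibrated values.

    ```python
    import math

    def black_mttf(j, temp_k, a=1e3, n=2.0, ea_ev=0.9):
        """Black's equation: MTTF = A * J**-n * exp(Ea / (k*T)).
        j in A/cm^2, temp_k in kelvin; A, n and Ea are calibration
        constants (the values here are purely illustrative)."""
        k_ev = 8.617e-5                      # Boltzmann constant, eV/K
        return a * j**(-n) * math.exp(ea_ev / (k_ev * temp_k))

    # Same current, two geometries: OPC features (serifs, hammerheads)
    # change the printed cross-section, hence the current density.
    i_amp, t = 1e-3, 378.0                   # 1 mA at 105 C
    j_drawn = i_amp / (0.10e-4 * 0.20e-4)    # drawn width x thickness, cm
    j_postopc = i_amp / (0.08e-4 * 0.20e-4)  # narrower printed line
    ratio = black_mttf(j_drawn, t) / black_mttf(j_postopc, t)
    print(f"drawn-layout MTTF is {ratio:.2f}x the post-OPC estimate")
    # With n = 2 and a 20% narrower line, the drawn-layout calibration
    # is ~1.56x optimistic -- the divergence the paper warns about.
    ```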

  4. Assessing the efficacy of the Measure of Understanding of Macroevolution as a valid tool for undergraduate non-science majors

    NASA Astrophysics Data System (ADS)

    Romine, William Lee; Walter, Emily Marie

    2014-11-01

    Efficacy of the Measure of Understanding of Macroevolution (MUM) as a measurement tool has been a point of contention among scholars needing a valid measure for knowledge of macroevolution. We explored the structure and construct validity of the MUM using Rasch methodologies in the context of a general education biology course designed with an emphasis on macroevolution content. The Rasch model was utilized to quantify item- and test-level characteristics, including dimensionality, reliability, and fit with the Rasch model. Contrary to previous work, we found that the MUM provides a valid, reliable, and unidimensional scale for measuring knowledge of macroevolution in introductory non-science majors, and that its psychometric behavior does not exhibit large changes across time. While we found that all items provide productive measurement information, several depart substantially from ideal behavior, warranting a collective effort to improve these items. Suggestions for improving the measurement characteristics of the MUM at the item and test levels are put forward and discussed.
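
    For readers unfamiliar with the measurement model, the dichotomous Rasch model places persons and items on one logit scale; a sketch of the response probability and the resulting expected test score follows, with hypothetical item difficulties rather than the MUM's actual calibration.

    ```python
    import math

    def rasch_p(theta, b):
        """Dichotomous Rasch model: probability of a correct response for
        a person of ability theta on an item of difficulty b (logits)."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Illustrative item difficulties for a macroevolution instrument
    # (hypothetical logit values, not the MUM's fitted parameters).
    difficulties = [-1.2, -0.4, 0.3, 1.1, 2.0]
    for theta in (-1.0, 0.0, 1.0):
        expected = sum(rasch_p(theta, b) for b in difficulties)
        print(f"theta={theta:+.1f}  expected score = {expected:.2f} / 5")
    ```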

  5. Need for Improved Methods to Collect and Present Spatial Epidemiologic Data for Vectorborne Diseases

    PubMed Central

    Eisen, Rebecca J.

    2007-01-01

    Improved methods for collection and presentation of spatial epidemiologic data are needed for vectorborne diseases in the United States. Lack of reliable data for probable pathogen exposure site has emerged as a major obstacle to the development of predictive spatial risk models. Although plague case investigations can serve as a model for how to ideally generate needed information, this comprehensive approach is cost-prohibitive for more common and less severe diseases. New methods are urgently needed to determine probable pathogen exposure sites that will yield reliable results while taking into account economic and time constraints of the public health system and attending physicians. Recent data demonstrate the need for a change from use of the county spatial unit for presentation of incidence of vectorborne diseases to more precise ZIP code or census tract scales. Such fine-scale spatial risk patterns can be communicated to the public and medical community through Web-mapping approaches. PMID:18258029

  6. Photovoltaic Module Reliability Workshop 2011: February 16-17, 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2013-11-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  7. Photovoltaic Module Reliability Workshop 2014: February 25-26, 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2014-02-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  8. Photovoltaic Module Reliability Workshop 2013: February 26-27, 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2013-10-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  9. Photovoltaic Module Reliability Workshop 2010: February 18-19, 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, J.

    2013-11-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  10. 2016 NREL Photovoltaic Module Reliability Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology - both critical goals for moving PV technologies deeper into the electricity marketplace.

  11. 2015 NREL Photovoltaic Module Reliability Workshops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  12. A Capacity Forecast Model for Volatile Data in Maintenance Logistics

    NASA Astrophysics Data System (ADS)

    Berkholz, Daniel

    2009-05-01

    Maintenance, repair and overhaul processes (MRO processes) are elaborate and complex. Rising demands on these after-sales services require reliable production planning and control methods, particularly for maintaining valuable capital goods. Downtimes lead to high costs, and an inability to meet delivery due dates results in severe contract penalties. Predicting the required capacities for maintenance orders in advance is often difficult because part conditions remain unknown until the goods are actually inspected. This planning uncertainty results in extensive capital tie-up through rising stock levels within the whole MRO network. The article outlines an approach to planning capacities when maintenance data forecasting is volatile. It focuses on the development of prerequisites for a reliable capacity planning model. This enables a quick response to maintenance orders by employing appropriate measures. The information gained through the model is then systematically applied to forecast both personnel capacities and the demand for spare parts. The improved planning reliability can support MRO service providers in shortening delivery times and reducing stock levels in order to enhance the performance of their maintenance logistics.

  13. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena Ruiz, Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.

  14. Gearbox Reliability Collaborative Gearbox 3 Planet Bearing Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Jonathan

    2017-03-24

    The Gearbox Reliability Collaborative gearbox was redesigned to improve its load-sharing characteristics and predicted fatigue life. The most important aspect of the redesign was to replace the cylindrical roller bearings with preloaded tapered roller bearings in the planetary section. Similar to previous work, the strain gages installed on the planet tapered roller bearings were calibrated in a load frame. This report describes the calibration tests and provides the factors necessary to convert the measured units from dynamometer testing to bearing loads, suitable for comparison to engineering models.

  15. An improved probit method for assessment of domino effect to chemical process equipment caused by overpressure.

    PubMed

    Mingguang, Zhang; Juncheng, Jiang

    2008-10-30

    Overpressure is an important cause of domino effects in chemical process equipment accidents. Damage probability and the related threshold value are two necessary parameters in quantitative risk assessment (QRA) of this phenomenon. Simple models had previously been proposed based on scarce data or oversimplified assumptions. Here, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and degree of equipment damage was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements of the present models were demonstrated through comparison with other models in the literature, considering consistency between models and data and the depth of quantitative detail supported in QRA.
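
    A probit model of this kind maps a dose (here, peak overpressure) to a damage probability through the normal CDF. The sketch below shows the standard QRA form Pr = a + b ln(P) with P(damage) = Phi(Pr - 5); the coefficients are illustrative placeholders, not the paper's fitted values for any equipment category.

    ```python
    import math

    def normal_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def damage_probability(overpressure_pa, a=-18.96, b=2.44):
        """QRA-style probit: Pr = a + b*ln(P), damage probability =
        Phi(Pr - 5). Coefficients here are illustrative placeholders,
        not fitted values for any equipment category."""
        pr = a + b * math.log(overpressure_pa)
        return normal_cdf(pr - 5.0)

    for p_kpa in (10, 20, 30, 50, 100):
        p = damage_probability(p_kpa * 1e3)
        print(f"{p_kpa:4d} kPa -> P(damage) = {p:.3f}")
    # The threshold value corresponds to the overpressure at which the
    # probit curve rises above a chosen damage probability.
    ```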

  16. Research on Storm-Tide Disaster Losses in China Using a New Grey Relational Analysis Model with the Dispersion of Panel Data

    PubMed Central

    Yin, Kedong; Zhang, Ya; Li, Xuemei

    2017-01-01

    Owing to the difference of the sequences’ orders and the surface structure in the current panel grey relational models, research results will not be unique. In addition, individual measurement of indicators and objects and the subjectivity of combined weight would significantly weaken the effective information of panel data and reduce the reliability and accuracy of research results. Therefore, we propose the concept and calculation method of dispersion of panel data, establish the grey relational model based on dispersion of panel data (DPGRA), and prove that DPGRA exhibits the effective properties of uniqueness, symmetry, and normality. To demonstrate its applicability, the proposed DPGRA model is used to research on storm-tide disaster losses in China’s coastal areas. Comparing research results of three models, which are DPGRA, Euclidean distance grey relational model, and grey grid relational model, it was shown that DPGRA is more effective, feasible, and stable. It is indicated that DPGRA can entirely utilize the effective information of panel data; what’s more, it can not only handle the non-uniqueness of the grey relational model’s results but also improve the reliability and accuracy of research results. The research results are of great significance for coastal areas to focus on monitoring storm–tide disasters hazards, strengthen the protection measures of natural disasters, and improve the ability of disaster prevention and reduction. PMID:29104262

  17. An exploration of multilevel modeling for estimating access to drinking-water and sanitation.

    PubMed

    Wolf, Jennyfer; Bonjour, Sophie; Prüss-Ustün, Annette

    2013-03-01

    Monitoring progress towards the targets for access to safe drinking-water and sanitation under the Millennium Development Goals (MDG) requires reliable estimates and indicators. We analyzed trends and reviewed current indicators used for those targets. We developed continuous time series for 1990 to 2015 for access to improved drinking-water sources and improved sanitation facilities by country using multilevel modeling (MLM). We show that MLM is a reliable and transparent tool with many advantages over alternative approaches to estimate access to facilities. Using current indicators, the MDG target for water would be met, but the target for sanitation missed considerably. The number of people without access to such services is still increasing in certain regions. Striking differences persist between urban and rural areas. Consideration of water quality and different classification of shared sanitation facilities would, however, alter estimates considerably. To achieve improved monitoring we propose: (1) considering the use of MLM as an alternative for estimating access to safe drinking-water and sanitation; (2) completing regular assessments of water quality and supporting the development of national regulatory frameworks as part of capacity development; (3) evaluating health impacts of shared sanitation; (4) using a more equitable presentation of countries' performances in providing improved services.

  18. Hydrological modelling improvements required in basins in the Hindukush-Karakoram-Himalayas region

    NASA Astrophysics Data System (ADS)

    Khan, Asif; Richards, Keith S.; McRobie, Allan; Booij, Martijn

    2016-04-01

    Millions of people rely on river water originating from basins in the Hindukush-Karakoram-Himalayas (HKH), where snow- and ice-melt are significant flow components. One such basin is the Upper Indus Basin (UIB), where snow- and ice-melt can contribute more than 80% of total flow. Containing some of the world's largest alpine glaciers, this basin may be highly susceptible to global warming and climate change, and reliable predictions of future water availability are vital for resource planning for downstream food and energy needs in a changing climate, but depend on significantly improved hydrological modelling. However, a critical assessment of available hydro-climatic data and hydrological modelling in the HKH region has identified five major failings in many published hydro-climatic studies, even those appearing in reputable international journals. The main weaknesses of these studies are: i) incorrect basin areas; ii) under-estimated precipitation; iii) incorrectly-defined glacier boundaries; iv) under-estimated snow-cover data; and v) use of biased melt factors for snow and ice during the summer months. This paper illustrates these limitations, which have either resulted in modelled flows being under-estimates of measured flows, leading to an implied severe water scarcity; or have led to the use of unrealistically high degree-day factors and over-estimates of glacier melt contributions, implying unrealistic melt rates. These effects vary amongst sub-basins. Forecasts obtained from these models cannot be used reliably in policy making or water resource development, and need revision. Detailed critical analysis and improvement of existing hydrological modelling may be equally necessary in other mountain regions across the world.

  19. Product component genealogy modeling and field-failure prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Caleb; Hong, Yili; Meeker, William Q.

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.

  20. Assessing the applicability of template-based protein docking in the twilight zone.

    PubMed

    Negroni, Jacopo; Mosca, Roberto; Aloy, Patrick

    2014-09-02

    The structural modeling of protein interactions in the absence of close homologous templates is a challenging task. Recently, template-based docking methods have emerged to exploit local structural similarities to help ab-initio protocols provide reliable 3D models for protein interactions. In this work, we critically assess the performance of template-based docking in the twilight zone. Our results show that, while it is possible to find templates for nearly all known interactions, the quality of the obtained models is rather limited. We can increase the precision of the models at the expense of coverage, but this drastically reduces the potential applicability of the method, as illustrated by the whole-interactome modeling of nine organisms. Template-based docking is likely to play an important role in the structural characterization of the interaction space, but we still need to improve the repertoire of structural templates onto which we can reliably model protein complexes.

  1. Product component genealogy modeling and field-failure prediction

    DOE PAGES

    King, Caleb; Hong, Yili; Meeker, William Q.

    2016-04-13

    Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
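
    A minimal numeric version of the generational idea in the two records above, assuming Weibull lifetimes whose characteristic life depends on the component generation and a known production mix; all parameters are hypothetical. Ignoring the generation mix ("naive" below) biases the fielded-failure prediction.

    ```python
    import numpy as np

    # Hypothetical Weibull lifetime parameters per component generation.
    # A design change between generations shifts the characteristic life.
    shape = 1.8
    scale_by_gen = {"gen1": 4.0e4, "gen2": 6.5e4}   # hours, illustrative

    def frac_failed(scale, t):
        """Weibull CDF: expected fraction failed by time t."""
        return 1.0 - np.exp(-(t / scale) ** shape)

    # Production mix: share of fielded units built with each generation.
    mix = {"gen1": 0.35, "gen2": 0.65}
    t_horizon = 2.0e4                                # warranty horizon, hours

    pooled = sum(mix[g] * frac_failed(scale_by_gen[g], t_horizon)
                 for g in mix)
    naive = frac_failed(np.mean(list(scale_by_gen.values())), t_horizon)
    print(f"generation-aware prediction: {pooled:.4f}")
    print(f"single-population (naive)  : {naive:.4f}")
    ```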

  2. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.
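
    ADMS itself is proprietary, so the sketch below runs the same kind of one-at-a-time sensitivity screen on a textbook Gaussian plume (ground-level centreline concentration with full ground reflection) as a stand-in; the source strength, wind speed, dispersion parameters and release height are illustrative.

    ```python
    import math

    def plume_conc(q, u, sigma_y, sigma_z, h):
        """Ground-level centreline concentration of a textbook Gaussian
        plume with full ground reflection (a stand-in for ADMS, whose
        internals are proprietary). q in units/s, u in m/s, sigmas and
        release height h in m."""
        return (q / (math.pi * u * sigma_y * sigma_z)
                ) * math.exp(-h**2 / (2.0 * sigma_z**2))

    base = dict(q=1.0e6, u=3.0, sigma_y=40.0, sigma_z=20.0, h=2.0)
    c0 = plume_conc(**base)

    # One-at-a-time sensitivity: perturb each input by +10% and report
    # the relative change in predicted exposure.
    for name in base:
        bumped = dict(base, **{name: base[name] * 1.10})
        dc = 100.0 * (plume_conc(**bumped) - c0) / c0
        print(f"{name:8s} +10% -> {dc:+6.1f} %")
    ```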

  3. Measuring Work Environment and Performance in Nursing Homes

    PubMed Central

    Temkin-Greener, Helena; Zheng, Nan (Tracy); Katz, Paul; Zhao, Hongwei; Mukamel, Dana B.

    2008-01-01

    Background Qualitative studies of the nursing home work environment have long suggested that such attributes as leadership and communication may be related to nursing home performance, including residents' outcomes. However, empirical studies examining these relationships have been scant. Objectives This study is designed to: develop an instrument for measuring nursing home work environment and perceived work effectiveness; test the reliability and validity of the instrument; and identify individual and facility-level factors associated with better facility performance. Research Design and Methods The analysis was based on survey responses provided by managers (N=308) and direct care workers (N=7,418) employed in 162 facilities throughout New York State. Exploratory factor analysis, Cronbach's alphas, analysis of variance, and regression models were used to assess instrument reliability and validity. Multivariate regression models, with fixed facility effects, were used to examine factors associated with work effectiveness. Results The reliability and the validity of the survey instrument for measuring work environment and perceived work effectiveness have been demonstrated. Several individual (e.g. occupation, race) and facility characteristics (e.g. management style, workplace conditions, staffing) that are significant predictors of perceived work effectiveness were identified. Conclusions The organizational performance model used in this study recognizes the multidimensionality of the work environment in nursing homes. Our findings suggest that efforts at improving work effectiveness must also be multifaceted. Empirical findings from such a line of research may provide insights for improving the quality of the work environment and ultimately the quality of residents' care. PMID:19330892
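
    The internal-consistency reliability reported in studies like this is typically Cronbach's alpha; a short sketch of the computation, on a toy Likert-score matrix, follows.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

    # Toy survey: 6 respondents x 4 Likert items (values illustrative).
    scores = np.array([[4, 5, 4, 4],
                       [2, 2, 3, 2],
                       [5, 5, 5, 4],
                       [3, 3, 2, 3],
                       [4, 4, 4, 5],
                       [1, 2, 1, 2]])
    print(f"alpha = {cronbach_alpha(scores):.3f}")
    ```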

  4. Acquisition Community Team Dynamics: The Tuckman Model vs. the DAU Model

    DTIC Science & Technology

    2007-04-30

    courses. These student teams are used to enable the generation of more complex products and to prepare the students for the ...requirement for stage discreteness was met, I developed a stage-separation test that, when applied to the data representing the experience of a... test the reliability, and validate an improved questionnaire instrument that: – Redefines “Storming” with new storming questions Less focused

  5. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information

    PubMed Central

    Wang, Xiaohong; Wang, Lizhi

    2017-01-01

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930

  6. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    PubMed

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.
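
    The combination of accidental and degradation failure mechanisms can be illustrated, far more simply than the paper's Bayesian network, as independent competing risks: a constant-hazard (exponential) accidental mode multiplied by a Weibull degradation mode. All parameters below are hypothetical.

    ```python
    import numpy as np

    def system_reliability(t, lam=2e-5, beta=3.0, eta=5.0e4):
        """Competing-risks survival: an accidental failure mode with
        constant hazard `lam` (per hour) and a degradation mode with
        Weibull shape/scale (beta, eta). Parameters are illustrative,
        not the paper's."""
        r_accidental = np.exp(-lam * t)
        r_degradation = np.exp(-(t / eta) ** beta)
        return r_accidental * r_degradation

    t = np.linspace(0.0, 8.0e4, 5)
    for ti, ri in zip(t, system_reliability(t)):
        print(f"t = {ti:8.0f} h   R = {ri:.4f}")
    # In a BN, each mechanism (and its sensors' evidence) would be a
    # node; this product is the special case of independent mechanisms.
    ```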

  7. Photovoltaic Module Reliability Workshop 2012: February 28 - March 1, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2013-11-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  8. Sonar Transducer Reliability Improvement Program (STRIP) FY80.

    DTIC Science & Technology

    1980-07-01

    heating element powered by a temperature controller (YSI model 74) with a series 400 thermistor probe. Figure 3.1 shows the data and average curves...ATTACHMENT METHODS General Welded Receptacles Threaded or Bolted Receptacles Elastomeric Bonded Receptacles SECTION 12 - CABLE HARNESS TEST

  9. Improved Reliability Models for Mechanical and Electrical Components at Navigation Lock and Dam and Flood Risk Management Facilities

    DTIC Science & Technology

    2013-04-01

  10. Proactive assessment of accident risk to improve safety on a system of freeways : [research brief].

    DOT National Transportation Integrated Search

    2012-05-01

    As traffic safety on freeways continues to be a growing concern, much progress has been made in shifting from reactive (incident detection) to proactive (real-time crash risk assessment) traffic strategies. Reliable models that can take in real-time ...

  11. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
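
    Of the probability tools named above, the first-order second-moment idea is the easiest to show compactly: propagate input means and standard deviations through a response function via its gradient. The sketch below is a generic scalar-response FOSM with finite-difference gradients and an invented response function, not the multi-response formulation of the dissertation.

    ```python
    import numpy as np

    def fosm(g, mu, sigma, eps=1e-6):
        """First Order Second Moment: approximate the mean and standard
        deviation of response g(x) from input means `mu` and standard
        deviations `sigma`, using central finite differences for the
        gradient. Inputs are treated as independent."""
        mu = np.asarray(mu, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        grad = np.empty_like(mu)
        for i in range(mu.size):
            d = np.zeros_like(mu)
            d[i] = eps * max(1.0, abs(mu[i]))
            grad[i] = (g(mu + d) - g(mu - d)) / (2.0 * d[i])
        return g(mu), float(np.sqrt(np.sum((grad * sigma) ** 2)))

    # Toy "component response": a stress-like quantity driven by speed
    # and a temperature-dependent modulus (purely illustrative).
    g = lambda x: 0.5 * x[0] ** 2 / x[1]
    mean, std = fosm(g, mu=[100.0, 50.0], sigma=[5.0, 2.0])
    print(f"response mean ~ {mean:.1f}, std ~ {std:.2f}")
    ```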

  12. Improving the XAJ Model on the Basis of Mass-Energy Balance

    NASA Astrophysics Data System (ADS)

    Fang, Yuanhao; Corbari, Chiara; Zhang, Xingnan; Mancini, Marco

    2014-11-01

    The Xin'anjiang (XAJ) model is a conceptual hydrological model developed by the group led by Prof. Ren-Jun Zhao. It takes pan evaporation as one of its inputs and computes the effective evapotranspiration (ET) of the catchment by mass balance. This scheme ensures good discharge simulation but has obvious defects, one of which is that the effective ET is spatially constant over the computation unit, neglecting the spatial variation of the variables that influence it; as a result, the XAJ model's simulation of ET and soil moisture (SM) is less reliable than its simulation of discharge. In this study, the XAJ model was improved to employ both energy and mass balance to compute ET, following the energy-mass balance scheme of the FEST-EWB model.

  13. Evaluation of a primary care adult mental health service: Year 2

    PubMed Central

    2013-01-01

    Aims This study aimed to examine the effectiveness of a primary care adult mental health service operating within a stepped care model of service delivery. Methods Supervised by a principal psychologist manager, psychology graduate practitioners provided one-to-one brief cognitive behavioural therapy (CBT) to service users. The Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM) was used to assess service user treatment outcomes. Satisfaction questionnaires were administered to service users and referring general practitioners (GPs). Results A total of 43 individuals attended for an initial appointment, of whom 19 (44.2%) completed brief CBT treatment. Of the 13 service users who were in the clinical range pre-treatment, 11 (84.6%) achieved clinically and reliably significant improvement. Of the six service users who were in the non-clinical range pre-treatment, three (50%) achieved reliably significant improvement. Both service users and GPs indicated high levels of satisfaction with the service, although service accessibility was highlighted as needing improvement. Conclusion The service was effective in treating mild to moderate mental health problems in primary care. Stricter adherence to a stepped care model through the provision of low-intensity, high-throughput interventions would be desirable for future service provision. PMID:24381655

  14. Strategy for continuous improvement in IC manufacturability, yield, and reliability

    NASA Astrophysics Data System (ADS)

    Dreier, Dean J.; Berry, Mark; Schani, Phil; Phillips, Michael; Steinberg, Joe; DePinto, Gary

    1993-01-01

    Continual improvements in yield, reliability, and manufacturability are the measure of a fab and ultimately result in Total Customer Satisfaction. A new organizational and technical methodology for continuous defect reduction has been established in a formal feedback loop, which relies on yield and reliability data, failed bit map analysis, analytical tools, inline monitoring, cross-functional teams, and a defect engineering group. The strategy requires the fastest possible detection, identification, and implementation of corrective actions. Feedback cycle time is minimized at all points to improve yield and reliability and reduce costs, essential for competitiveness in the memory business. The payoff was a 9.4X reduction in defectivity and a 6.2X improvement in reliability of 256 K fast SRAMs over 20 months.

  15. Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.

    PubMed

    Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D

    2013-09-30

    Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance of differences among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; and 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the estimates most similar to RUSLE2, providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas similar to those of the RUSLE2 model. Overall, SWAT provided the most correct estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest caution should be exercised when using watershed-scale models for field-level decision-making, and field-specific data are of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of the software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems to be analyzed must be extended beyond the electronic digital processors the tools were originally intended to serve. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of redundancy level.
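
    SURE and ASSIST are NASA-specific tools with their own model-specification language, so the following is only a language-agnostic sketch of why the coverage parameter dominates: a dual-redundant Markov model in Python (failure rate and mission time are hypothetical) whose unreliability is evaluated for several coverage values:

    ```python
    import numpy as np

    def unreliability(coverage, lam=1e-4, t_end=10.0, dt=1e-3):
        """Dual-redundant system, per-unit failure rate lam (1/h).
        States: 0 = both units up, 1 = one up (failure covered), 2 = system failed.
        A unit failure in state 0 is detected and handled w.p. `coverage`."""
        p = np.array([1.0, 0.0, 0.0])
        # CTMC generator matrix Q (rows sum to zero)
        Q = np.array([
            [-2 * lam, 2 * lam * coverage, 2 * lam * (1 - coverage)],
            [0.0,     -lam,               lam],
            [0.0,      0.0,               0.0],
        ])
        for _ in range(int(t_end / dt)):   # explicit Euler on dp/dt = p Q
            p = p + dt * (p @ Q)
        return p[2]                        # probability of the failed state

    for c in (0.90, 0.99, 0.999, 1.0):     # unreliability ~ linear in (1 - c)
        print(f"coverage={c:6.3f}  unreliability={unreliability(c):.3e}")
    ```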

  17. A validation of the construct and reliability of an emotional intelligence scale applied to nursing students

    PubMed Central

    Espinoza-Venegas, Maritza; Sanhueza-Alvarado, Olivia; Ramírez-Elizondo, Noé; Sáez-Carrillo, Katia

    2015-01-01

    OBJECTIVE: The current study aimed to validate the construct and reliability of an emotional intelligence scale. METHOD: The Trait Meta-Mood Scale-24 was applied to 349 nursing students. The process included content validation, which involved expert reviews, pilot testing, measurements of reliability using Cronbach's alpha, and factor analysis to corroborate the validity of the theoretical model's construct. RESULTS: Adequate Cronbach coefficients were obtained for all three dimensions, and factor analysis confirmed the scale's dimensions (perception, comprehension, and regulation). CONCLUSION: The Trait Meta-Mood Scale is a reliable and valid tool to measure the emotional intelligence of nursing students. Its use allows for accurate determinations of individuals' abilities to interpret and manage emotions. At the same time, this new construct is of potential importance for measurements in nursing leadership; educational, organizational, and personal improvements; and the establishment of effective relationships with patients. PMID:25806642
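
    Cronbach's alpha itself is a short computation. A minimal sketch on synthetic Likert-type responses (the TMMS-24 data are not reproduced here; the simulated 8-item dimension and all parameters are hypothetical):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
        return k / (k - 1) * (1.0 - item_vars / total_var)

    # Synthetic 5-point Likert responses for one 8-item dimension (hypothetical)
    rng = np.random.default_rng(42)
    trait = rng.normal(size=(349, 1))                  # shared latent trait
    scores = np.clip(np.round(3 + trait + rng.normal(scale=0.8, size=(349, 8))), 1, 5)
    print(f"alpha = {cronbach_alpha(scores):.3f}")
    ```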

  18. Postmortem time estimation using body temperature and a finite-element computer model.

    PubMed

    den Hartog, Emiel A; Lotens, Wouter A

    2004-09-01

    In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently reliable estimate of the PMT. In both cases a study was performed to verify the statements of suspects. For this purpose a finite-element computer model was developed that simulates a human torso and its clothing. With this model, changes to the body and the environment can also be modelled; this was very relevant in one of the cases, as the body had been in the presence of a small fire. In both cases it was possible to falsify the statements of the suspects by improving the accuracy of the PMT estimate. The estimated PMT in both cases was within the range of Henssge's model. The standard deviation of the PMT estimate was 35 min in the first case and 45 min in the second case, compared to 168 min (2.8 h) in Henssge's model. In conclusion, the model as presented here can have additional value for improving the accuracy of the PMT estimate. In contrast to the simple model of Henssge, the current model allows for increased accuracy when more detailed information is available. Moreover, the sensitivity of the predicted PMT for uncertainty in the circumstances can be studied, which is crucial to the confidence of the judge in the results.
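
    The paper's finite-element model is not publicly available, but the underlying idea of inverting a body-cooling curve for time can be illustrated with a single-exponential Newton-cooling sketch. Forensic practice typically uses Henssge's double-exponential model; every constant below is hypothetical:

    ```python
    import numpy as np

    T_AMBIENT = 18.0   # ambient temperature, deg C (assumed constant)
    T_BODY0   = 37.2   # core temperature at death, deg C (assumed)
    K_COOL    = 0.08   # cooling constant per hour (hypothetical; depends on
                       # clothing, body size, air movement, ...)

    def pmt_estimate(t_measured, k=K_COOL):
        """Invert Newton cooling T(t) = Ta + (T0 - Ta) * exp(-k t) for t."""
        return -np.log((t_measured - T_AMBIENT) / (T_BODY0 - T_AMBIENT)) / k

    t_rectal = 28.5    # measured core temperature, deg C (hypothetical)
    print(f"estimated PMT = {pmt_estimate(t_rectal):.1f} h")

    # A quick sensitivity check, in the spirit of the paper's uncertainty analysis:
    for k in (0.07, 0.08, 0.09):
        print(f"k={k:.2f} -> PMT = {pmt_estimate(t_rectal, k):.1f} h")
    ```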

  19. Systematic study of 16O-induced fusion with the improved quantum molecular dynamics model

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Zhao, Kai; Li, Zhuxia

    2014-11-01

    Heavy-ion fusion reactions of 16O bombarding 62Ni, 65Cu, 74Ge, 148Nd, 180Hf, 186W, 208Pb, and 238U are systematically investigated with the improved quantum molecular dynamics model. The fusion cross sections at energies near and above the Coulomb barriers can be reasonably well reproduced by using this semiclassical microscopic transport model with the parameter sets SkP* and IQ3a. The dynamical nucleus-nucleus potentials and the influence of the Fermi constraint on the fusion process are also studied. In addition to the mean field, the Fermi constraint plays a key role in the reliable description of the fusion process and in improving the stability of fragments in heavy-ion collisions.

  1. The impact of symptom stability on time frame and recall reliability in CFS.

    PubMed

    Evans, Meredyth; Jason, Leonard A

    This study investigates the potential impact of perceived symptom stability on the recall reliability of symptom severity and frequency as reported by individuals with chronic fatigue syndrome (CFS). Symptoms were recalled using three different recall timeframes (the past week, the past month, and the past six months) at two assessment points, with one week between assessments. Participants were 51 adults (45 women and 6 men), between the ages of 29 and 66, with a current diagnosis of CFS. Multilevel model (MLM) analyses were used to determine the optimal recall timeframe (in terms of test-retest reliability) for reporting symptoms perceived as variable and as stable over time. Headaches were recalled more reliably when they were reported as stable over time. Furthermore, the optimal timeframe for stable symptoms was highly uniform, such that all Fukuda-defined CFS symptoms were more reliably recalled at the six-month timeframe. In contrast, the optimal timeframe for CFS symptoms perceived as variable differed across symptoms. Symptom stability and recall timeframe are important to consider in order to improve the accuracy and reliability of the current methods for diagnosing this illness.

  2. Research on navigation of satellite constellation based on an asynchronous observation model using X-ray pulsar

    NASA Astrophysics Data System (ADS)

    Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju

    2018-02-01

    Pulsar navigation is a promising navigation method for high-altitude orbit space tasks and deep space exploration. At present, an important factor restricting the development of pulsar navigation is that navigation accuracy is limited by the slow update rate of the measurements. In order to improve the accuracy of pulsar navigation, an asynchronous observation model that increases the measurement update rate is proposed on the basis of a satellite constellation, which has broad room for development because of its visibility and reliability. The simulation results show that the asynchronous observation model improves positioning accuracy by 31.48% and velocity accuracy by 24.75% relative to the synchronous observation model. With the new Doppler effect compensation method in the asynchronous observation model proposed in this paper, positioning accuracy is improved by 32.27% and velocity accuracy by 34.07% compared with the traditional method. The simulation results also show that neglecting the clock error results in filter divergence.

  3. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  4. Methodology to Improve Design of Accelerated Life Tests in Civil Engineering Projects

    PubMed Central

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing, an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. These energies are inputs to the planning of an accelerated life test, a Multi Environment Over Stress Test. The test objective is to discover weak links and interactions among the system and the energies to which it is exposed, and design them out. As an example, the methods are applied to pipe in a subsea pipeline; however, they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods. PMID:25111800

  5. Stress Intensity of Delamination in a Sintered-Silver Interconnection: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoto, D. J.; Paret, P. P.; Wereszczak, A. A.

    2014-08-01

    In automotive power electronics packages, conventional thermal interface materials such as greases, gels, and phase-change materials pose bottlenecks to heat removal and are also associated with reliability concerns. The industry trend is toward high-thermal-performance bonded interfaces for large-area attachments. However, because of coefficient of thermal expansion mismatches between materials/layers and the resultant thermomechanical stresses, adhesive and cohesive fractures could occur, posing a reliability problem. These defects manifest themselves in increased thermal resistance. This research aims to investigate and improve the thermal performance and reliability of sintered-silver for power electronics packaging applications. This has been experimentally accomplished by the synthesis of large-area bonded interfaces between metalized substrates and copper base plates that have subsequently been subjected to thermal cycles. A finite element model of crack initiation and propagation in these bonded interfaces will allow for the interpretation of degradation rates by a crack-velocity (V)-stress intensity factor (K) analysis. The experiment and the modeling approach are described.

  6. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.
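
    The paper does not list its exact BC methods, but a common simple variant is empirical quantile mapping, which replaces each raw forecast value with the observed-climatology value at the same quantile of the model's own climatology. A minimal sketch on synthetic precipitation data (all distributions hypothetical):

    ```python
    import numpy as np

    def quantile_map(forecast, hindcast, observations):
        """Empirical quantile mapping: locate each forecast value's quantile in
        the model's hindcast climatology, then read off the same quantile from
        the observed climatology."""
        hindcast = np.sort(np.asarray(hindcast))
        observations = np.sort(np.asarray(observations))
        q = np.searchsorted(hindcast, forecast) / len(hindcast)  # empirical CDF
        return np.quantile(observations, np.clip(q, 0.0, 1.0))

    # Synthetic example: the model rains too lightly on average (hypothetical)
    rng = np.random.default_rng(1)
    obs = rng.gamma(shape=2.0, scale=5.0, size=600)    # observed precip, mm
    hind = rng.gamma(shape=2.0, scale=3.0, size=600)   # biased model climatology
    raw_fcst = rng.gamma(shape=2.0, scale=3.0, size=10)
    print("raw      :", np.round(raw_fcst, 1))
    print("corrected:", np.round(quantile_map(raw_fcst, hind, obs), 1))
    ```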

  7. Measuring nursing competencies in the operating theatre: instrument development and psychometric analysis using Item Response Theory.

    PubMed

    Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha

    2013-09-01

    The process of identifying the underlying competencies that contribute to effective nursing performance has been debated, with a lack of consensus on an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales, and to examine the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among item response models, was used in the calibration of each item in the rubric to improve the measurement properties of the scale; this is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for construct and criterion validity of the rubric, were achieved. Calibration of the Performance Based Scoring Rubric using the Rasch model revealed that the fit statistics for most items were acceptable. The use of the Rasch model offers a number of features in developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre, with implications for use in other areas of nursing practice. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
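
    The dichotomous Rasch model itself is compact: the probability that a person of ability theta succeeds on an item of difficulty b is logistic in (theta - b). As a simplified sketch (the study's rubric is polytomous; the item difficulties and responses below are hypothetical), here is a Newton-Raphson maximum-likelihood ability estimate for one nurse:

    ```python
    import numpy as np

    def rasch_p(theta, b):
        """P(success) for ability theta on items of difficulty b."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def estimate_ability(responses, b, iters=20):
        """ML ability for one person via Newton-Raphson (non-extreme scores)."""
        theta = 0.0
        for _ in range(iters):
            p = rasch_p(theta, b)
            score = np.sum(responses - p)     # d logL / d theta
            info = np.sum(p * (1.0 - p))      # Fisher information
            theta += score / info
        p = rasch_p(theta, b)
        return theta, 1.0 / np.sqrt(np.sum(p * (1.0 - p)))   # estimate, SE

    # Hypothetical dichotomously scored rubric items, easiest to hardest
    b = np.array([-1.5, -0.8, -0.2, 0.3, 0.9, 1.6])
    responses = np.array([1, 1, 1, 1, 0, 0])  # one instrument nurse
    theta, se = estimate_ability(responses, b)
    print(f"ability = {theta:.2f} logits (SE = {se:.2f})")
    ```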

  8. EUV Irradiance Inputs to Thermospheric Density Models: Open Issues and Path Forward

    NASA Astrophysics Data System (ADS)

    Vourlidas, A.; Bruinsma, S.

    2018-01-01

    One of the objectives of the NASA Living With a Star Institute on "Nowcasting of Atmospheric Drag for low Earth orbit (LEO) Spacecraft" was to investigate whether and how to increase the accuracy of atmospheric drag models by improving the quality of the solar forcing inputs, namely, extreme ultraviolet (EUV) irradiance information. In this focused review, we examine the status of and issues with EUV measurements and proxies, discuss recent promising developments, and suggest a number of ways to improve the reliability, availability, and forecast accuracy of EUV measurements in the next solar cycle.

  9. Thermal modelling using discrete vasculature for thermal therapy: a review

    PubMed Central

    Kok, H.P.; Gellermann, J.; van den Berg, C.A.T.; Stauffer, P.R.; Hand, J.W.; Crezee, J.

    2013-01-01

    Reliable temperature information during clinical hyperthermia and thermal ablation is essential for adequate treatment control, but conventional temperature measurements do not provide 3D temperature information. Treatment planning is a very useful tool to improve treatment quality and substantial progress has been made over the last decade. Thermal modelling is a very important and challenging aspect of hyperthermia treatment planning. Various thermal models have been developed for this purpose, with varying complexity. Since blood perfusion is such an important factor in thermal redistribution of energy in in vivo tissue, thermal simulations are most accurately performed by modelling discrete vasculature. This review describes the progress in thermal modelling with discrete vasculature for the purpose of hyperthermia treatment planning and thermal ablation. There has been significant progress in thermal modelling with discrete vasculature. Recent developments have made real-time simulations possible, which can provide feedback during treatment for improved therapy. Future clinical application of thermal modelling with discrete vasculature in hyperthermia treatment planning is expected to further improve treatment quality. PMID:23738700

  10. Improving Factor Score Estimation Through the Use of Observed Background Characteristics

    PubMed Central

    Curran, Patrick J.; Cole, Veronica; Bauer, Daniel J.; Hussong, Andrea M.; Gottfredson, Nisha

    2016-01-01

    A challenge facing nearly all studies in the psychological sciences is how to best combine multiple items into a valid and reliable score to be used in subsequent modelling. The most ubiquitous method is to compute a mean of items, but more contemporary approaches use various forms of latent score estimation. Regardless of approach, outside of large-scale testing applications, scoring models rarely include background characteristics to improve score quality. The current paper used a Monte Carlo simulation design to study score quality for different psychometric models that did and did not include covariates across levels of sample size, number of items, and degree of measurement invariance. The inclusion of covariates improved score quality for nearly all design factors, and in no case did the covariates degrade score quality relative to not considering the influences at all. Results suggest that the inclusion of observed covariates can improve factor score estimation. PMID:28757790

  11. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  12. Assessment of an ensemble seasonal streamflow forecasting system for Australia

    NASA Astrophysics Data System (ADS)

    Bennett, James C.; Wang, Quan J.; Robertson, David E.; Schepen, Andrew; Li, Ming; Michael, Kelvin

    2017-11-01

    Despite an increasing availability of skilful long-range streamflow forecasts, many water agencies still rely on simple resampled historical inflow sequences (stochastic scenarios) to plan operations over the coming year. We assess a recently developed forecasting system called forecast guided stochastic scenarios (FoGSS) as a skilful alternative to standard stochastic scenarios for the Australian continent. FoGSS uses climate forecasts from a coupled ocean-land-atmosphere prediction system, post-processed with the method of calibration, bridging and merging. Ensemble rainfall forecasts force a monthly rainfall-runoff model, while a staged hydrological error model quantifies and propagates hydrological forecast uncertainty through forecast lead times. FoGSS is able to generate ensemble streamflow forecasts in the form of monthly time series to a 12-month forecast horizon. FoGSS is tested on 63 Australian catchments that cover a wide range of climates, including 21 ephemeral rivers. In all perennial and many ephemeral catchments, FoGSS provides an effective alternative to resampled historical inflow sequences. FoGSS generally produces skilful forecasts at shorter lead times (< 4 months), and transitions to climatology-like forecasts at longer lead times. Forecasts are generally reliable and unbiased. However, FoGSS does not perform well in very dry catchments (catchments that experience zero flows more than half the time in some months), sometimes producing strongly negative forecast skill and poor reliability. We attempt to improve forecasts through the use of (i) ESP rainfall forcings, (ii) different rainfall-runoff models, and (iii) a Bayesian prior to encourage the error model to return climatology forecasts in months when the rainfall-runoff model performs poorly. Of these, the use of the prior offers the clearest benefit in very dry catchments, where it moderates strongly negative forecast skill and reduces bias in some instances. However, the prior does not remedy poor reliability in very dry catchments. Overall, FoGSS is an attractive alternative to historical inflow sequences in all but the driest catchments. We discuss ways in which forecast reliability in very dry catchments could be improved in future work.
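
    Forecast quality for such ensembles is commonly summarized with the continuous ranked probability score (CRPS) and a skill score against a climatological reference. The paper's scores are not reproduced here; this is a generic sketch of the standard sample-based CRPS estimator on synthetic data:

    ```python
    import numpy as np

    def crps_ensemble(members, obs):
        """Sample CRPS for one forecast: E|X - y| - 0.5 * E|X - X'|.
        Lower is better; reduces to absolute error for a single member."""
        members = np.asarray(members, dtype=float)
        term1 = np.mean(np.abs(members - obs))
        term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
        return term1 - term2

    # Hypothetical monthly streamflow: skilful ensemble vs climatology resample
    rng = np.random.default_rng(7)
    obs = 42.0
    fogss_like = rng.normal(40.0, 6.0, size=100)    # sharper, nearly unbiased
    climatology = rng.normal(35.0, 15.0, size=100)  # wide historical resample
    skill = 1.0 - crps_ensemble(fogss_like, obs) / crps_ensemble(climatology, obs)
    print(f"CRPS skill score vs climatology = {skill:.2f}")  # > 0: more skilful
    ```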

  13. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  14. Advanced Stirling Convertor Heater Head Durability and Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Krause, David L.; Shah, Ashwin R.; Korovaichuk, Igor; Kalluri, Sreeramesh

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for long duration Science missions, such as lunar applications, Mars rovers, and deep space missions, that require reliable design lifetimes of up to 17 years. Resistance to creep deformation of the MarM-247 heater head (HH), a structurally critical component of the ASRG Advanced Stirling Convertor (ASC), under high temperatures (up to 850 C) is a key design driver for durability. Inherent uncertainties in the creep behavior of the thin-walled HH and the variations in the wall thickness, control temperature, and working gas pressure need to be accounted for in the life and reliability prediction. Due to the availability of very limited test data, assuring life and reliability of the HH is a challenging task. The NASA Glenn Research Center (GRC) has adopted an integrated approach combining available uniaxial MarM-247 material behavior testing, HH benchmark testing and advanced analysis in order to demonstrate the integrity, life and reliability of the HH under expected mission conditions. The proposed paper describes analytical aspects of the deterministic and probabilistic approaches and results. The deterministic approach involves development of the creep constitutive model for the MarM-247 (akin to the Oak Ridge National Laboratory master curve model used previously for Inconel 718 (Special Metals Corporation)) and nonlinear finite element analysis to predict the mean life. The probabilistic approach includes evaluation of the effect of design variable uncertainties in material creep behavior, geometry and operating conditions on life and reliability for the expected life. The sensitivity of the uncertainties in the design variables on the HH reliability is also quantified, and guidelines to improve reliability are discussed.

  15. Method for evaluating the reliability of compressor impeller of turbocharger for vehicle application in plateau area

    NASA Astrophysics Data System (ADS)

    Wang, Zheng; Wang, Zengquan; Wang, A.-na; Zhuang, Li; Wang, Jinwei

    2016-10-01

    As turbocharged diesel engines for vehicle applications are increasingly used in high-altitude (plateau) regions, the environmental adaptability of engines has drawn more attention. Existing studies of this adaptability problem focus almost exclusively on optimizing the performance match between the turbocharger and the engine, while the reliability of the turbocharger itself is largely ignored. This paper studies the reliability of the turbocharger compressor impeller when diesel engines operate at high altitude. First, the relationship between turbocharger rotational speed and altitude is presented, and the potential failure modes of the compressor impeller are analyzed. Then, failure behavior models of the compressor impeller are built, and reliability models of the compressor impeller operating at high altitude are developed. Finally, the way the reliability of the compressor impeller changes with altitude is studied, and measures for improving the reliability of compressor impellers of turbochargers operating at high altitude are given. The results indicate that, for a given engine operating speed, the rotational speed of the turbocharger increases with altitude, and the failure risk of the compressor impeller through hub fatigue and blade resonance increases accordingly. The reliability of the compressor impeller decreases with increasing altitude, and it also decreases as the number of engine mission-profile cycles increases. The proposed method can be used not only to evaluate the reliability of the compressor impeller when diesel engines operate at high altitude but also to guide the structural optimization of the compressor impeller.

  16. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  17. Allometric scaling theory applied to FIA biomass estimation

    Treesearch

    David C. Chojnacky

    2002-01-01

    Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...

  18. Intra- and inter-laboratory reliability of a cryopreserved trout hepatocyte assay for the prediction of chemical bioaccumulation potential

    EPA Science Inventory

    Cryopreserved trout hepatocytes provide a convenient in vitro system for measuring the intrinsic clearance of xenobiotics. Measured clearance rates can then be extrapolated to the whole animal as a means of improving modeled bioaccumulation predictions. To date, however, the in...

  19. Improving Re-Enlistment through Decision-Making Modeling and Intervention

    DTIC Science & Technology

    1990-03-01


  20. Integrated material state awareness system with self-learning symbiotic diagnostic algorithms and models

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Liu, Lie; Liu, S. T.; Yuan, Fuh-Gwo; Beard, Shawn

    2011-04-01

    Materials State Awareness (MSA) goes beyond traditional NDE and SHM in its challenge to characterize the current state of material damage before the onset of macro-damage such as cracks. A highly reliable, minimally invasive system for MSA of aerospace structures, naval structures, and next-generation space systems is critically needed. Development of such a system will require a reliable SHM system that can detect the onset of damage well before a flaw grows to a critical size. It is therefore important to develop an integrated SHM system that not only detects macroscale damage in the structure but also provides an early indication of flaw precursors and microdamage. The early warning for flaw precursors and their evolution provided by an SHM system can then be used to define remedial strategies before the structural damage leads to failure, significantly improving the safety and reliability of the structures. Thus, this article discusses a preliminary concept for the Hybrid Distributed Sensor Network Integrated with Self-learning Symbiotic Diagnostic Algorithms and Models, intended to accurately and reliably detect the precursors to damage that occurs in the structure. Experiments conducted in a laboratory environment show the potential of the proposed technique.

  1. Validity and reliability of an application review process using dedicated reviewers in one stage of a multi-stage admissions model.

    PubMed

    Zeeman, Jacqueline M; McLaughlin, Jacqueline E; Cox, Wendy C

    2017-11-01

    With increased emphasis placed on non-academic skills in the workplace, a need exists to identify an admissions process that evaluates these skills. This study assessed the validity and reliability of an application review process involving three dedicated application reviewers in a multi-stage admissions model. A multi-stage admissions model was utilized during the 2014-2015 admissions cycle. After advancing through the academic review, each application was independently reviewed by two dedicated application reviewers utilizing a six-construct rubric (written communication, extracurricular and community service activities, leadership experience, pharmacy career appreciation, research experience, and resiliency). Rubric scores were extrapolated to a three-tier ranking to select candidates for on-site interviews. Kappa statistics were used to assess interrater reliability. A three-facet Many-Facet Rasch Model (MFRM) determined reviewer severity, candidate suitability, and rubric construct difficulty. The kappa statistic for candidates' tier rank score (n = 388 candidates) was 0.692 with a perfect agreement frequency of 84.3%. There was substantial interrater reliability between reviewers for the tier ranking (kappa: 0.654-0.710). Highest construct agreement occurred in written communication (kappa: 0.924-0.984). A three-facet MFRM analysis explained 36.9% of variance in the ratings, with 0.06% reflecting application reviewer scoring patterns (i.e., severity or leniency), 22.8% reflecting candidate suitability, and 14.1% reflecting construct difficulty. Utilization of dedicated application reviewers and a defined tiered rubric provided a valid and reliable method to effectively evaluate candidates during the application review process. These analyses provide insight into opportunities for improving the application review process among schools and colleges of pharmacy. Copyright © 2017 Elsevier Inc. All rights reserved.
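
    The kappa statistic for two raters is straightforward to reproduce. A sketch with hypothetical three-tier ratings from two reviewers (the agreement rate is chosen to roughly echo the reported 84% perfect agreement, not taken from the study's data):

    ```python
    import numpy as np

    def cohens_kappa(r1, r2, n_categories):
        """Cohen's kappa for two raters' categorical ratings."""
        confusion = np.zeros((n_categories, n_categories))
        for a, b in zip(r1, r2):
            confusion[a, b] += 1
        confusion /= confusion.sum()
        p_observed = np.trace(confusion)                  # agreement frequency
        p_expected = confusion.sum(0) @ confusion.sum(1)  # chance agreement
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Hypothetical tier ranks (0/1/2) from two dedicated application reviewers
    rng = np.random.default_rng(3)
    r1 = rng.integers(0, 3, size=388)
    r2 = np.where(rng.random(388) < 0.84, r1, rng.integers(0, 3, size=388))
    print(f"kappa = {cohens_kappa(r1, r2, 3):.3f}")
    ```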

  2. Applicability of the ReproQ client experiences questionnaire for quality improvement in maternity care

    PubMed Central

    Scheerhagen, Marisja; Tholhuijsen, Dominique J.C.; Birnie, Erwin; Franx, Arie; Bonsel, Gouke J.

    2016-01-01

    Background. The ReproQuestionnaire (ReproQ) measures the client's experience with maternity care, following the WHO responsiveness model. In 2015, the ReproQ was designated the national client experience questionnaire and will be added to the national list of indicators in maternity care. To be useful for quality improvement, the questionnaire should be able to identify best and worst practices; to achieve this, the ReproQ should be reliable and able to identify relevant differences. Methods and Findings. We sent questionnaires to 17,867 women six weeks after labor (response 32%). Additionally, we invited 915 women for the retest (response 29%). We then determined the test–retest reliability and the Minimally Important Difference (MID), and performed six known-group comparisons, using two scoring methods: the percentage of women with at least one negative experience, and the mean score. Reliability was 'good' for both the percentage negative experience and the mean score (absolute agreement = 79%; intraclass correlation coefficient = 0.78). The MID was 11% for the percentage negative and 0.15 for the mean score. Application of the MIDs revealed relevant differences in women's experience with regard to professional continuity, setting continuity, and travel time. Conclusions. The measurement characteristics of the ReproQ support its use in the quality improvement cycle. Test–retest reliability was good, and the observed minimally important difference allows for discrimination between good and poor performers, also at the level of specific features of performance. PMID:27478690

  3. Design and experimentation of an empirical multistructure framework for accurate, sharp and reliable hydrological ensembles

    NASA Astrophysics Data System (ADS)

    Seiller, G.; Anctil, F.; Roy, R.

    2017-09-01

    This paper outlines the design and experimentation of an Empirical Multistructure Framework (EMF) for lumped conceptual hydrological modeling. The concept is inspired by modular frameworks, empirical model development, and multimodel applications, and encompasses the overproduce-and-select paradigm. The EMF concept aims to reduce subjectivity in conceptual hydrological modeling practice and includes model selection in the optimisation steps, reducing initial assumptions on the prior perception of the dominant rainfall-runoff transformation processes. EMF generates thousands of new modeling options from, for now, twelve parent models that share their functional components and parameters. Optimisation relies on ensemble calibration, ranking, and selection of individual child time series based on optimal bias and reliability trade-offs, as well as accuracy and sharpness improvement of the ensemble. Results on 37 snow-dominated Canadian catchments and 20 climatically diverse American catchments reveal the excellent potential of the EMF in generating new individual model alternatives, with high respective performance values, that may be pooled efficiently into ensembles of seven to sixty constitutive members with low bias and high accuracy, sharpness, and reliability. A group of 1446 new models is highlighted as offering good potential for other catchments or applications, based on their individual and collective interest. An analysis of the preferred functional components reveals the importance of the production and total flow elements. Overall, results from this research confirm the added value of ensemble and flexible approaches for hydrological applications, especially in uncertain contexts, and open up new modeling possibilities.

  4. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  5. From bedside to bench and back again: research issues in animal models of human disease.

    PubMed

    Tkacs, Nancy C; Thompson, Hilaire J

    2006-07-01

    To improve outcomes for patients with many serious clinical problems, multifactorial research approaches by nurse scientists, including the use of animal models, are necessary. Animal models serve as analogies for clinical problems seen in humans and must meet certain criteria, including validity and reliability, to be useful in moving research efforts forward. This article describes research considerations in the development of rodent models. As the standard of diabetes care evolves to emphasize intensive insulin therapy, rates of severe hypoglycemia are increasing among patients with type 1 and type 2 diabetes mellitus. A consequence of this change in clinical practice is an increase in rates of two hypoglycemia-related diabetes complications: hypoglycemia-associated autonomic failure (HAAF) and resulting hypoglycemia unawareness. Work on an animal model of HAAF is in an early developmental stage, with several labs reporting different approaches to model this complication of type 1 diabetes mellitus. This emerging model serves as an example illustrating how evaluation of validity and reliability is critically important at each stage of developing and testing animal models to support inquiry into human disease.

  6. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    NASA Astrophysics Data System (ADS)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

    Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service, the European Centre for Medium Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multimodel in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.
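
    For binary seasonal events (for example, the seasonal mean landing in the upper tercile), calibration benefits of this kind are usually quantified with the Brier score and its skill score against climatology. A generic sketch on synthetic forecasts (none of the numbers come from the article):

    ```python
    import numpy as np

    def brier(prob, outcome):
        """Brier score for binary-event probability forecasts (lower is better)."""
        return np.mean((np.asarray(prob, dtype=float) - np.asarray(outcome, dtype=float)) ** 2)

    # Synthetic, perfectly reliable forecasts of a tercile event (hypothetical)
    rng = np.random.default_rng(11)
    forecast_p = np.clip(rng.normal(0.33, 0.25, size=200), 0.01, 0.99)
    outcome = rng.random(200) < forecast_p        # events consistent w/ forecasts
    bs_model = brier(forecast_p, outcome)
    bs_clim = brier(np.full(200, outcome.mean()), outcome)   # climatology reference
    print(f"Brier skill score = {1 - bs_model / bs_clim:.2f}")  # > 0 beats climatology
    ```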

  7. RICOR's new development of a highly reliable integral rotary cooler: engineering and reliability aspects

    NASA Astrophysics Data System (ADS)

    Filis, Avishai; Pundak, Nachman; Barak, Moshe; Porat, Ze'ev; Jaeger, Mordechai

    2011-06-01

    The growing demand for EO applications that work around the clock, 24 hours a day and 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and decreased integrated system life cycle (ILS) cost. In order to meet this need, RICOR has developed a new rotary Stirling cryocooler, model K508N, intended to double the K508's operating MTTF, achieving a 20,000-hour operating MTTF. The K508N employs RICOR's latest mechanical design technologies, such as optimized bearings and greases, bearing preloading, advanced seals, a laser-welded cold finger, and a robust structure with increased natural frequency compared to the K508 model. The cooler's enhanced MTTF was demonstrated by a Validation and Verification (V&V) plan comprising analytical means and a comparative accelerated life test between the standard K508 and K508N models. In particular, point estimates and confidence intervals for the MTTF improvement factor were calculated periodically during and after the test. The V&V effort revealed that the K508N meets its MTTF design goal. The paper focuses on the technical and engineering aspects of the new design. In addition, it discusses the market needs and expectations, investigates the reliability data of the present reference K508 model, and reports the accelerated life test data and the statistical analysis methodology, as well as its underlying assumptions and results.
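
    The abstract does not detail the statistical methodology, but a standard approach for demonstrating MTTF from a time-terminated life test with an assumed constant failure rate is the chi-square confidence interval on total accumulated hours and observed failures. A sketch under those assumptions, with hypothetical test data:

    ```python
    from scipy.stats import chi2

    def mttf_interval(total_hours, failures, confidence=0.90):
        """Two-sided CI for MTTF under an exponential life model
        (time-terminated test: 2T/chi2 with 2r+2 and 2r degrees of freedom)."""
        alpha = 1.0 - confidence
        lower = 2 * total_hours / chi2.ppf(1 - alpha / 2, 2 * failures + 2)
        if failures > 0:
            point = total_hours / failures
            upper = 2 * total_hours / chi2.ppf(alpha / 2, 2 * failures)
        else:
            point = upper = float("inf")   # no failures observed
        return lower, point, upper

    # Hypothetical pooled accelerated-test data for the two cooler models
    for name, hours, fails in [("K508", 120_000, 11), ("K508N", 130_000, 6)]:
        lo, pt, up = mttf_interval(hours, fails)
        print(f"{name}: MTTF estimate {pt:,.0f} h, 90% CI [{lo:,.0f}, {up:,.0f}] h")
    ```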

  8. Monitoring Quality Across Home Visiting Models: A Field Test of Michigan's Home Visiting Quality Assurance System.

    PubMed

    Heany, Julia; Torres, Jennifer; Zagar, Cynthia; Kostelec, Tiffany

    2018-06-05

    Introduction In order to achieve the positive outcomes with parents and children demonstrated by many home visiting models, home visiting services must be well implemented. The Michigan Home Visiting Initiative developed a tool and procedure for monitoring implementation quality across models referred to as Michigan's Home Visiting Quality Assurance System (MHVQAS). This study field tested the MHVQAS. This article focuses on one of the study's evaluation questions: Can the MHVQAS be applied across models? Methods Eight local implementing agencies (LIAs) from four home visiting models (Healthy Families America, Early Head Start-Home Based, Parents as Teachers, Maternal Infant Health Program) and five reviewers participated in the study by completing site visits, tracking their time and costs, and completing surveys about the process. LIAs also submitted their most recent review by their model developer. The researchers conducted participant observation of the review process. Results Ratings on the MHVQAS were not significantly different between models. There were some differences in interrater reliability and perceived reliability between models. There were no significant differences between models in perceived validity, satisfaction with the review process, or cost to participate. Observational data suggested that cross-model applicability could be improved by assisting sites in relating the requirements of the tool to the specifics of their model. Discussion The MHVQAS shows promise as a tool and process to monitor implementation quality of home visiting services across models. The results of the study will be used to make improvements before the MHVQAS is used in practice.

  9. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability?

    PubMed

    Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P

    2017-12-01

    The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.

  10. On the Path to SunShot. The Role of Advancements in Solar Photovoltaic Efficiency, Reliability, and Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodhouse, Michael; Jones-Albertus, Rebecca; Feldman, David

    2016-05-01

    This report examines the remaining challenges to achieving the competitive photovoltaic (PV) costs and large-scale deployment envisioned under the U.S. Department of Energy's SunShot Initiative. Solar-energy cost reductions can be realized through lower PV module and balance-of-system (BOS) costs as well as improved system efficiency and reliability. Numerous combinations of PV improvements could help achieve the levelized cost of electricity (LCOE) goals because of the tradeoffs among key metrics like module price, efficiency, and degradation rate as well as system price and lifetime. Using LCOE modeling based on bottom-up cost analysis, two specific pathways are mapped to exemplify the many possible approaches to module cost reductions of 29%-38% between 2015 and 2020. BOS hardware and soft cost reductions, ranging from 54%-77% of total cost reductions, are also modeled. The residential sector's high supply-chain costs, labor requirements, and customer-acquisition costs give it the greatest BOS cost-reduction opportunities, followed by the commercial sector, although opportunities are available to the utility-scale sector as well. Finally, a future scenario is considered in which very high PV penetration requires additional costs to facilitate grid integration and increased power-system flexibility--which might necessitate even lower solar LCOEs. The analysis of a pathway to 3-5 cents/kWh PV systems underscores the importance of combining robust improvements in PV module and BOS costs as well as PV system efficiency and reliability if such aggressive long-term targets are to be achieved.
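
    The report's bottom-up LCOE modeling is far more detailed, but the central trade-off among system price, efficiency (via capacity factor), degradation rate, and lifetime can be sketched with the standard discounted LCOE formula. All inputs below are hypothetical, not the report's figures:

    ```python
    import numpy as np

    def lcoe(capex_per_kw, opex_per_kw_yr, cf, degradation, life_yr, discount):
        """Levelized cost of electricity in $/kWh for a PV system."""
        years = np.arange(1, life_yr + 1)
        disc = (1 + discount) ** -years
        energy = 8760 * cf * (1 - degradation) ** (years - 1)  # kWh per kW-yr
        cost = capex_per_kw + np.sum(opex_per_kw_yr * disc)    # $ per kW
        return cost / np.sum(energy * disc)

    # Hypothetical utility-scale scenarios: baseline vs improved cost/reliability
    base = lcoe(1800, 20, cf=0.25, degradation=0.010, life_yr=25, discount=0.07)
    goal = lcoe(1100, 12, cf=0.27, degradation=0.005, life_yr=35, discount=0.07)
    print(f"baseline {base*100:.1f} c/kWh  ->  improved {goal*100:.1f} c/kWh")
    ```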

  11. A system methodology for optimization design of the structural crashworthiness of a vehicle subjected to a high-speed frontal crash

    NASA Astrophysics Data System (ADS)

    Xia, Liang; Liu, Weiguo; Lv, Xiaojiang; Gu, Xianguang

    2018-04-01

    The structural crashworthiness design of vehicles has become an important research direction for ensuring the safety of occupants. To effectively improve the structural safety of a vehicle in a frontal crash, a system methodology is presented in this study. An Online support vector regression (Online-SVR) surrogate model is adopted to approximate the crashworthiness criteria, and different kernel functions are selected to enhance the accuracy of the model. The Online-SVR model has the advantages of solving highly nonlinear problems and saving training costs, and can be applied effectively to vehicle structural crashworthiness design. By combining the non-dominated sorting genetic algorithm II and Monte Carlo simulation, both deterministic optimization and reliability-based design optimization (RBDO) are conducted. The optimization solutions are further validated by finite element analysis, which shows the effectiveness of the RBDO solution in the structural crashworthiness design process. The results demonstrate the advantages of using RBDO, resulting not only in increased energy absorption and decreased structural weight relative to a baseline design, but also in a significant improvement in the reliability of the design.

  12. Using a Multivariate Multilevel Polytomous Item Response Theory Model to Study Parallel Processes of Change: The Dynamic Association between Adolescents' Social Isolation and Engagement with Delinquent Peers in the National Youth Survey

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; von Eye, Alexander A.; Maier, Kimberly S.

    2010-01-01

    The application of multidimensional item response theory models to repeated observations has demonstrated great promise in developmental research. It allows researchers to take into consideration both the characteristics of item response and measurement error in longitudinal trajectory analysis, which improves the reliability and validity of the…

  13. Contribution of domestic production records, Interbull estimated breeding values, and single nucleotide polymorphism genetic markers to the single-step genomic evaluation of milk production.

    PubMed

    Přibyl, J; Madsen, P; Bauer, J; Přibylová, J; Simečková, M; Vostrý, L; Zavadilová, L

    2013-03-01

    Estimated breeding values (EBV) for first-lactation milk production of Holstein cattle in the Czech Republic were calculated using a conventional animal model and by single-step prediction of the genomic enhanced breeding value. Two overlapping data sets of milk production data were evaluated: (1) calving years 1991 to 2006, with 861,429 lactations and 1,918,901 animals in the pedigree and (2) calving years 1991 to 2010, with 1,097,319 lactations and 1,906,576 animals in the pedigree. Global Interbull (Uppsala, Sweden) deregressed proofs of 114,189 bulls were used in the analyses. Reliabilities of Interbull values were equivalent to an average of 8.53 effective records, which were used in a weighted analysis. A total of 1,341 bulls were genotyped using the Illumina BovineSNP50 BeadChip V2 (Illumina Inc., San Diego, CA). Among the genotyped bulls were 332 young bulls with no daughters in the first data set but more than 50 daughters (88.41, on average) with performance records in the second data set. For young bulls, correlations of EBV and genomic enhanced breeding value before and after progeny testing, corresponding average expected reliabilities, and effective daughter contributions (EDC) were calculated. The reliability of prediction from pedigree EBV for young bulls was 0.41, corresponding to EDC=10.6. Including Interbull deregressed proofs improved the reliability of prediction by EDC=13.4, and including genotyping improved prediction reliability by EDC=6.2. Total average expected reliability of prediction reached 0.67, corresponding to EDC=30.2. The combination of domestic and Interbull sources for both genotyped and nongenotyped animals is valuable for improving the accuracy of genetic prediction in small populations of dairy cattle. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
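
    The reliabilities and effective daughter contributions (EDC) quoted above are linked by a standard conversion used in dairy genetic evaluation: reliability = EDC / (EDC + lambda), with lambda = (4 - h^2) / h^2. The sketch below assumes a heritability of 0.25 for illustration; with that assumption it approximately reproduces the figures in the abstract.

    def edc_from_reliability(rel, h2=0.25):
        """EDC implied by a reliability, with lambda = (4 - h2) / h2."""
        lam = (4.0 - h2) / h2
        return lam * rel / (1.0 - rel)

    def reliability_from_edc(edc, h2=0.25):
        lam = (4.0 - h2) / h2
        return edc / (edc + lam)

    print(round(edc_from_reliability(0.41), 1))   # ~10.4, close to the reported 10.6
    print(round(reliability_from_edc(30.2), 2))   # ~0.67, matching the reported total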

  14. Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco

    2017-11-01

    Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveform modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first-ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with an input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need of producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant advances in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
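
    Extracting map parameters such as Dmax and Vmax from a synthetic seismogram is a simple peak-picking step. The sketch below uses an invented damped-sinusoid displacement trace; DGA would additionally require a response-spectrum computation that is omitted here.

    import numpy as np

    dt = 0.01                                    # s, sample interval (assumed)
    t = np.arange(0, 40, dt)
    disp = 0.02 * np.exp(-0.1 * t) * np.sin(2 * np.pi * 0.8 * t)   # m, synthetic

    vel = np.gradient(disp, dt)                  # m/s, numerical derivative
    dmax = np.max(np.abs(disp))
    vmax = np.max(np.abs(vel))
    print(f"Dmax = {dmax * 100:.2f} cm, Vmax = {vmax * 100:.2f} cm/s")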

  15. Processing and mechanical properties of metal-ceramic composites with controlled microstructure formed by reactive metal penetration

    NASA Astrophysics Data System (ADS)

    Ellerby, Donald Thomas

    1999-12-01

    Compared to monolithic ceramics, metal-reinforced ceramic composites offer the potential for improved toughness and reliability in ceramic materials. As such, there is significant scientific and commercial interest in the microstructure and properties of metal-ceramic composites. Considerable work has been conducted on modeling the toughening behavior of metal reinforcements in ceramics; however, there has been limited application and testing of these concepts on real systems. Composites formed by newly developed reactive processes now offer the flexibility to systematically control metal-ceramic composite microstructure, and to test some of the property models that have been proposed for these materials. In this work, the effects of metal-ceramic composite microstructure on resistance curve (R-curve) behavior, strength, and reliability were systematically investigated. Al/Al2O3 composites were formed by reactive metal penetration (RMP) of aluminum metal into aluminosilicate ceramic preforms. Processing techniques were developed to control the metal content, metal composition, and metal ligament size in the resultant composite microstructure. Quantitative stereology and microscopy were used to characterize the composite microstructures, and then the influence of microstructure on strength, toughness, R-curve behavior, and reliability, was investigated. To identify the strength limiting flaws in the composite microstructure, fractography was used to determine the failure origins. Additionally, the crack bridging tractions produced by the metal ligaments in metal-ceramic composites formed by the RMP process were modeled. Due to relatively large flaws and low bridging stresses in RMP composites, no dependence of reliability on R-curve behavior was observed. The inherent flaws formed during reactive processing appear to limit the strength and reliability of composites formed by the RMP process. This investigation has established a clear relationship between processing, microstructure, and properties in metal-ceramic composites formed by the RMP process. RMP composite properties are determined by the metal-ceramic composite microstructure (e.g., metal content and ligament size), which can be systematically varied by processing. Furthermore, relative to the ceramic preforms used to make the composites, metal-ceramic composites formed by RMP generally have improved properties and combinations of properties that make them more desirable for advanced engineering applications.

  16. R&D of high reliable refrigeration system for superconducting generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosoya, T.; Shindo, S.; Yaguchi, H.

    1996-12-31

    Super-GM carries out R&D of 70 MW class superconducting generators (model machines), refrigeration systems and superconducting wires to apply superconducting technology to electric power apparatuses. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long-term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as a compressor unit, a higher-temperature cold box and a lower-temperature cold box, which were manufactured utilizing various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that the system fulfilled the development targets of a liquefaction capacity of 100 L/h and removal of impurities in the helium gas to < 0.1 ppm. Furthermore, its operating method and performance were clarified for all the different modes, including how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have performed tests and system performance analyses of oil-free screw-type and turbo-type compressors, which greatly improve the reliability of conventional refrigeration systems. The operating performance and operational control methods of the compressors have been clarified through these tests and analyses.

  17. Real-Time GNSS-Based Attitude Determination in the Measurement Domain

    PubMed Central

    Zhao, Lin; Li, Na; Li, Liang; Zhang, Yi; Cheng, Chun

    2017-01-01

    A multi-antenna-based GNSS receiver is capable of providing a high-precision, drift-free attitude solution. Carrier phase measurements need to be utilized to achieve high-precision attitude. The traditional attitude determination methods in the measurement domain and the position domain resolve the attitude and the ambiguity sequentially. The redundant measurements from multiple baselines have not been fully utilized to enhance the reliability of attitude determination. A multi-baseline-based attitude determination method in the measurement domain is proposed to estimate the attitude parameters and the ambiguity simultaneously. Meanwhile, the redundancy of the attitude resolution has also been increased so that the reliability of ambiguity resolution and attitude determination can be enhanced. Moreover, in order to further improve the reliability of attitude determination, we propose a partial ambiguity resolution method based on the proposed attitude determination model. Static and kinematic experiments were conducted to verify the performance of the proposed method. When compared with the traditional attitude determination methods, the static experimental results show that the proposed method can improve the accuracy by at least 0.03° and enhance the continuity by 18%, at most. The kinematic result has shown that the proposed method can obtain an optimal balance between accuracy and reliability performance. PMID:28165434
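
    For orientation, the classic attitude-from-baselines step can be sketched as an orthogonal Procrustes (Wahba) problem solved by SVD. This is a generic illustration of attitude recovery from resolved baseline vectors, not the paper's measurement-domain estimator; the baseline geometry and noise level are invented.

    import numpy as np

    def attitude_from_baselines(body, local):
        """Rotation R minimizing sum ||local_i - R body_i||^2 (Procrustes)."""
        B = local.T @ body                       # 3x3 attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U @ Vt))       # enforce a proper rotation
        return U @ np.diag([1.0, 1.0, d]) @ Vt

    body = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # antenna baselines
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    local = body @ true_R.T + np.random.default_rng(0).normal(0, 1e-3, (2, 3))
    print(np.round(attitude_from_baselines(body, local), 3))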

  18. How to quantify exposure to traumatic stress? Reliability and predictive validity of measures for cumulative trauma exposure in a post-conflict population.

    PubMed

    Wilker, Sarah; Pfeiffer, Anett; Kolassa, Stephan; Koslowski, Daniela; Elbert, Thomas; Kolassa, Iris-Tatjana

    2015-01-01

    While studies with survivors of single traumatic experiences highlight individual response variation following trauma, research from conflict regions shows that almost everyone develops posttraumatic stress disorder (PTSD) if trauma exposure reaches extreme levels. Therefore, evaluating the effects of cumulative trauma exposure is of utmost importance in studies investigating risk factors for PTSD. Yet, little research has been devoted to evaluate how this important environmental risk factor can be best quantified. We investigated the retest reliability and predictive validity of different trauma measures in a sample of 227 Ugandan rebel war survivors. Trauma exposure was modeled as the number of traumatic event types experienced or as a score considering traumatic event frequencies. In addition, we investigated whether age at trauma exposure can be reliably measured and improves PTSD risk prediction. All trauma measures showed good reliability. While prediction of lifetime PTSD was most accurate from the number of different traumatic event types experienced, inclusion of event frequencies slightly improved the prediction of current PTSD. As assessing the number of traumatic events experienced is the least stressful and time-consuming assessment and leads to the best prediction of lifetime PTSD, we recommend this measure for research on PTSD etiology.
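
    The two exposure measures compared in the study can be computed from the same checklist data: the number of distinct event types experienced versus a frequency-weighted score. A minimal sketch, with a made-up event checklist:

    # Hypothetical event-frequency responses from a trauma checklist.
    responses = {"shelling": 3, "abduction": 1, "witnessed_killing": 0, "assault": 2}

    event_types = sum(1 for count in responses.values() if count > 0)
    frequency_score = sum(responses.values())
    print("event types:", event_types, "| frequency score:", frequency_score)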

  19. AST Critical Propulsion and Noise Reduction Technologies for Future Commercial Subsonic Engines Area of Interest 1.0: Reliable and Affordable Control Systems

    NASA Technical Reports Server (NTRS)

    Myers, William; Winter, Steve

    2006-01-01

    The General Electric Reliable and Affordable Controls effort under the NASA Advanced Subsonic Technology (AST) Program has designed, fabricated, and tested advanced controls hardware and software to reduce emissions and improve engine safety and reliability. The original effort consisted of four elements: 1) a Hydraulic Multiplexer; 2) Active Combustor Control; 3) a Variable Displacement Vane Pump (VDVP); and 4) Intelligent Engine Control. The VDVP and Intelligent Engine Control elements were cancelled due to funding constraints and are reported here only to the state to which they progressed. The Hydraulic Multiplexing element developed and tested a prototype which improves reliability by combining the functionality of up to 16 solenoids and servo-valves into one component with a single electrically powered force motor. The Active Combustor Control element developed intelligent staging and control strategies for low emission combustors. This included development and tests of a Controlled Pressure Fuel Nozzle for fuel sequencing, a Fuel Multiplexer for individual fuel cup metering, and model-based control logic. Both the Hydraulic Multiplexer and Controlled Pressure Fuel Nozzle system were cleared for engine test. The Fuel Multiplexer was cleared for combustor rig test, which must be followed by an engine test to achieve full maturation.

  20. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of hospital Intensive Care Units (ICUs), this research identifies and analyzes ICU process failure modes from the standpoint of a systematic approach to error. Methods: In this descriptive research, data were gathered qualitatively by observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on the failures' Risk Priority Number (RPN) using the Failure Modes and Effects Analysis (FMEA) method. In addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failures from 99 ICU activities in hospital B were identified and evaluated. At the 90% reliability level (RPN≥100), 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were analyzed by ECM. Conclusions: Applying the modified PFMEA to improve process reliability in two ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize, and analyze all potential failure modes, and also makes them eager to identify causes, recommend corrective actions, and even participate in improving processes without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can easily identify failure causes from a health care perspective. PMID:27157162
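
    The FMEA arithmetic behind the study's cutoff is simple: RPN = severity x occurrence x detection, each rated on a 1-10 scale, with RPN >= 100 flagged as a non-acceptable risk. A minimal sketch with invented ICU failure modes:

    failure_modes = [
        {"activity": "medication administration", "failure": "wrong dose",
         "severity": 9, "occurrence": 4, "detection": 5},
        {"activity": "ventilator check", "failure": "alarm not audible",
         "severity": 8, "occurrence": 2, "detection": 3},
    ]
    for fm in failure_modes:
        fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

    # Flag non-acceptable risks using the study's RPN >= 100 cutoff.
    flagged = [fm for fm in failure_modes if fm["rpn"] >= 100]
    for fm in sorted(flagged, key=lambda fm: -fm["rpn"]):
        print(fm["activity"], "|", fm["failure"], "| RPN =", fm["rpn"])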

  1. Improvement of automatic control system for high-speed current collectors

    NASA Astrophysics Data System (ADS)

    Sidorov, O. A.; Goryunov, V. N.; Golubkov, A. S.

    2018-01-01

    The article considers ways of regulating pantographs to provide quality and reliability of current collection at high speeds. To assess the impact of regulation, an integral criterion of current-collection quality was proposed, taking into account the efficiency and reliability of pantograph operation. The study was carried out using a mathematical model of the interaction between the pantograph and the catenary system, which allows assessment of the contact force and the intensity of arcing in the contact zone at different speeds. The simulation results allowed the efficiency of different methods of pantograph regulation to be estimated and the best option to be determined.

  2. Redundancy management of inertial systems.

    NASA Technical Reports Server (NTRS)

    Mckern, R. A.; Musoff, H.

    1973-01-01

    The paper reviews developments in failure detection and isolation techniques applicable to gimballed and strapdown systems. It examines basic redundancy management goals of improved reliability, performance and logistic costs, and explores mechanizations available for both input and output data handling. The meaning of redundant system reliability in terms of available coverage, system MTBF, and mission time is presented and the practical hardware performance limitations of failure detection and isolation techniques are explored. Simulation results are presented illustrating implementation coverages attainable considering IMU performance models and mission detection threshold requirements. The implications of a complete GN&C redundancy management method on inertial techniques are also explored.
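
    The role of coverage in redundant-system reliability can be sketched with the textbook duplex model: the system survives if both channels work, or if one channel fails and the failure is detected and isolated with probability c. The failure rate and mission time below are assumptions for illustration.

    import math

    def duplex_reliability(failure_rate, t, coverage):
        """Duplex system with imperfect failure detection/isolation coverage."""
        r = math.exp(-failure_rate * t)            # single-channel reliability
        return r * r + 2.0 * coverage * r * (1.0 - r)

    lam = 1e-4   # channel failures per hour (assumed)
    for c in (1.0, 0.99, 0.95):
        print("coverage", c, "->", round(duplex_reliability(lam, 10.0, c), 6))

    Even with highly reliable channels, imperfect coverage quickly dominates the unreliability, which is the practical limitation the abstract refers to.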

  3. Reliability models: the influence of model specification in generation expansion planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stremel, J.P.

    1982-10-01

    This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.
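
    The paper's argument, that expansion options should be compared on cost plus the economic value of unreliability, can be sketched as follows. The option names, annualized costs, expected energy not served (EENS), and value of lost load are all invented for illustration.

    VOLL = 5.0  # $/kWh value of lost load (assumed)

    options = {
        "peaker":   {"annual_capital_M": 40.0, "eens_mwh_yr": 120.0},
        "baseload": {"annual_capital_M": 90.0, "eens_mwh_yr": 15.0},
    }
    for name, o in options.items():
        outage_cost_M = o["eens_mwh_yr"] * 1000 * VOLL / 1e6   # $M per year
        total_M = o["annual_capital_M"] + outage_cost_M
        print(f"{name}: capital {o['annual_capital_M']} + outage "
              f"{outage_cost_M:.2f} = {total_M:.2f} $M/yr")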

  4. Launch and Assembly Reliability Analysis for Mars Human Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.; Stromgren, Chel; Cirillo, William M.; Goodliff, Kandyce E.

    2013-01-01

    NASA's long-range goal is focused upon human exploration of Mars. Missions to Mars will require campaigns of multiple launches to assemble Mars Transfer Vehicles in Earth orbit. Launch campaigns are subject to delays, launch vehicles can fail to place their payloads into the required orbit, and spacecraft may fail during the assembly process or while loitering prior to the Trans-Mars Injection (TMI) burn. Additionally, missions to Mars have constrained departure windows lasting approximately sixty days that repeat approximately every two years. Ensuring high reliability of launching and assembling all required elements in time to support the TMI window will be a key enabler to mission success. This paper describes an integrated methodology for analyzing and improving the reliability of the launch and assembly campaign phase. A discrete event simulation involves several pertinent risk factors including, but not limited to: manufacturing completion; transportation; ground processing; launch countdown; ascent; rendezvous and docking, assembly, and orbital operations leading up to TMI. The model accommodates varying numbers of launches, including the potential for spare launches. Having a spare launch capability provides significant improvement to mission success.
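
    The value of a spare launch can be illustrated with a stripped-down Monte Carlo model of the campaign: each required element needs a successful launch, and up to a given number of failures can be absorbed by spares. This is a sketch with an assumed launch reliability, not the paper's full discrete event simulation (which also models delays, assembly, and loiter failures).

    import random

    def campaign_success(n_required, p_launch=0.95, spares=1, trials=100_000):
        """Fraction of simulated campaigns with no more failures than spares."""
        wins = 0
        for _ in range(trials):
            failures = sum(random.random() > p_launch for _ in range(n_required))
            if failures <= spares:
                wins += 1
        return wins / trials

    for spares in (0, 1, 2):
        print(spares, "spare(s):", round(campaign_success(5, spares=spares), 3))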

  5. Application of neural networks to software quality modeling of a very large telecommunications system.

    PubMed

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
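
    The modeling pattern described, principal components of software design measures feeding a neural-network classifier of fault-proneness, can be sketched with standard tooling. The data below are synthetic stand-ins for the nine design measures; the component count and network size are assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 9))                 # nine design measures (synthetic)
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 500)) > 1.0   # fault-prone flag

    model = make_pipeline(StandardScaler(), PCA(n_components=4),
                          MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                        random_state=0))
    model.fit(X[:400], y[:400])
    print("holdout accuracy:", model.score(X[400:], y[400:]))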

  6. Improving the Test-Retest Reliability of Resting State fMRI by Removing the Impact of Sleep.

    PubMed

    Wang, Jiahui; Han, Junwei; Nguyen, Vinh T; Guo, Lei; Guo, Christine C

    2017-01-01

    Resting state functional magnetic resonance imaging (rs-fMRI) provides a powerful tool to examine large-scale neural networks in the human brain and their disturbances in neuropsychiatric disorders. Thanks to its low demand and high tolerance, resting state paradigms can be easily acquired from clinical populations. However, due to its unconstrained nature, the resting state paradigm is associated with excessive head movement and proneness to sleep. Consequently, the test-retest reliability of rs-fMRI measures is moderate at best, falling short of widespread use in the clinic. Here, we characterized the effect of sleep on the test-retest reliability of rs-fMRI. Using measures of heart rate variability (HRV) derived from simultaneous electrocardiogram (ECG) recording, we identified portions of fMRI data when subjects were more alert or sleepy, and examined their effects on the test-retest reliability of functional connectivity measures. When volumes of sleep were excluded, the reliability of rs-fMRI was significantly improved, and the improvement appears to be general across brain networks. The amount of improvement is robust to the removal of as much as 60% of volumes of sleepiness. Therefore, the test-retest reliability of rs-fMRI is affected by sleep and can be improved by excluding volumes of sleepiness as indexed by HRV. Our results suggest a novel and practical method to improve the test-retest reliability of rs-fMRI measures.

  7. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1993-01-01

    Strategies and tools for the testing, risk assessment and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.

  8. An improved mounting device for attaching intracranial probes in large animal models.

    PubMed

    Dunster, Kimble R

    2015-12-01

    The rigid support of intracranial probes can be difficult when using animal models, as mounting devices suitable for the probes are either not available or designed for human use and unsuitable for animal skulls. A cheap and reliable mounting device for securing intracranial probes in large animal models is described. Using commonly available clinical consumables, a universal mounting device for securing intracranial probes to the skull of large animals was developed and tested. The resulting device holds a variety of probes, from 500 μm to 1.3 mm in diameter, to the skull. It was used to hold probes to the skulls of sheep for up to 18 h. No adhesives or cements were used. The described device provides a reliable method of securing probes to the skull of animals.

  9. The welfare effects of integrating renewable energy into electricity markets

    NASA Astrophysics Data System (ADS)

    Lamadrid, Alberto J.

    The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services for both individuals and social planners with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters and no consideration of system reliability. The social planner models correctly reflect the effect of system reliability, and in the case of a Stochastic, Security Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day, and compare the effects of collocating ESS at wind sites with the same amount of storage (deferrable demand) located at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of the total system costs, which include the capital cost of conventional generating capacity, the cost with deferrable demand is substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes. The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.

  10. Structural Probability Concepts Adapted to Electrical Engineering

    NASA Technical Reports Server (NTRS)

    Steinberg, Eric P.; Chamis, Christos C.

    1994-01-01

    Through the use of equivalent variable analogies, the authors demonstrate how an electrical subsystem can be modeled by an equivalent structural subsystem. This allows the electrical subsystem to be probabilistically analyzed by using available structural reliability computer codes such as NESSUS. With the ability to analyze the electrical subsystem probabilistically, we can evaluate the reliability of systems that include both structural and electrical subsystems. Common examples of such systems are a structural subsystem integrated with a health-monitoring subsystem, and smart structures. Since these systems have electrical subsystems that directly affect the operation of the overall system, probabilistically analyzing them could lead to improved reliability and reduced costs. The direct effect of the electrical subsystem on the structural subsystem is of secondary order and is not considered in the scope of this work.

  11. Automatic documentation system extension to multi-manufacturers' computers and to measure, improve, and predict software reliability

    NASA Technical Reports Server (NTRS)

    Simmons, D. B.

    1975-01-01

    The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.

  12. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  13. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  14. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.
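
    As one concrete example of the kind of model such a survey covers, the sketch below fits the Goel-Okumoto NHPP growth model, with expected cumulative faults m(t) = a(1 - e^(-bt)), to invented fault-count data. This is a generic illustration, not the model the report proposes.

    import numpy as np
    from scipy.optimize import curve_fit

    def m(t, a, b):
        """Goel-Okumoto mean value function: expected cumulative faults by t."""
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 11, dtype=float)
    cum_faults = np.array([12, 21, 28, 34, 38, 41, 43, 45, 46, 47], dtype=float)

    (a, b), _ = curve_fit(m, weeks, cum_faults, p0=(50.0, 0.2))
    print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.3f}")
    print(f"faults remaining after week 10: {a - m(10.0, a, b):.1f}")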

  16. Improving the FLORIS wind plant model for compatibility with gradient-based optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Jared J.; Gebraad, Pieter MO; Ning, Andrew

    The FLORIS (FLOw Redirection and Induction in Steady-state) model, a parametric wind turbine wake model that predicts steady-state wake characteristics based on wind turbine position and yaw angle, was developed for optimization of control settings and turbine locations. This article provides details on changes made to the FLORIS model to make the model more suitable for gradient-based optimization. Changes to the FLORIS model were made to remove discontinuities and add curvature to regions of non-physical zero gradient. Exact gradients for the FLORIS model were obtained using algorithmic differentiation. A set of three case studies demonstrates that using exact gradients with gradient-based optimization reduces the number of function calls by several orders of magnitude. The case studies also show that adding curvature improves convergence behavior, allowing gradient-based optimization algorithms used with the FLORIS model to more reliably find better solutions to wind farm optimization problems.
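
    The kind of smoothing described, removing discontinuities and zero-gradient regions so optimizers see useful curvature, can be illustrated with a toy wake-deficit profile: a hard step at the wake edge is replaced by a logistic blend. The shapes and parameters below are illustrative, not FLORIS's actual wake functions.

    import numpy as np

    def hard_deficit(r, edge=1.0, depth=0.4):
        """Step profile: discontinuous at the edge, zero gradient outside."""
        return np.where(np.abs(r) < edge, depth, 0.0)

    def smooth_deficit(r, edge=1.0, depth=0.4, k=10.0):
        """Logistic blend across the edge: smooth and differentiable everywhere."""
        return depth / (1.0 + np.exp(k * (np.abs(r) - edge)))

    r = np.linspace(-2, 2, 5)
    print(hard_deficit(r))
    print(np.round(smooth_deficit(r), 3))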

  17. Long-term prediction of fish growth under varying ambient temperature using a multiscale dynamic model

    PubMed Central

    2009-01-01

    Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354

  18. Protein-Protein Interface Predictions by Data-Driven Methods: A Review

    PubMed Central

    Xue, Li C; Dobbs, Drena; Bonvin, Alexandre M.J.J.; Honavar, Vasant

    2015-01-01

    Reliably pinpointing which specific amino acid residues form the interface(s) between a protein and its binding partner(s) is critical for understanding the structural and physicochemical determinants of protein recognition and binding affinity, and has wide applications in modeling and validating protein interactions predicted by high-throughput methods, in engineering proteins, and in prioritizing drug targets. Here, we review the basic concepts, principles and recent advances in computational approaches to the analysis and prediction of protein-protein interfaces. We point out caveats for objectively evaluating interface predictors, and discuss various applications of data-driven interface predictors for improving energy model-driven protein-protein docking. Finally, we stress the importance of exploiting binding partner information in reliably predicting interfaces and highlight recent advances in this emerging direction. PMID:26460190

  19. An assessment of laser velocimetry in hypersonic flow

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Although extensive progress has been made in computational fluid mechanics, reliable flight vehicle designs and modifications still cannot be made without recourse to extensive wind tunnel testing. Future progress in the computation of hypersonic flow fields is restricted by the need for a reliable mean flow and turbulence modeling data base which could be used to aid in the development of improved empirical models for use in numerical codes. Currently, there are few compressible flow measurements which could be used for this purpose. In this report, the results of experiments designed to assess the potential for laser velocimeter measurements of mean flow and turbulent fluctuations in hypersonic flow fields are presented. Details of a new laser velocimeter system which was designed and built for this test program are described.

  20. Modeling service time reliability in urban ferry system

    NASA Astrophysics Data System (ADS)

    Chen, Yifan; Luo, Sida; Zhang, Mengke; Shen, Hanxia; Xin, Feifei; Luo, Yujie

    2017-09-01

    The urban ferry system can carry a large number of travelers, which may alleviate the pressure on road traffic. As an indicator of its service quality, service time reliability (STR) plays an essential part in attracting travelers to the ferry system. A wide array of studies have been conducted to analyze the STR of land transportation. However, the STR of ferry systems has received little attention in the transportation literature. In this study, a model was established to obtain the STR of urban ferry systems. First, the probability density function (PDF) of the service time provided by ferry systems was constructed. Given the limitations of queuing theory, this PDF was determined by Bayes' theorem. Then, to validate the function, the results of the proposed model were compared with those of a Monte Carlo simulation. With the PDF, the reliability could be determined mathematically by integration. Results showed how factors including frequency, capacity, timetable and ferry waiting time affect the STR under different degrees of congestion in ferry systems. Based on these results, some strategies for improving the STR were proposed. These findings are of great significance to increasing the share of ferries among various urban transport modes.
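
    The core STR computation, integrating a fitted service-time density up to a time budget and checking the result against Monte Carlo simulation, can be sketched as follows. The lognormal service-time distribution and the time budget are assumptions for illustration, not the paper's Bayes-derived PDF.

    from scipy import stats

    service_time = stats.lognorm(s=0.4, scale=12.0)   # minutes (assumed)
    t_budget = 18.0                                   # acceptable service time

    # Reliability by integration (CDF) vs. by simulation.
    str_analytic = service_time.cdf(t_budget)
    str_mc = (service_time.rvs(size=100_000, random_state=0) <= t_budget).mean()
    print(f"analytic STR = {str_analytic:.3f}, Monte Carlo STR = {str_mc:.3f}")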

  1. Learning from Trending, Precursor Analysis, and System Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youngblood, R. W.; Duffey, R. B.

    2015-11-01

    Models of reliability growth relate current system unreliability to currently accumulated experience. But “experience” comes in different forms. Looking back after a major accident, one is sometimes able to identify previous events or measurable performance trends that were, in some sense, signaling the potential for that major accident: potential that could have been recognized and acted upon, but was not recognized until the accident occurred. This could be a previously unrecognized cause of accidents, or underestimation of the likelihood that a recognized potential cause would actually operate. Despite improvements in the state of practice of modeling of risk and reliability, operational experience still has a great deal to teach us, and work has been going on in several industries to try to do a better job of learning from experience before major accidents occur. It is not enough to say that we should review operating experience; there is too much “experience” for such general advice to be considered practical. The paper discusses the following: 1. The challenge of deciding what to focus on in analysis of operating experience. 2. Comparing what different models of learning and reliability growth imply about trending and precursor analysis.
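
    One concrete way to "trend" operating experience is the Laplace trend test on event times, which screens whether events are becoming more or less frequent over an observation window [0, T]. This is a standard tool offered here as an illustration, not the paper's own method; the event times are hypothetical. A value of u near zero is consistent with a constant rate, u < 0 suggests reliability growth, and u > 0 suggests deterioration.

    import math

    def laplace_u(event_times, T):
        """Laplace trend statistic for event times in the window [0, T]."""
        n = len(event_times)
        return (sum(event_times) / n - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))

    events = [110.0, 240.0, 300.0, 410.0, 445.0, 470.0]   # hypothetical hours
    print(round(laplace_u(events, T=500.0), 2))           # positive: worsening trend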

  2. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    PubMed

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.

  3. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer frame, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances for two reasons. First, considering only the single best model, variances that stem from uncertainty in the model structure are ignored. Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desirable reliability. However, considering the single best model, the calculated reliability will differ from the desirable reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: using a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA models are used.
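
    The interaction between chance constraints and model averaging can be sketched directly: under a hypothetical ensemble of models with BMA weights and Gaussian predictive distributions, the design's reliability is the weight-averaged probability of satisfying the constraint, which can differ from what the single best model reports. All numbers below are invented.

    from scipy.stats import norm

    limit, beta = 250.0, 0.90    # mg/L chloride limit and target reliability
    models = [                   # (weight, predicted mean, predicted std)
        (0.5, 230.0, 15.0),
        (0.3, 242.0, 20.0),
        (0.2, 226.0, 10.0),
    ]
    # Ensemble reliability: weight-averaged P(concentration <= limit).
    reliability = sum(w * norm.cdf(limit, mu, sd) for w, mu, sd in models)
    print(f"ensemble reliability = {reliability:.3f}, "
          f"meets target: {reliability >= beta}")

    With these invented numbers, the best-weighted model alone would report about 0.909 and pass the target, while the ensemble value of about 0.849 fails it, echoing the paper's point about single-best-model bias.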

  4. Common carotid artery intima-media thickness is as good as carotid intima-media thickness of all carotid artery segments in improving prediction of coronary heart disease risk in the Atherosclerosis Risk in Communities (ARIC) study.

    PubMed

    Nambi, Vijay; Chambless, Lloyd; He, Max; Folsom, Aaron R; Mosley, Tom; Boerwinkle, Eric; Ballantyne, Christie M

    2012-01-01

    Carotid intima-media thickness (CIMT) and plaque information can improve coronary heart disease (CHD) risk prediction when added to traditional risk factors (TRF). However, obtaining adequate images of all carotid artery segments (A-CIMT) may be difficult. Of A-CIMT, the common carotid artery intima-media thickness (CCA-IMT) is relatively more reliable and easier to measure. We evaluated whether CCA-IMT is comparable to A-CIMT when added to TRF and plaque information in improving CHD risk prediction in the Atherosclerosis Risk in Communities (ARIC) study. Ten-year CHD risk prediction models using TRF alone, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque were developed for the overall cohort, men, and women. The area under the receiver operating characteristic curve (AUC), per cent of individuals reclassified, net reclassification index (NRI), and model calibration by the Grønnesby-Borgan test were estimated. There were 1722 incident CHD events in 12 576 individuals over a mean follow-up of 15.2 years. The AUC for the TRF-only, TRF + A-CIMT + plaque, and TRF + CCA-IMT + plaque models was 0.741, 0.754, and 0.753, respectively. Although there was some discordance when the CCA-IMT + plaque- and A-CIMT + plaque-based risk estimations were compared, the NRI and clinical NRI (NRI in the intermediate-risk group) when comparing the CIMT models with the TRF-only model, the per cent reclassified, and the test for model calibration were not significantly different. Coronary heart disease risk prediction can be improved by adding A-CIMT + plaque or CCA-IMT + plaque information to TRF. Therefore, evaluating the carotid artery for plaque presence and measuring CCA-IMT, which is easier and more reliable than measuring A-CIMT, provide a good alternative to measuring A-CIMT for CHD risk prediction.
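
    The NRI reported above has a simple closed form: the net upward reclassification among events plus the net downward reclassification among non-events. The sketch below uses the cohort's event counts from the abstract (1722 events among 12 576 individuals) but invented reclassification counts.

    def nri(up_events, down_events, n_events,
            up_nonevents, down_nonevents, n_nonevents):
        """Net reclassification index for a new model vs. a reference model."""
        event_term = (up_events - down_events) / n_events
        nonevent_term = (down_nonevents - up_nonevents) / n_nonevents
        return event_term + nonevent_term

    print(round(nri(up_events=120, down_events=80, n_events=1722,
                    up_nonevents=300, down_nonevents=500, n_nonevents=10854), 3))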

  5. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or another constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly. Depth-dependent components in particular show significant improvement. Next to synthetic data for station networks, we also tested sparse-network and single-station cases.

  6. An Improved method for separation of leucocytes from peripheral blood of the little skate (Leucoraja erinacea)

    PubMed Central

    Tomana, Mitsuru; Parton, Angela; Barnes, David W.

    2008-01-01

    Cartilaginous fish, especially sharks, rays and skates (elasmobranchs) hold interest as comparative models in immunology because they are thought to be among the organisms most closely related to the ancestor animal that first developed acquired immunity. The aim of this study was to improve methods used for the purification of viable leucocytes from peripheral blood of elasmobranchs. Here we describe modifications of density gradient centrifugation and medium formulation that improve isolation and analysis of highly-purified leucocytes from peripheral blood of a model elasmobranch, Leucoraja erinacea, the little skate. These techniques contribute to the preparation of elasmobranch immune cells that can be reliably analyzed by a variety of means, including the study of immune function. PMID:18474431

  7. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  8. Southeast Atmosphere Studies: learning from model-observation syntheses

    NASA Astrophysics Data System (ADS)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-02-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we address questions surrounding four key themes: gas-phase chemistry, aerosol chemistry, regional climate and chemistry interactions, and natural and anthropogenic emissions. We expect this review to serve as a guidance for future modeling efforts.

  9. Southeast Atmosphere Studies: learning from model-observation syntheses

    PubMed Central

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu; McNeill, V. Faye; Tsigaridis, Kostas; McDonald, Brian C.; Warneke, Carsten; Guenther, Alex; Alvarado, Matthew J.; de Gouw, Joost; Mickley, Loretta J.; Leibensperger, Eric M.; Mathur, Rohit; Nolte, Christopher G.; Portmann, Robert W.; Unger, Nadine; Tosca, Mika; Horowitz, Larry W.

    2018-01-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we address questions surrounding four key themes: gas-phase chemistry, aerosol chemistry, regional climate and chemistry interactions, and natural and anthropogenic emissions. We expect this review to serve as a guidance for future modeling efforts.

  10. Southeast Atmosphere Studies: Learning from Model-Observation Syntheses

    NASA Technical Reports Server (NTRS)

    Mao, Jingqiu; Carlton, Annmarie; Cohen, Ronald C.; Brune, William H.; Brown, Steven S.; Wolfe, Glenn M.; Jimenez, Jose L.; Pye, Havala O. T.; Ng, Nga Lee; Xu, Lu

    2018-01-01

    Concentrations of atmospheric trace species in the United States have changed dramatically over the past several decades in response to pollution control strategies, shifts in domestic energy policy and economics, and economic development (and resulting emission changes) elsewhere in the world. Reliable projections of the future atmosphere require models to not only accurately describe current atmospheric concentrations, but to do so by representing chemical, physical and biological processes with conceptual and quantitative fidelity. Only through incorporation of the processes controlling emissions and chemical mechanisms that represent the key transformations among reactive molecules can models reliably project the impacts of future policy, energy and climate scenarios. Efforts to properly identify and implement the fundamental and controlling mechanisms in atmospheric models benefit from intensive observation periods, during which collocated measurements of diverse, speciated chemicals in both the gas and condensed phases are obtained. The Southeast Atmosphere Studies (SAS, including SENEX, SOAS, NOMADSS and SEAC4RS) conducted during the summer of 2013 provided an unprecedented opportunity for the atmospheric modeling community to come together to evaluate, diagnose and improve the representation of fundamental climate and air quality processes in models of varying temporal and spatial scales. This paper is aimed at discussing progress in evaluating, diagnosing and improving air quality and climate modeling using comparisons to SAS observations as a guide to thinking about improvements to mechanisms and parameterizations in models. The effort focused primarily on model representation of fundamental atmospheric processes that are essential to the formation of ozone, secondary organic aerosol (SOA) and other trace species in the troposphere, with the ultimate goal of understanding the radiative impacts of these species in the southeast and elsewhere. Here we address questions surrounding four key themes: gas-phase chemistry, aerosol chemistry, regional climate and chemistry interactions, and natural and anthropogenic emissions. We expect this review to serve as a guide for future modeling efforts.

  11. Leveraging Observation Tools for Instructional Improvement: Exploring Variability in Uptake of Ambitious Instructional Practices

    ERIC Educational Resources Information Center

    Cohen, Julie; Schuldt, Lorien Chambers; Brown, Lindsay; Grossman, Pamela

    2016-01-01

    Background/Context: Current efforts to build rigorous teacher evaluation systems have increased interest in standardized classroom observation tools as reliable measures for assessing teaching. However, many argue these instruments can also be used to effect change in classroom practice. This study investigates a model of professional development…

  12. Renewable Energy on the Front Lines - Continuum Magazine | NREL

    Science.gov Websites

    Functional models of a system comprising vehicles, the microgrid, and intelligent controls could be used as part of the multi-year, multi-agency Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) project, which focuses on improving energy surety for military installations.

  13. School District Professional Learning: Teachers' Perceptions of Instructional Leadership, Teacher Practice, and Student Learning

    ERIC Educational Resources Information Center

    Avery, Christine M.

    2013-01-01

    This dissertation study includes an evaluation of a school district model of professional learning that aims to improve school administrators' instructional leadership skills and teacher practice to positively impact student learning. This study employs a valid and reliable survey instrument that measures professional learning standards. The…

  14. Empirically Derived Optimal Growth Equations For Hardwoods and Softwoods in Arkansas

    Treesearch

    Don C. Bragg

    2002-01-01

    Accurate growth projections are critical to reliable forest models, and ecologically based simulators can improve silvicultural predictions because of their sensitivity to change and their capacity to produce long-term forecasts. Potential relative increment (PRI) optimal diameter growth equations for loblolly pine, shortleaf pine, sweetgum, and white oak were fit to...

  15. Improved Cryptosporidium parvum oocysts propagation using dexamethasone suppressed CF-1 mice

    EPA Science Inventory

    This study evaluates Cryptosporidium parvum oocyst production in dexamethasone suppressed CF-1 and C57BL/6 mice. Both models can yield 1 x 10⁹ total oocysts over a 20 day production period; however, only 20 CF-1 mice are required to reliably achieve this goal compared...

  16. Strategies to improve electrode positioning and safety in cochlear implants.

    PubMed

    Rebscher, S J; Heilmann, M; Bruszewski, W; Talbot, N H; Snyder, R L; Merzenich, M M

    1999-03-01

    An injection-molded internal supporting rib has been produced to control the flexibility of silicone rubber encapsulated electrodes designed to electrically stimulate the auditory nerve in human subjects with severe to profound hearing loss. The rib molding dies, and molds for silicone rubber encapsulation of the electrode, were designed and machined using AutoCad and MasterCam software packages in a PC environment. After molding, the prototype plastic ribs were iteratively modified based on observations of the performance of the rib/silicone composite insert in a clear plastic model of the human scala tympani cavity. The rib-based electrodes were reliably inserted farther into these models, required less insertion force and were positioned closer to the target auditory neural elements than currently available cochlear implant electrodes. With further design improvements the injection-molded rib may also function to accurately support metal stimulating contacts and wire leads during assembly to significantly increase the manufacturing efficiency of these devices. This method to reliably control the mechanical properties of miniature implantable devices with multiple electrical leads may be valuable in other areas of biomedical device design.

  17. Validity and Reliability of Visual Analog Scaling for Assessment of Hypernasality and Audible Nasal Emission in Children With Repaired Cleft Palate.

    PubMed

    Baylis, Adriane; Chapman, Kathy; Whitehill, Tara L; The Americleft Speech Group

    2015-11-01

    To investigate the validity and reliability of multiple listener judgments of hypernasality and audible nasal emission, in children with repaired cleft palate, using visual analog scaling (VAS) and equal-appearing interval (EAI) scaling. Prospective comparative study of multiple listener ratings of hypernasality and audible nasal emission. Multisite institutional. Five trained and experienced speech-language pathologist listeners from the Americleft Speech Project. Average VAS and EAI ratings of hypernasality and audible nasal emission/turbulence for 12 video-recorded speech samples from the Americleft Speech Project. Intrarater and interrater reliability was computed, as well as linear and polynomial models of best fit. Intrarater and interrater reliability was acceptable for both rating methods; however, reliability was higher for VAS as compared to EAI ratings. When VAS ratings were plotted against EAI ratings, results revealed a stronger curvilinear relationship. The results of this study provide additional evidence that alternate rating methods such as VAS may offer improved validity and reliability over EAI ratings of speech. VAS should be considered a viable method for rating hypernasality and nasal emission in speech in children with repaired cleft palate.

  18. Cabin Atmosphere Monitoring System (CAMS), pre-prototype model development continuation

    NASA Technical Reports Server (NTRS)

    Bursack, W. W.; Harris, W. A.

    1975-01-01

    The development of the Cabin Atmosphere Monitoring System (CAMS) is described. Attention was directed toward improving stability and reliability of the design using flight application guidelines. Considerable effort was devoted to the development of a temperature-stable RF/DC generator used for excitation of the quadrupole mass filter. Minor design changes were made in the preprototype model. Specific gas measurement examples are included along with a discussion of the measurement rationale employed.

  19. At the Crossroads of Nanotoxicology: Past Achievements and Current Challenges

    DTIC Science & Technology

    2015-01-01

    rates of ionic dissolution, improving in vitro to in vivo predictive efficiencies, and establishing safety exposure limits. This Review will discuss...Oberdörster et al., 2005a), which drove the focus of in vitro and in vivo model selection to accommodate these areas of higher NM exposure. Most...Accordingly, a current challenge is the design of simple, in vitro models that reliably predict in vivo effects following a NM challenge. In order

  20. Intraseasonal Variability in the Atmosphere-Ocean Climate System. Second Edition

    NASA Technical Reports Server (NTRS)

    Lau, William K. M.; Waliser, Duane E.

    2011-01-01

    Understanding and predicting the intraseasonal variability (ISV) of the ocean and atmosphere is crucial to improving long-range environmental forecasts and the reliability of climate change projections through climate models. This updated, comprehensive and authoritative second edition has a balance of observation, theory and modeling and provides a single source of reference for all those interested in this important multi-faceted natural phenomenon and its relation to major short-term climatic variations.

  1. Groundwater Vulnerability Assessment of the Pingtung Plain in Southern Taiwan.

    PubMed

    Liang, Ching-Ping; Jang, Cheng-Shin; Liang, Cheng-Wei; Chen, Jui-Sheng

    2016-11-23

    In the Pingtung Plain of southern Taiwan, elevated levels of NO₃⁻-N in groundwater have been reported. Therefore, efforts for assessing groundwater vulnerability are required as part of the critical steps to prevent and control groundwater pollution. This study makes a groundwater vulnerability assessment for the Pingtung Plain using an improved overlay and index-based DRASTIC model. The improvement of the DRASTIC model is achieved by reassigning the weighting coefficients of the factors in this model with the help of a discriminant analysis statistical method. The analytical results obtained from the improved DRASTIC model provide a reliable prediction for use in groundwater vulnerability assessment to nitrate pollution and can correctly identify the groundwater protection zones in the Pingtung Plain. Moreover, the results of the sensitivity analysis conducted for the seven parameters in the improved DRASTIC model demonstrate that the aquifer media (A) is the most sensitive factor when the nitrate-N concentration is below 2.5 mg/L. For the cases where the nitrate-N concentration is above 2.5 mg/L, the aquifer media (A) and net recharge (R) are the two most important factors.
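
    To make the overlay-and-index arithmetic concrete, a minimal sketch follows; the weights and ratings are illustrative placeholders, not the discriminant-analysis values derived in the study.

    import numpy as np

    # Hypothetical weights for the 7 DRASTIC factors (D, R, A, S, T, I, C);
    # the study re-derives such weights via discriminant analysis.
    weights = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)

    def drastic_index(ratings):
        """Overlay-and-index score: sum of weight * rating over the 7 factors."""
        return float(np.dot(weights, ratings))

    ratings = np.array([7, 8, 6, 5, 9, 4, 6], dtype=float)  # one grid cell, rated 1-10
    print(drastic_index(ratings))  # higher index -> more vulnerable to nitrate pollution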

  2. Nanowire growth process modeling and reliability models for nanodevices

    NASA Astrophysics Data System (ADS)

    Fathi Aghdam, Faranak

    Nowadays, nanotechnology is becoming an inescapable part of everyday life. The big barrier in front of its rapid growth is our incapability of producing nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10 %), which makes fabrications of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics on both nano-materials and nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model considering the shadowing effect and shared substrate diffusion area to determine the optimal pitch that would ensure the minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays. This work is an early attempt that uses a physical-statistical modeling approach to studying selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches related to nano-scale devices. One of the most important nano-devices is the transistor that is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier for reliable circuit design in nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high permittivity (k) dielectrics as an alternative to widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. 
A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in dielectric that is a function of time, space and size of the previous defects. In addition, in the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
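
    A minimal sketch of the least-squares step, assuming a plain logistic form and invented pitch/length data; the dissertation's model additionally carries shadowing and shared-diffusion-area terms.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(pitch, L_max, k, p_half):
        """Mean nanowire length vs. catalyst pitch (illustrative form)."""
        return L_max / (1.0 + np.exp(-k * (pitch - p_half)))

    pitch = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0])    # um, invented
    length = np.array([0.9, 1.4, 2.2, 3.1, 3.6, 3.9, 4.0])   # um, invented

    (L_max, k, p_half), _ = curve_fit(sigmoid, pitch, length, p0=[4.0, 5.0, 0.5])
    # Pitch at which wires reach ~95% of their asymptotic length -- one way to
    # read off a spacing with minimal competition between neighboring wires:
    print(p_half + np.log(0.95 / 0.05) / k)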

  3. Validation of A Global Hydrological Model

    NASA Astrophysics Data System (ADS)

    Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.

    Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km². In particular, the 90% reliable monthly discharge is simulated well. However, there is the tendency that semi-arid and arid basins are modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if the snow-dominated basin is uncalibrated, discharge is likely to be underestimated due to the precipitation measurement errors. Even though the explicit modeling of wetlands and lakes leads to a much improved modeling of both the vertical water balance and the lateral transport of water, not enough information is included in WGHM to accurately capture the hydrology of these water bodies. Certainly, the reliability of model results is highest at the locations at which WGHM was calibrated. The validation indicates that reliability for cells inside calibrated basins is satisfactory if the basin is relatively homogeneous. Analyses of the few available stations outside of calibrated basins indicate a reasonably high model reliability, particularly in humid regions.

  4. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient to combine evidence from different sensors. However, under the situation where the evidence highly conflicts, it may obtain a counterintuitive result. To address the issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflict evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis due to the fact that the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
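
    A minimal sketch of the fusion pipeline described above -- reliability-weighted averaging of the sensor mass functions followed by Dempster's rule -- with invented masses and weights.

    from itertools import product

    def dempster(m1, m2):
        """Dempster's rule for mass functions keyed by frozenset focal elements."""
        combined, conflict = {}, 0.0
        for (a, pa), (b, pb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb                       # mass assigned to conflict
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    F1, F2 = frozenset({"F1"}), frozenset({"F2"})
    reports = [{F1: 0.9, F2: 0.1}, {F1: 0.2, F2: 0.8}]    # two sensor reports (toy)
    weights = [0.7, 0.3]                                   # per-sensor reliabilities, sum to 1

    avg = {}
    for m, w in zip(reports, weights):                     # reliability-weighted average
        for k, v in m.items():
            avg[k] = avg.get(k, 0.0) + w * v

    # Combine the averaged evidence with itself n-1 times (n = 2 reports here).
    print(dempster(avg, avg))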

  5. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low frequency timeseries of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
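
    The headline credible-interval comparison reduces to percentile arithmetic on posterior-predictive samples; a sketch with synthetic lognormal samples tuned to give widths near the reported values.

    import numpy as np

    rng = np.random.default_rng(1)
    tdp_fortnightly = rng.lognormal(mean=2.5, sigma=0.45, size=5000)  # wide posterior
    tdp_daily = rng.lognormal(mean=2.5, sigma=0.12, size=5000)        # narrow posterior

    for name, s in [("fortnightly", tdp_fortnightly), ("daily", tdp_daily)]:
        lo, hi = np.percentile(s, [2.5, 97.5])
        print(name, round(hi - lo, 1), "ug/l")   # ~24 vs ~6, cf. the reported 26 and 6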

  6. The Use of Animal Models for Stroke Research: A Review

    PubMed Central

    Casals, Juliana B; Pieri, Naira CG; Feitosa, Matheus LT; Ercolin, Anna CM; Roballo, Kelly CS; Barreto, Rodrigo SN; Bressan, Fabiana F; Martins, Daniele S; Miglino, Maria A; Ambrósio, Carlos E

    2011-01-01

    Stroke has been identified as the second leading cause of death worldwide. Stroke is a focal neurologic deficit caused by a change in cerebral circulation. The use of animal models in recent years has improved our understanding of the physiopathology of this disease. Rats and mice are the most commonly used stroke models, but the demand for larger models, such as rabbits and even nonhuman primates, is increasing so as to better understand the disease and its treatment. Although the basic mechanisms of stroke are nearly identical among mammals, we here discuss the differences between the human encephalon and various animals. In addition, we compare common surgical techniques used to induce animal models of stroke. A more complete anatomic knowledge of the cerebral vessels of various model species is needed to develop more reliable models for objective results that improve knowledge of the pathology of stroke in both human and veterinary medicine. PMID:22330245

  7. Reliability and Maintainability Analysis of a High Air Pressure Compressor Facility

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Ring, Robert W.; Cole, Stuart K.

    2013-01-01

    This paper discusses a Reliability, Availability, and Maintainability (RAM) independent assessment conducted to support the refurbishment of the Compressor Station at the NASA Langley Research Center (LaRC). The paper discusses the methodologies used by the assessment team to derive the repair by replacement (RR) strategies to improve the reliability and availability of the Compressor Station (Ref. 1). This includes a RAPTOR simulation model that was used to generate the statistical data analysis needed to derive a 15-year investment plan to support the refurbishment of the facility. To summarize, study results clearly indicate that the air compressors are well past their design life. The major failures of compressors indicate that significant latent failure causes are present. Given the occurrence of these high-cost failures following compressor overhauls, future major failures should be anticipated if compressors are not replaced. Given the results from the RR analysis, the study team recommended a compressor replacement strategy. Based on the data analysis, the RR strategy will lead to sustainable operations through significant improvements in reliability, availability, and the probability of meeting the air demand with acceptable investment cost that should translate, in the long run, into major cost savings. For example, the probability of meeting air demand improved from 79.7 percent for the Base Case to 97.3 percent. Expressed in terms of a reduction in the probability of failing to meet demand (1 in 5 days to 1 in 37 days), the improvement is about 700 percent. Similarly, compressor replacement improved the operational availability of the facility from 97.5 percent to 99.8 percent. Expressed in terms of a reduction in system unavailability (1 in 40 to 1 in 500), the improvement is better than 1000 percent (an order of magnitude improvement). It is worth noting that the methodologies, tools, and techniques used in the LaRC study can be used to evaluate similar high value equipment components and facilities. Also, lessons learned in data collection and maintenance practices derived from the observations, findings, and recommendations of the study are extremely important in the evaluation and sustainment of new compressor facilities.

  8. Reducing RANS Model Error Using Random Forest

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
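
    A minimal sketch of the train-then-correct workflow, with synthetic stand-ins for the mean-flow features and the Reynolds-stress discrepancies.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(2000, 10))    # mean-flow features from the offline database
    y_train = X_train[:, 0] ** 2 + 0.1 * rng.normal(size=2000)  # one discrepancy component

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    X_new = rng.normal(size=(100, 10))       # features of the new flow (e.g., square duct)
    delta_tau = rf.predict(X_new)            # predicted Reynolds-stress correction
    # delta_tau would then be added to the baseline RANS stresses and propagated
    # through the RANS equations to recover the corrected mean velocity field.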

  9. Word associations contribute to machine learning in automatic scoring of degree of emotional tones in dream reports.

    PubMed

    Amini, Reza; Sabourin, Catherine; De Koninck, Joseph

    2011-12-01

    Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word search and emotional affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using words' frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictability beyond those of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% (kappa=.485) on the negative emotional tone scale, and for the first time reached an accuracy of 77% (kappa=.520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.
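
    Agreement of this kind between automatic and human scores is conventionally summarized with Cohen's kappa; a sketch on toy ordinal ratings.

    from sklearn.metrics import cohen_kappa_score

    judge = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]   # human ratings on a 0-3 tone scale (toy)
    model = [0, 1, 2, 1, 3, 1, 0, 3, 3, 1]   # automatic ratings (toy)

    print(cohen_kappa_score(judge, model))                       # plain kappa
    print(cohen_kappa_score(judge, model, weights="quadratic"))  # credits near-misses on an ordinal scale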

  10. Calculus detection calibration among dental hygiene faculty members utilizing dental endoscopy: a pilot study.

    PubMed

    Partido, Brian B; Jones, Archie A; English, Dana L; Nguyen, Carol A; Jacks, Mary E

    2015-02-01

    Dental and dental hygiene faculty members often do not provide consistent instruction in the clinical environment, especially in tasks requiring clinical judgment. From previous efforts to calibrate faculty members in calculus detection using typodonts, researchers have suggested using human subjects and emerging technology to improve consistency in clinical instruction. The purpose of this pilot study was to determine if a dental endoscopy-assisted training program would improve intra- and interrater reliability of dental hygiene faculty members in calculus detection. Training included an ODU 11/12 explorer, typodonts, and dental endoscopy. A convenience sample of six participants was recruited from the dental hygiene faculty at a California community college, and a two-group randomized experimental design was utilized. Intra- and interrater reliability was measured before and after calibration training. Pretest and posttest Kappa averages of all participants were compared using repeated measures (split-plot) ANOVA to determine the effectiveness of the calibration training on intra- and interrater reliability. The results showed that both kinds of reliability significantly improved for all participants and the training group improved significantly in interrater reliability from pretest to posttest. Calibration training was beneficial to these dental hygiene faculty members, especially those beginning with less than full agreement. This study suggests that calculus detection calibration training utilizing dental endoscopy can effectively improve interrater reliability of dental and dental hygiene clinical educators. Future studies should include human subjects, involve more participants at multiple locations, and determine whether improved rater reliability can be sustained over time.

  11. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies concentrate mainly on improving heuristics for optimizing the JSSP. However, efficient optimization of the JSSP still faces problems, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The work is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The proposed method yields better results than the baselines, and the applied technique can be used in practical JSSP optimization. A toy sketch of the pheromone mechanics is given below.
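
    The sketch below illustrates the pheromone mechanics alone, on a one-machine sequencing problem; the paper's constraint-handling JSSP model is far richer than this.

    import random

    proc = [4, 2, 7, 3, 5]       # processing times (toy instance)
    weight = [1, 5, 2, 4, 3]     # job weights
    n = len(proc)
    tau = [[1.0] * n for _ in range(n)]   # pheromone: position x job

    def cost(seq):
        """Total weighted completion time of a job sequence."""
        t, c = 0, 0
        for j in seq:
            t += proc[j]
            c += weight[j] * t
        return c

    best, best_cost = None, float("inf")
    for _ in range(200):                          # iterations
        for _ in range(10):                       # ants
            remaining, seq = set(range(n)), []
            for pos in range(n):                  # build a sequence pheromone-proportionally
                jobs = list(remaining)
                j = random.choices(jobs, weights=[tau[pos][jj] for jj in jobs])[0]
                seq.append(j)
                remaining.remove(j)
            c = cost(seq)
            if c < best_cost:
                best, best_cost = seq, c
        for pos in range(n):                      # evaporate, then reinforce the best ant
            for j in range(n):
                tau[pos][j] *= 0.9
            tau[pos][best[pos]] += 1.0

    print(best, best_cost)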

  12. Flood extent and water level estimation from SAR using data-model integration

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2017-12-01

    Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information will be combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting both data and models, and it provides enhanced flood information to decision-makers, supporting flood response and emergency relief efforts.

  13. How will climate novelty influence ecological forecasts? Using the Quaternary to assess future reliability.

    PubMed

    Fitzpatrick, Matthew C; Blois, Jessica L; Williams, John W; Nieto-Lugilde, Diego; Maguire, Kaitlin C; Lorenz, David J

    2018-03-23

    Future climates are projected to be highly novel relative to recent climates. Climate novelty challenges models that correlate ecological patterns to climate variables and then use these relationships to forecast ecological responses to future climate change. Here, we quantify the magnitude and ecological significance of future climate novelty by comparing it to novel climates over the past 21,000 years in North America. We then use relationships between model performance and climate novelty derived from the fossil pollen record from eastern North America to estimate the expected decrease in predictive skill of ecological forecasting models as future climate novelty increases. We show that, under the high emissions scenario (RCP 8.5), future climate novelty by the late 21st century is similar to or higher than peak levels of climate novelty over the last 21,000 years. The accuracy of ecological forecasting models is projected to decline steadily over the coming decades in response to increasing climate novelty, although models that incorporate co-occurrences among species may retain somewhat higher predictive skill. In addition to quantifying future climate novelty in the context of late Quaternary climate change, this work underscores the challenges of making reliable forecasts to an increasingly novel future, while highlighting the need to assess potential avenues for improvement, such as increased reliance on geological analogs for future novel climates and improving existing models by pooling data through time and incorporating assemblage-level information. © 2018 John Wiley & Sons Ltd.
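
    Climate novelty of this kind is commonly scored as the minimum standardized distance from each future climate to a pool of baseline climates; a sketch with synthetic data.

    import numpy as np

    rng = np.random.default_rng(0)
    baseline = rng.normal(size=(500, 4))          # past climates x climate variables
    future = rng.normal(loc=0.8, size=(50, 4))    # shifted future climates

    sd = baseline.std(axis=0)                     # standardize by baseline variability
    d = np.linalg.norm((future[:, None, :] - baseline[None, :, :]) / sd, axis=2)
    novelty = d.min(axis=1)                       # distance to the closest analog
    print(novelty.mean())  # larger -> fewer analogs -> expect lower forecast skill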

  14. Evaluation of 3D-Jury on CASP7 models.

    PubMed

    Kaján, László; Rychlewski, Leszek

    2007-08-21

    3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues, the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned such by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
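
    A sketch of the consensus idea, assuming a precomputed pairwise model-similarity matrix (e.g., counts of matching C-alpha pairs after superposition): each model is scored by its mean similarity to the other candidates.

    import numpy as np

    # Invented pairwise similarities among four candidate models.
    sim = np.array([[  0, 120,  90, 30],
                    [120,   0, 110, 25],
                    [ 90, 110,   0, 35],
                    [ 30,  25,  35,  0]], dtype=float)

    scores = sim.sum(axis=1) / (sim.shape[0] - 1)   # mean similarity to the others
    print(scores, "-> pick model", scores.argmax())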

  15. A whole-body dual-modality radionuclide optical strategy for preclinical imaging of metastasis and heterogeneous treatment response in different microenvironments.

    PubMed

    Fruhwirth, Gilbert O; Diocou, Seckou; Blower, Philip J; Ng, Tony; Mullen, Greg E D

    2014-04-01

    Imaging spontaneous cancer cell metastasis or heterogeneous tumor responses to drug treatment in vivo is difficult to achieve. The goal was to develop a new highly sensitive and reliable preclinical longitudinal in vivo imaging model for this purpose, thereby facilitating discovery and validation of anticancer therapies or molecular imaging agents. The strategy is based on breast cancer cells stably expressing the human sodium iodide symporter (NIS) fused to a red fluorescent protein, thereby permitting radionuclide and fluorescence imaging. Using whole-body nano-SPECT/CT with (99m)TcO4(-), we followed primary tumor growth and spontaneous metastasis in the presence or absence of etoposide treatment. NIS imaging was used to classify organs as small as individual lymph nodes (LNs) to be positive or negative for metastasis, and results were confirmed by confocal fluorescence microscopy. Etoposide treatment efficacy was proven by ex vivo anticaspase 3 staining and fluorescence microscopy. In this preclinical model, we found that the NIS imaging strategy outperformed state-of-the-art (18)F-FDG imaging in its ability to detect small tumors (18.5-fold-better tumor-to-blood ratio) and metastases (LN, 3.6-fold) because of improved contrast in organs close to metastatic sites (12- and 8.5-fold-lower standardized uptake value in the heart and kidney, respectively). We applied the model to assess the treatment response to the neoadjuvant etoposide and found a consistent and reliable improvement in spontaneous metastasis detection. Importantly, we also found that tumor cells in different microenvironments responded in a heterogeneous manner to etoposide treatment, which could be determined only by the NIS-based strategy and not by (18)F-FDG imaging. We developed a new strategy for preclinical longitudinal in vivo cancer cell tracking with greater sensitivity and reliability than (18)F-FDG PET and applied it to track spontaneous and distant metastasis in the presence or absence of genotoxic stress therapy. Importantly, the model provides sufficient sensitivity and dynamic range to permit the reliable assessment of heterogeneous treatment responses in various microenvironments.

  16. Mechanical testing of bones: the positive synergy of finite-element models and in vitro experiments.

    PubMed

    Cristofolini, Luca; Schileo, Enrico; Juszczyk, Mateusz; Taddei, Fulvia; Martelli, Saulo; Viceconti, Marco

    2010-06-13

    Bone biomechanics have been extensively investigated in the past both with in vitro experiments and numerical models. In most cases either approach is chosen, without exploiting synergies. Both experiments and numerical models suffer from limitations relative to their accuracy and their respective fields of application. In vitro experiments can improve numerical models by: (i) preliminarily identifying the most relevant failure scenarios; (ii) improving the model identification with experimentally measured material properties; (iii) improving the model identification with accurately measured actual boundary conditions; and (iv) providing quantitative validation based on mechanical properties (strain, displacements) directly measured from physical specimens being tested in parallel with the modelling activity. Likewise, numerical models can improve in vitro experiments by: (i) identifying the most relevant loading configurations among a number of motor tasks that cannot be replicated in vitro; (ii) identifying acceptable simplifications for the in vitro simulation; (iii) optimizing the use of transducers to minimize errors and provide measurements at the most relevant locations; and (iv) exploring a variety of different conditions (material properties, interface, etc.) that would require enormous experimental effort. By reporting an example of successful investigation of the femur, we show how a combination of numerical modelling and controlled experiments within the same research team can be designed to create a virtuous circle where models are used to improve experiments, experiments are used to improve models and their combination synergistically provides more detailed and more reliable results than can be achieved with either approach singularly.

  17. A new fault diagnosis algorithm for AUV cooperative localization system

    NASA Astrophysics Data System (ADS)

    Shi, Hongyang; Miao, Zhiyong; Zhang, Yi

    2017-10-01

    Cooperative localization of multiple AUVs is a new kind of underwater positioning technology that not only improves positioning accuracy but also has many advantages a single AUV does not. It is necessary to detect and isolate faults to increase the reliability and availability of the AUV cooperative localization system. In this paper, the Extended Multiple Model Adaptive Cubature Kalman Filter (EMMACKF) method is presented to detect faults. Sensor failures are simulated based on off-line experimental data. Experimental results show that faulty apparatus can be diagnosed effectively using the proposed method. Compared with the Multiple Model Adaptive Extended Kalman Filter and the Multi-Model Adaptive Unscented Kalman Filter, both accuracy and timeliness are improved to some extent.

  18. NDE reliability and probability of detection (POD) evolution and paradigm shift

    NASA Astrophysics Data System (ADS)

    Singh, Surendra

    2014-02-01

    The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed "Have Cracks - Will Travel" or in short "Have Cracks" by Lockheed Georgia Company for US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD are often used as if they were synonymous, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) samples selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones for the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list gaps between "hoped for" versus validated or fielded failed hardware.
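
    A sketch of a hit/miss POD curve in the spirit of MIL-HDBK-1823A -- logistic regression of detection outcome on log flaw size, inverted for the size at 90% POD -- on toy data; the handbook additionally prescribes confidence bounds (a90/95).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    size = rng.uniform(0.2, 5.0, 300)        # flaw sizes, mm (toy)
    p_true = 1 / (1 + np.exp(-(np.log(size) - np.log(1.0)) / 0.25))
    hit = (rng.random(300) < p_true).astype(int)   # simulated hit/miss outcomes

    clf = LogisticRegression().fit(np.log(size).reshape(-1, 1), hit)
    b0, b1 = clf.intercept_[0], clf.coef_[0][0]
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)    # size at which estimated POD = 0.9
    print(round(a90, 2), "mm")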

  19. Predictive ability of genomic selection models for breeding value estimation on growth traits of Pacific white shrimp Litopenaeus vannamei

    NASA Astrophysics Data System (ADS)

    Wang, Quanchao; Yu, Yang; Li, Fuhua; Zhang, Xiaojun; Xiang, Jianhai

    2017-09-01

    Genomic selection (GS) can be used to accelerate genetic improvement by shortening the selection interval. The successful application of GS depends largely on the accuracy of the prediction of genomic estimated breeding value (GEBV). This study is a first attempt to understand the practicality of GS in Litopenaeus vannamei and aims to evaluate models for GS on growth traits. The performance of GS models in L. vannamei was evaluated in a population consisting of 205 individuals, which were genotyped for 6,359 single nucleotide polymorphism (SNP) markers by specific length amplified fragment sequencing (SLAF-seq) and phenotyped for body length and body weight. Three GS models (RR-BLUP, BayesA, and Bayesian LASSO) were used to obtain the GEBV, and their predictive ability was assessed by the reliability of the GEBV and the bias of the predicted phenotypes. The mean reliability of the GEBVs for body length and body weight predicted by the different models was 0.296 and 0.411, respectively. For each trait, the performances of the three models were very similar to each other with respect to predictability. The regression coefficients estimated by the three models were close to one, suggesting near-zero bias for the predictions. Therefore, when GS was applied in a L. vannamei population for the studied scenarios, all three models appeared practicable. Further analyses suggested that improved estimation of the genomic prediction could be realized by increasing the size of the training population as well as the density of SNPs.
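
    An RR-BLUP-style prediction is, in essence, ridge regression of phenotype on SNP genotypes; a sketch with simulated genotypes of the population's dimensions, taking predictive ability as the GEBV-phenotype correlation in held-out animals.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(205, 6359)).astype(float)                # 0/1/2 genotypes
    beta = rng.normal(scale=0.05, size=6359) * (rng.random(6359) < 0.01)  # sparse true effects
    y = X @ beta + rng.normal(scale=1.0, size=205)                        # simulated body weight

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
    gebv = Ridge(alpha=1000.0).fit(Xtr, ytr).predict(Xte)   # genomic estimated breeding values
    print(np.corrcoef(gebv, yte)[0, 1])   # cf. the ~0.3-0.4 reliabilities reported above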

  20. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  1. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
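
    A sketch of the general approach, assuming a simple probabilistic failure-propagation rule on a Barabasi-Albert topology; the rule and parameters are illustrative, not those of the paper.

    import random
    import networkx as nx

    def cascade_size(G, p, seed_node):
        """Each neighbor of a failed node fails independently with probability p."""
        failed, frontier = {seed_node}, [seed_node]
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in failed and random.random() < p:
                        failed.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(failed)

    G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
    sizes = [cascade_size(G, 0.05, random.randrange(1000)) for _ in range(200)]
    print(sum(sizes) / len(sizes), "nodes failed on average per initiating outage")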

  2. Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2008-01-01

    Tantalum capacitors manufactured per military specifications are established reliability components with failure rates below 0.001% per 1000 hours for grades D or S, positioning these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they might have catastrophic consequences for the system. To reduce this risk, further development of the screening and qualification system, with special attention to possible deficiencies in the existing procedures, is necessary. The purpose of this work is to evaluate the effect of surge current stress testing on the reliability of the parts at both steady-state and multiple surge current stress conditions. In order to reveal possible degradation and precipitate more failures, various part types were tested and stressed over a range of voltage and temperature conditions exceeding the specified limits. A model to estimate the probability of post-surge-current-screening failures is suggested, together with measures to improve the effectiveness of the screening process.

  3. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    PubMed

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.
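
    A sketch of the two-way random intraclass correlation (ICC(2,1), absolute agreement) computed directly from a subjects-by-raters matrix of toy TBS scores.

    import numpy as np

    Y = np.array([[ 9, 10,  9],     # rows: observation packets
                  [ 5,  6,  5],     # cols: raters (toy TBS scores)
                  [22, 21, 23],
                  [30, 29, 30],
                  [14, 15, 14]], dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects mean square
    MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters mean square
    MSE = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum() / ((n - 1) * (k - 1))

    icc21 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
    print(round(icc21, 3))   # near 1.0 -> "almost perfect" absolute agreement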

  4. Improved Hot Carrier Reliability Characteristics of Metal Oxide Semiconductor Field Effect Transistors with High-k Gate Dielectric by Using High Pressure Deuterium Post Metallization Annealing

    NASA Astrophysics Data System (ADS)

    Park, Hokyung; Choi, Rino; Lee, Byoung Hun; Hwang, Hyunsang

    2007-09-01

    The effect of high pressure deuterium annealing on the hot carrier reliability characteristics of HfSiO metal oxide semiconductor field effect transistors (MOSFETs) was investigated. Compared with a conventional forming gas (H2/Ar = 10%/90%, 480 °C, 30 min) annealed sample, a MOSFET annealed in a 5 atm pure deuterium ambient at 400 °C showed improved linear drain current, reduced interface trap density, and improved hot carrier reliability characteristics. These improvements can be attributed to the effective passivation of interface trap sites after high pressure annealing and to the heavier mass of deuterium. These results indicate that high pressure pure deuterium annealing can be a promising process for improving device performance as well as hot carrier reliability.

  5. Modeling Reliability Growth in Accelerated Stress Testing

    DTIC Science & Technology

    2013-12-01

    Modeling Reliability Growth in Accelerated Stress Testing. Dissertation, Jason K. Freels; report no. AFIT-ENS-DS-13-D-02. Distribution unlimited.

  6. Evaluation of 3D-Jury on CASP7 models

    PubMed Central

    Kaján, László; Rychlewski, Leszek

    2007-01-01

    Background 3D-Jury, the structure prediction consensus method publicly available in the Meta Server, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. Results The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues, the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. Conclusion The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned such by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature available in the Meta Server. PMID:17711571

  7. Helicopter Reliability and Maintainability Trends during Development and Production.

    DTIC Science & Technology

    1981-07-01

    engine entry (the T-53) showed improvement in successive models. For helicopters, we have mixed results: some improved (YUH-60A, CH-47, UH-1D, AH-1G...understand the linkage between R&M program goals and life cycle costs, however, it is necessary to understand-- (1) what resource levels are required during...attributes of the system; (3) how those field attributes affect the cost of ownership of the system; and (4) whether or not, and at what cost, R&M values

  8. Hybrid automated reliability predictor integrated work station (HiREL)

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1991-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.

  9. Prediction of Geomagnetic Activity and Key Parameters in High-Latitude Ionosphere-Basic Elements

    NASA Technical Reports Server (NTRS)

    Lyatsky, W.; Khazanov, G. V.

    2007-01-01

    Prediction of geomagnetic activity and related events in the Earth's magnetosphere and ionosphere is an important task of the Space Weather program. Prediction reliability is dependent on the prediction method and elements included in the prediction scheme. Two main elements are a suitable geomagnetic activity index and coupling function -- the combination of solar wind parameters providing the best correlation between upstream solar wind data and geomagnetic activity. The appropriate choice of these two elements is imperative for any reliable prediction model. The purpose of this work was to elaborate on these two elements -- the appropriate geomagnetic activity index and the coupling function -- and investigate the opportunity to improve the reliability of the prediction of geomagnetic activity and other events in the Earth's magnetosphere. The new polar magnetic index of geomagnetic activity and the new version of the coupling function lead to a significant increase in the reliability of predicting the geomagnetic activity and some key parameters, such as cross-polar cap voltage and total Joule heating in high-latitude ionosphere, which play a very important role in the development of geomagnetic and other activity in the Earth's magnetosphere, and are widely used as key input parameters in modeling magnetospheric, ionospheric, and thermospheric processes.

  10. Fault Tolerant Homopolar Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Li, Ming-Hsiu; Palazzolo, Alan; Kenny, Andrew; Provenza, Andrew; Beach, Raymond; Kascak, Albert

    2003-01-01

    Magnetic suspensions (MS) satisfy the long life and low loss conditions demanded by satellite and ISS based flywheels used for Energy Storage and Attitude Control (ACESE) service. This paper summarizes the development of a novel MS that improves reliability via fault tolerant operation. Specifically, flux coupling between poles of a homopolar magnetic bearing is shown to deliver desired forces even after termination of coil currents to a subset of failed poles. Linear, coordinate decoupled force-voltage relations are also maintained before and after failure by bias linearization. Current distribution matrices (CDM) which adjust the currents and fluxes following a pole set failure are determined for many faulted pole combinations. The CDMs and the system responses are obtained utilizing 1D magnetic circuit models with fringe and leakage factors derived from detailed, 3D, finite element field models. Reliability results are presented vs. detection/correction delay time and individual power amplifier reliability for 4, 6, and 7 pole configurations. Reliability is shown for two success criteria, i.e. (a) no catcher bearing contact following pole failures and (b) re-levitation off of the catcher bearings following pole failures. An advantage of the method presented over other redundant operation approaches is a significantly reduced requirement for backup hardware such as additional actuators or power amplifiers.

  11. Forecasting infectious disease emergence subject to seasonal forcing.

    PubMed

    Miller, Paige B; O'Dea, Eamon B; Rohani, Pejman; Drake, John M

    2017-09-06

    Despite high vaccination coverage, many childhood infections pose a growing threat to human populations. Accurate disease forecasting would be of tremendous value to public health. Forecasting disease emergence using early warning signals (EWS) is possible in non-seasonal models of infectious diseases. Here, we assessed whether EWS also anticipate disease emergence in seasonal models. We simulated the dynamics of an immunizing infectious pathogen approaching the tipping point to disease endemicity. To explore the effect of seasonality on the reliability of early warning statistics, we varied the amplitude of fluctuations around the average transmission. We proposed and analyzed two new early warning signals based on the wavelet spectrum. We measured the reliability of the early warning signals depending on the strength of their trend preceding the tipping point and then calculated the Area Under the Curve (AUC) statistic. Early warning signals were reliable when disease transmission was subject to seasonal forcing. Wavelet-based early warning signals were as reliable as other conventional early warning signals. We found that removing seasonal trends, prior to analysis, did not improve early warning statistics uniformly. Early warning signals anticipate the onset of critical transitions for infectious diseases which are subject to seasonal forcing. Wavelet-based early warning statistics can also be used to forecast infectious disease.
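    A minimal sketch of how such an early warning statistic can be scored, assuming a rolling-variance EWS whose trend before the tipping point is summarized by Kendall's tau, and whose reliability is the AUC, i.e. the probability that the trend statistic under emergence exceeds that under the null. The surrogate time series below are toy data, not the paper's epidemic simulations.

```python
import numpy as np
from scipy.stats import kendalltau

def rolling_variance(x, window):
    x = np.asarray(x, float)
    return np.array([x[i:i + window].var()
                     for i in range(len(x) - window + 1)])

def trend_statistic(series, window=50):
    """Kendall's tau of a rolling-variance EWS against time."""
    ews = rolling_variance(series, window)
    tau, _ = kendalltau(np.arange(len(ews)), ews)
    return tau

def auc(test_stats, null_stats):
    """P(test > null), i.e. the Mann-Whitney AUC."""
    t = np.asarray(test_stats)[:, None]
    n = np.asarray(null_stats)[None, :]
    return (t > n).mean() + 0.5 * (t == n).mean()

rng = np.random.default_rng(0)
# Toy surrogates: variance grows toward the tipping point vs. a stationary null.
test = [trend_statistic(rng.normal(0, 1 + 2 * np.linspace(0, 1, 500)))
        for _ in range(50)]
null = [trend_statistic(rng.normal(0, 1, 500)) for _ in range(50)]
print("AUC:", auc(test, null))
```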

  12. Wind-US Code Physical Modeling Improvements to Complement Hypersonic Testing and Evaluation

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.; Towne, Charles S.; Engblom, William A.; Bhagwandin, Vishal A.; Power, Greg D.; Lankford, Dennis W.; Nelson, Christopher C.

    2009-01-01

    This report gives an overview of physical modeling enhancements to the Wind-US flow solver which were made to improve the capabilities for simulation of hypersonic flows and the reliability of computations to complement hypersonic testing. The improvements include advanced turbulence models, a bypass transition model, a conjugate (or closely coupled to vehicle structure) conduction-convection heat transfer capability, and an upgraded high-speed combustion solver. A Mach 5 shock-wave boundary layer interaction problem is used to investigate the benefits of k-ε and k-ω based explicit algebraic stress turbulence models relative to linear two-equation models. The bypass transition model is validated using data from experiments for incompressible boundary layers and a Mach 7.9 cone flow. The conjugate heat transfer method is validated for a test case involving reacting H2-O2 rocket exhaust over cooled calorimeter panels. A dual-mode scramjet configuration is investigated using both a simplified 1-step kinetics mechanism and an 8-step mechanism. Additionally, variations in the turbulent Prandtl and Schmidt numbers are considered for this scramjet configuration.

  13. Preliminary Results Obtained in Integrated Safety Analysis of NASA Aviation Safety Program Technologies

    NASA Technical Reports Server (NTRS)

    Reveley, Mary S.

    2003-01-01

    The goal of the NASA Aviation Safety Program (AvSP) is to develop and demonstrate technologies that contribute to a reduction in the aviation fatal accident rate by a factor of 5 by the year 2007 and by a factor of 10 by the year 2022. Integrated safety analysis of day-to-day operations and risks within those operations will provide an understanding of the Aviation Safety Program portfolio. Safety benefits analyses are currently being conducted. Preliminary results for the Synthetic Vision Systems (SVS) and Weather Accident Prevention (WxAP) projects of the AvSP have been completed by the Logistics Management Institute under a contract with the NASA Glenn Research Center. These analyses include both a reliability analysis and a computer simulation model. The integrated safety analysis method comprises two principal components: a reliability model and a simulation model. In the reliability model, the results indicate how different technologies and systems will perform in normal, degraded, and failed modes of operation. In the simulation, an operational scenario is modeled. The primary purpose of the SVS project is to improve safety by providing visual-flightlike situation awareness during instrument conditions. The current analyses are an estimate of the benefits of SVS in avoiding controlled flight into terrain. The scenario modeled has an aircraft flying directly toward a terrain feature. When the flight crew determines that the aircraft is headed toward an obstruction, the aircraft executes a level turn at speed. The simulation is ended when the aircraft completes the turn.

  14. A Validity and Reliability Update on the Informal Reading Inventory with Suggestions for Improvement.

    ERIC Educational Resources Information Center

    Klesius, Janell P.; Homan, Susan P.

    1985-01-01

    The article reviews validity and reliability studies on the informal reading inventory, a diagnostic instrument to identify reading grade-level placement and strengths and weaknesses in word recognition and comprehension. Gives suggestions to improve the validity and reliability of existing inventories and to evaluate them in newly published…

  15. The effect of leverage and/or influential on structure-activity relationships.

    PubMed

    Bolboacă, Sorana D; Jäntschi, Lorentz

    2013-05-01

    In the spirit of reporting valid and reliable Quantitative Structure-Activity Relationship (QSAR) models, the aim of our research was to assess how the leverage (analysed with the hat matrix, h(i)) and the influence (analysed with Cook's distance, D(i)) of compounds in QSAR models may reflect the models' reliability and characteristics. The datasets included in this research were collected from previously published papers. Seven datasets which met the imposed inclusion criteria were analyzed. Three models were obtained for each dataset (full model, h(i)-model and D(i)-model) and several statistical validation criteria were applied to the models. In 5 out of 7 sets the correlation coefficient increased when compounds with either h(i) or D(i) higher than the threshold were removed. Withdrawn compounds varied from 2 to 4 for h(i)-models and from 1 to 13 for D(i)-models. Validation statistics showed that D(i)-models possess systematically better agreement than both full models and h(i)-models. Removal of influential compounds from the training set significantly improves the model and is recommended in the process of developing quantitative structure-activity relationship models. Cook's distance analysis should be combined with hat matrix analysis in order to identify compounds that are candidates for removal.
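    For readers unfamiliar with the two diagnostics, the sketch below computes hat-matrix leverages and Cook's distances for an ordinary least-squares fit with NumPy. The cut-offs 2p/n and 4/n are common conventions, not necessarily the thresholds used in the paper, and the data are toy values.

```python
import numpy as np

def leverage_and_cooks(X, y):
    """Hat-matrix leverages h_i and Cook's distances D_i for an
    ordinary-least-squares fit y ~ X (X includes the intercept column)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    resid = y - H @ y
    s2 = resid @ resid / (n - p)            # residual variance
    D = resid**2 / (p * s2) * h / (1 - h)**2
    return h, D

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 20)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 20)
x[0], y[0] = 25.0, 1.0                      # plant a high-leverage outlier
X = np.column_stack([np.ones_like(x), x])
h, D = leverage_and_cooks(X, y)
n, p = X.shape
print("flagged by leverage :", np.where(h > 2 * p / n)[0])
print("flagged by Cook's D :", np.where(D > 4 / n)[0])
```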

  16. Predicting Incursion of Plant Invaders into Kruger National Park, South Africa: The Interplay of General Drivers and Species-Specific Factors

    PubMed Central

    Jarošík, Vojtěch; Pyšek, Petr; Foxcroft, Llewellyn C.; Richardson, David M.; Rouget, Mathieu; MacFadyen, Sandra

    2011-01-01

    Background Overcoming boundaries is crucial for incursion of alien plant species and their successful naturalization and invasion within protected areas. Previous work showed that in Kruger National Park, South Africa, this process can be quantified and that factors determining the incursion of invasive species can be identified and predicted confidently. Here we explore the similarity between determinants of incursions identified by the general model based on a multispecies assemblage, and those identified by species-specific models. We analyzed the presence and absence of six invasive plant species in 1.0×1.5 km segments along the border of the park as a function of environmental characteristics from outside and inside the KNP boundary, using two data-mining techniques: classification trees and random forests. Principal Findings The occurrence of Ageratum houstonianum, Chromolaena odorata, Xanthium strumarium, Argemone ochroleuca, Opuntia stricta and Lantana camara can be reliably predicted based on landscape characteristics identified by the general multispecies model, namely water runoff from surrounding watersheds and road density in a 10 km radius. The presence of main rivers and species-specific combinations of vegetation types are reliable predictors from inside the park. Conclusions The predictors from the outside and inside of the park are complementary, and are approximately equally reliable for explaining the presence/absence of current invaders; those from the inside are, however, more reliable for predicting future invasions. Landscape characteristics determined as crucial predictors from outside the KNP serve as guidelines for management to enact proactive interventions to manipulate landscape features near the KNP to prevent further incursions. Predictors from the inside the KNP can be used reliably to identify high-risk areas to improve the cost-effectiveness of management, to locate invasive plants and target them for eradication. PMID:22194893

  17. Predicting incursion of plant invaders into Kruger National Park, South Africa: the interplay of general drivers and species-specific factors.

    PubMed

    Jarošík, Vojtěch; Pyšek, Petr; Foxcroft, Llewellyn C; Richardson, David M; Rouget, Mathieu; MacFadyen, Sandra

    2011-01-01

    Overcoming boundaries is crucial for incursion of alien plant species and their successful naturalization and invasion within protected areas. Previous work showed that in Kruger National Park, South Africa, this process can be quantified and that factors determining the incursion of invasive species can be identified and predicted confidently. Here we explore the similarity between determinants of incursions identified by the general model based on a multispecies assemblage, and those identified by species-specific models. We analyzed the presence and absence of six invasive plant species in 1.0×1.5 km segments along the border of the park as a function of environmental characteristics from outside and inside the KNP boundary, using two data-mining techniques: classification trees and random forests. The occurrence of Ageratum houstonianum, Chromolaena odorata, Xanthium strumarium, Argemone ochroleuca, Opuntia stricta and Lantana camara can be reliably predicted based on landscape characteristics identified by the general multispecies model, namely water runoff from surrounding watersheds and road density in a 10 km radius. The presence of main rivers and species-specific combinations of vegetation types are reliable predictors from inside the park. The predictors from the outside and inside of the park are complementary, and are approximately equally reliable for explaining the presence/absence of current invaders; those from the inside are, however, more reliable for predicting future invasions. Landscape characteristics determined as crucial predictors from outside the KNP serve as guidelines for management to enact proactive interventions to manipulate landscape features near the KNP to prevent further incursions. Predictors from the inside the KNP can be used reliably to identify high-risk areas to improve the cost-effectiveness of management, to locate invasive plants and target them for eradication.

  18. A comparative study on improved Arrhenius-type and artificial neural network models to predict high-temperature flow behaviors in 20MnNiMo alloy.

    PubMed

    Quan, Guo-zheng; Yu, Chun-tang; Liu, Ying-ying; Xia, Yu-feng

    2014-01-01

    The stress-strain data of 20MnNiMo alloy were collected from a series of hot compressions on a Gleeble-1500 thermal-mechanical simulator in the temperature range of 1173∼1473 K and strain rate range of 0.01∼10 s⁻¹. Based on the experimental data, an improved Arrhenius-type constitutive model and an artificial neural network (ANN) model were established to predict the high-temperature flow stress of as-cast 20MnNiMo alloy. The accuracy and reliability of the improved Arrhenius-type model and the trained ANN model were further evaluated in terms of the correlation coefficient (R), the average absolute relative error (AARE), and the relative error (η). For the former, R and AARE were found to be 0.9954 and 5.26%, respectively; for the latter, 0.9997 and 1.02%, respectively. The relative errors (η) of the improved Arrhenius-type model and the ANN model were, respectively, in the range of -39.99%∼35.05% and -3.77%∼16.74%. For the former, only 16.3% of the test data set has η-values within ±1%, while for the latter more than 79% does. The results indicate that the ANN model has a higher predictive ability than the improved Arrhenius-type constitutive model.
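    The three evaluation statistics are straightforward to reproduce; the sketch below computes R, AARE, and the per-point relative error η for any pair of measured and predicted flow-stress vectors. The numbers shown are illustrative, not the paper's data.

```python
import numpy as np

def fit_metrics(measured, predicted):
    """Correlation coefficient R, average absolute relative error AARE,
    and per-point relative error eta (%), as used to compare the models."""
    m = np.asarray(measured, float)
    p = np.asarray(predicted, float)
    R = np.corrcoef(m, p)[0, 1]
    eta = (p - m) / m * 100.0
    aare = np.abs(eta).mean()
    return R, aare, eta

measured  = np.array([120.0, 150.0, 180.0, 210.0, 240.0])  # toy flow stress, MPa
predicted = np.array([118.0, 155.0, 176.0, 214.0, 235.0])
R, aare, eta = fit_metrics(measured, predicted)
print(f"R = {R:.4f}, AARE = {aare:.2f}%, eta range = "
      f"[{eta.min():.2f}%, {eta.max():.2f}%]")
```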

  19. A Reliability Estimation in Modeling Watershed Runoff With Uncertainties

    NASA Astrophysics Data System (ADS)

    Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.

    1990-10-01

    The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
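    A minimal Monte Carlo version of this idea, using a toy rational-method peak-discharge formula in place of HEC-1 and assumed input distributions: parameter uncertainty is propagated by sampling, and prediction reliability is read off as an exceedance probability.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Toy rational-method peak discharge Q = 0.278 * C * i * A with uncertain inputs.
C = rng.normal(0.45, 0.05, N).clip(0.1, 0.9)            # runoff coefficient
i = rng.lognormal(mean=np.log(30), sigma=0.25, size=N)  # rain intensity, mm/h
A = 12.0                                                # basin area, km^2 (fixed)
Q = 0.278 * C * i * A                                   # m^3/s (SI unit factor)

threshold = 60.0                                        # design capacity, m^3/s
p_exceed = (Q > threshold).mean()
print(f"mean peak = {Q.mean():.1f} m^3/s, "
      f"P(Q > {threshold}) = {p_exceed:.3f}")
```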

  20. Estimate of the Reliability in Geological Forecasts for Tunnels: Toward a Structured Approach

    NASA Astrophysics Data System (ADS)

    Perello, Paolo

    2011-11-01

    In tunnelling, a reliable geological model often makes it possible to provide an effective design and to face the construction phase without unpleasant surprises. A geological model can be considered reliable when it is a valid support for correctly foreseeing the rock mass behaviour, therefore preventing unexpected events during the excavation. The higher the model reliability, the lower the probability of unforeseen rock mass behaviour. Unfortunately, owing to different reasons, geological models are affected by uncertainties and a fully reliable knowledge of the rock mass is, in most cases, impossible. Therefore, estimating the degree to which a geological model is reliable becomes a primary requirement in order to save time and money and to adopt the appropriate construction strategy. The definition of geological model reliability is often achieved by engineering geologists through an unstructured analytical process and variable criteria. This paper focuses on geological models for projects of linear underground structures and represents an effort to analyse and include in a conceptual framework the factors influencing such models. An empirical parametric procedure is then developed with the aim of obtaining an index called the "geological model rating (GMR)", which can be used to provide a more standardised definition of a geological model's reliability.

  1. Scoring haemophilic arthropathy on X-rays: improving inter- and intra-observer reliability and agreement using a consensus atlas.

    PubMed

    Foppen, Wouter; van der Schaaf, Irene C; Beek, Frederik J A; Verkooijen, Helena M; Fischer, Kathelijn

    2016-06-01

    The radiological Pettersson score (PS) is widely applied for classification of arthropathy to evaluate costly haemophilia treatment. This study aims to assess and improve the inter- and intra-observer reliability and agreement of the PS. Two series of X-rays (bilateral elbows, knees, and ankles) of 10 haemophilia patients (120 joints) with haemophilic arthropathy were scored by three observers according to the PS (maximum score 13/joint). Subsequently, (dis-)agreement in scoring was discussed until consensus was reached, and example images were collected in an atlas. Thereafter, a second series of 120 joints was scored using the atlas. One observer rescored the second series after three months. Reliability was assessed by intraclass correlation coefficients (ICC), agreement by limits of agreement (LoA). The median Pettersson score at joint level (PSjoint) of affected joints was 6 (interquartile range 3-9). Using the consensus atlas, inter-observer reliability of the PSjoint improved significantly from 0.94 (95% confidence interval (CI) 0.91-0.96) to 0.97 (CI 0.96-0.98). LoA improved from ±1.7 to ±1.1 for the PSjoint; accordingly, only differences in the PSjoint of >2 points reflect true differences in arthropathy. Intra-observer reliability of the PSjoint was 0.98 (CI 0.97-0.98), and intra-observer LoA were ±0.9 points. Reliability and agreement of the PS improved by using a consensus atlas. • Reliability of the Pettersson score significantly improved using the consensus atlas. • The presented consensus atlas improved the agreement among observers. • The consensus atlas can be recommended to obtain a reproducible Pettersson score.
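    A sketch of the two reliability statistics, assuming a two-way random-effects single-measure ICC (the abstract does not state which ICC form was used) and Bland-Altman 95% limits of agreement; the score matrix is toy data on the 0-13 Pettersson scale.

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random, single-measure ICC(2,1) from an
    n-subjects x k-raters score matrix."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    grand = Y.mean()
    MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    SSE = ((Y - grand) ** 2).sum() - (n - 1) * MSR - (k - 1) * MSC
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for two observers."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# Toy joint scores (0-13) from three observers on eight joints.
Y = np.array([[6, 7, 6], [3, 3, 4], [9, 9, 8], [0, 1, 0],
              [12, 11, 12], [5, 5, 5], [8, 7, 8], [2, 2, 3]])
print("ICC(2,1):", round(icc_2_1(Y), 3))
print("LoA obs1 vs obs2:", limits_of_agreement(Y[:, 0], Y[:, 1]))
```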

  2. Qualitative Importance Measures of Systems Components - A New Approach and Its Applications

    NASA Astrophysics Data System (ADS)

    Chybowski, Leszek; Gawdzińska, Katarzyna; Wiśnicki, Bogusz

    2016-12-01

    The paper presents an improved methodology for analysing the qualitative importance of components in the functional and reliability structures of a system. We present basic importance measures, i.e. Birnbaum's structural measure, the order of the smallest minimal cut set, the repetition count of the i-th event in the Fault Tree, and the streams measure. A subsystem of circulation pumps and fuel heaters in the main engine fuel supply system of a container vessel illustrates the qualitative importance analysis. We constructed a functional model and a Fault Tree, which we analysed using qualitative measures. Additionally, we compared the calculated measures and introduced corrected measures as a tool for improving the analysis. We proposed scaled measures and a common measure taking into account the location of the component in the reliability and functional structures. Finally, we proposed an area where the measures could be applied.
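    Birnbaum's structural measure admits a compact brute-force implementation for small systems: enumerate all component states and count how often each component is critical. The sketch below does this for a hypothetical pump-plus-parallel-heaters structure that loosely echoes the fuel-supply example; it is not the paper's system.

```python
import itertools
import numpy as np

def birnbaum_structural(phi, n):
    """Birnbaum's structural importance of each component: the fraction of
    the 2^(n-1) states of the other components in which component i is
    critical (the structure function phi flips when component i flips)."""
    counts = np.zeros(n)
    for state in itertools.product([0, 1], repeat=n):
        state = list(state)
        for i in range(n):
            up, down = state.copy(), state.copy()
            up[i], down[i] = 1, 0
            if phi(up) != phi(down):
                counts[i] += 1
    # Each (component, other-components) pattern is visited twice (x_i = 0 and 1).
    return counts / 2 / 2 ** (n - 1)

# Toy structure: pump 0 in series with a parallel pair of heaters (1, 2).
def phi(x):
    return x[0] and (x[1] or x[2])

print(birnbaum_structural(phi, 3))   # the series component should dominate
```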

  3. A Closed-Form Error Model of Straight Lines for Improved Data Association and Sensor Fusing

    PubMed Central

    2018-01-01

    Linear regression is a basic tool in mobile robotics, since it enables accurate estimation of straight lines from range-bearing scans or in digital images, which is a prerequisite for reliable data association and sensor fusing in the context of feature-based SLAM. This paper discusses, extends and compares existing algorithms for line fitting that are applicable also in the case of strong covariances between the coordinates at each single data point, which must not be neglected if range-bearing sensors are used. In addition, the determination of the covariance matrix, which is required for stochastic modeling, is considered. The main contribution is a new error model of straight lines in closed form for quickly and reliably calculating the covariance matrix from just a few comprehensible and easily-obtainable parameters. The model can be applied widely whenever a line is fitted from a number of distinct points, even without a priori knowledge of the specific measurement noise. By means of extensive simulations, the performance and robustness of the new model in comparison to existing approaches are shown. PMID:29673205
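    A minimal sketch of the underlying fitting step, assuming a standard orthogonal (total) least-squares fit in Hesse normal form via the eigendecomposition of the point scatter matrix; the paper's closed-form covariance model itself is not reproduced here.

```python
import numpy as np

def fit_line_tls(points):
    """Orthogonal (total) least-squares line fit in Hesse normal form
    (alpha, r): x*cos(alpha) + y*sin(alpha) = r.  Suitable when both
    coordinates are noisy, as with range-bearing scans."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    cov = np.cov((P - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-variance direction
    alpha = np.arctan2(normal[1], normal[0])
    r = normal @ centroid
    if r < 0:                                # keep r non-negative by convention
        r, alpha = -r, alpha + np.pi
    return alpha, r

rng = np.random.default_rng(3)
t = np.linspace(0, 5, 40)
pts = np.column_stack([t, 0.8 * t + 1.0]) + rng.normal(0, 0.05, (40, 2))
alpha, r = fit_line_tls(pts)
print(f"alpha = {np.degrees(alpha):.1f} deg, r = {r:.3f}")
```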

  4. Improving evaluation of climate change impacts on the water cycle by remote sensing ET-retrieval

    NASA Astrophysics Data System (ADS)

    García Galiano, S. G.; Olmos Giménez, P.; Ángel Martínez Pérez, J.; Diego Giraldo Osorio, J.

    2015-05-01

    Population growth and intense consumptive water uses are generating pressures on water resources in the southeast of Spain. Improving the knowledge of climate change impacts on water cycle processes at the basin scale is a step toward building adaptive capacity. In this work, regional climate model (RCM) ensembles are considered as input to the hydrological model to improve the reliability of hydroclimatic projections. To build the RCM ensembles, the work focuses on a probability density function (PDF)-based evaluation of the ability of RCMs to simulate rainfall and temperature at the basin scale. To improve the spatial calibration of the continuous hydrological model used, an algorithm for remote-sensing retrieval of actual evapotranspiration (AET) was applied. From the results, a clear decrease in runoff is expected by 2050 in the headwater basin studied. The plausible future scenario of water shortage will have negative impacts on the regional economy, where the main activity is irrigated agriculture.
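    The abstract does not name the exact PDF-based metric; one common choice is the overlap score of Perkins et al., sketched below, which scores two hypothetical RCMs against toy observations by summing, over histogram bins, the minimum of the two empirical distributions (1 means identical PDFs).

```python
import numpy as np

def pdf_skill_score(model, obs, bins=50):
    """PDF overlap skill: sum over common bins of the minimum of the two
    empirical probability distributions."""
    lo = min(model.min(), obs.min())
    hi = max(model.max(), obs.max())
    edges = np.linspace(lo, hi, bins + 1)
    pm, _ = np.histogram(model, bins=edges)
    po, _ = np.histogram(obs, bins=edges)
    pm = pm / pm.sum()
    po = po / po.sum()
    return np.minimum(pm, po).sum()

rng = np.random.default_rng(7)
obs   = rng.gamma(2.0, 5.0, 5000)        # toy observed daily rainfall
rcm_a = rng.gamma(2.1, 4.8, 5000)        # RCM with a small bias
rcm_b = rng.gamma(3.5, 4.0, 5000)        # RCM with a larger bias
print("skill A:", round(pdf_skill_score(rcm_a, obs), 3))
print("skill B:", round(pdf_skill_score(rcm_b, obs), 3))
```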

  5. Statistics on Blindness in the Model Reporting Area 1969-1970.

    ERIC Educational Resources Information Center

    Kahn, Harold A.; Moorhead, Helen B.

    Presented in the form of 30 tables are statistics on blindness in 16 states which have agreed to uniform definitions and procedures to improve reliability of data regarding blind persons. The data indicates that rates of blindness were generally higher for nonwhites than for whites with the ratio ranging from almost 10 for glaucoma to minimal for…

  6. Effects of forcing uncertainties in the improvement skills of assimilating satellite soil moisture retrievals into flood forecasting models

    USDA-ARS?s Scientific Manuscript database

    Floods have negative impacts on society, causing damages in infrastructures and industry, and in the worst cases, causing loss of human lives. Thus early and accurate warning is crucial to significantly reduce the impacts on public safety and economy. Reliable flood warning can be generated using ...

  7. Acceptability of the Kalman filter to monitor pronghorn population size

    Treesearch

    Raymond L. Czaplewski

    1986-01-01

    Pronghorn antelope are important components of grassland and steppe ecosystems in Wyoming. Monitoring data on the size and population dynamics of these herds are expensive and gathered only a few times each year. Reliable data include estimates of animals harvested and proportion of bucks, does, and fawns. A deterministic simulation model has been used to improve...
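    For context, the Kalman filter referred to in the title blends a model projection with a noisy survey in proportion to their variances. A minimal scalar sketch with invented population numbers (the growth rate, noise variances, and counts are all assumptions):

```python
import numpy as np

def kalman_step(x, P, z, Q, R, growth=1.0):
    """One annual Kalman update of population estimate x (variance P):
    project with a simple growth model, then blend the noisy survey z."""
    # Predict with the deterministic population model.
    x_pred = growth * x
    P_pred = growth ** 2 * P + Q        # Q: model (process) noise variance
    # Update with the survey observation (R: survey noise variance).
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(11)
true_pop, x, P = 5000.0, 4500.0, 500.0 ** 2
for year in range(5):
    true_pop *= 1.03                                  # hidden true dynamics
    survey = true_pop + rng.normal(0, 400)            # noisy aerial count
    x, P = kalman_step(x, P, survey, Q=200.0 ** 2, R=400.0 ** 2, growth=1.03)
    print(f"year {year}: estimate {x:8.0f} +/- {np.sqrt(P):5.0f}")
```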

  8. Performance Steel Castings

    DTIC Science & Technology

    2012-09-30

    [Report front-matter and table-of-contents fragments omitted.] ...to achieve the performance goals required for new systems. The dramatic reduction in weight and increase in capability will require high performance...for improved weapon system reliability. SFSA developed innovative casting design and manufacturing processes for high performance parts. SFSA is

  9. Simulation, measurement, and emulation of photovoltaic modules using high frequency and high power density power electronic circuits

    NASA Astrophysics Data System (ADS)

    Erkaya, Yunus

    The number of solar photovoltaic (PV) installations is growing exponentially, and to improve the energy yield and the efficiency of PV systems, it is necessary to have correct methods for simulation, measurement, and emulation. PV systems can be simulated using PV models for different configurations and technologies of PV modules. Additionally, different environmental conditions of solar irradiance, temperature, and partial shading can be incorporated in the model to accurately simulate PV systems for any given condition. The electrical measurement of PV systems both prior to and after making electrical connections is important for attaining high efficiency and reliability. Measuring PV modules using a current-voltage (I-V) curve tracer allows the installer to know whether the PV modules are 100% operational. The installed modules can then be properly matched to maximize performance. Once installed, the whole system needs to be characterized similarly to detect mismatches, partial shading, or installation damage before energizing the system. This will prevent reliability issues from the outset and ensure that the system efficiency remains high. A capacitive load is implemented for making I-V curve measurements with the goal of minimizing the curve tracer's volume and cost. Additionally, an increase in measurement resolution and accuracy is possible via the use of accurate voltage and current measurement methods and accurate PV models to translate the curves to standard testing conditions. A move from mechanical relays to solid-state MOSFETs improved system reliability while significantly reducing device volume and cost. Finally, emulating PV modules is necessary for testing the electrical components of a PV system. PV emulation simplifies and standardizes the tests, allowing different irradiance, temperature, and partial shading levels to be tested easily. Proper emulation of PV modules requires an accurate and mathematically simple PV model that incorporates all known system variables so that any PV module can be emulated as the design requires. A non-synchronous buck converter is proposed for the emulation of a single, high-power PV module using traditional silicon devices. After the proof of concept was demonstrated and improvements in efficiency, power density, and steady-state error were made, dynamic tests were performed using an inverter connected to the PV emulator. To improve the dynamic characteristics, a synchronous buck converter topology is proposed, along with the use of advanced GaN FET devices, which resulted in very high power efficiency and improved dynamic response characteristics when emulating PV modules.
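    As an illustration of the kind of "accurate and mathematically simple PV model" such an emulator needs, the sketch below evaluates the common single-diode model by fixed-point iteration. All module parameters are invented, and the abstract does not state which model the emulator actually uses.

```python
import numpy as np

def pv_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, n=1.3,
               Ns=60, T=298.15):
    """Current of a single-diode PV module at voltage V, solved by
    fixed-point iteration of I = Iph - I0*(exp((V+I*Rs)/Vt) - 1) - (V+I*Rs)/Rsh."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = Ns * n * k * T / q                  # module thermal voltage
    I = np.zeros_like(np.asarray(V, float))
    for _ in range(200):                     # simple, robust iteration
        I = Iph - I0 * np.expm1((V + I * Rs) / Vt) - (V + I * Rs) / Rsh
        I = np.clip(I, 0.0, None)            # no negative current past Voc
    return I

V = np.linspace(0, 46, 200)
I = pv_current(V)
P = V * I
print(f"Isc ~ {I[0]:.2f} A, Pmax ~ {P.max():.1f} W at {V[P.argmax()]:.1f} V")
```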

  10. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
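    A minimal sketch of the uninformed-optimization idea, assuming plain simulated annealing over the parameters of a toy linear abundance model; the InVEST model, its parameters, and the data here are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "field data": bee abundance vs. proportion of forest within 2 km.
forest = rng.uniform(0, 1, 40)
abundance = 0.2 + 0.6 * forest + rng.normal(0, 0.05, 40)

def sse(params):
    a, b = params
    return ((abundance - (a + b * forest)) ** 2).sum()

def anneal(objective, x0, steps=20_000, t0=1.0, scale=0.1):
    """Plain simulated annealing: random perturbations, always accept
    improvements, accept worse moves with probability exp(-delta/T)."""
    x, fx = np.array(x0, float), objective(x0)
    best, fbest = x.copy(), fx
    for k in range(steps):
        T = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = x + rng.normal(0, scale, size=x.size)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

params, err = anneal(sse, x0=[0.0, 0.0])
print("fitted (a, b):", np.round(params, 3), " SSE:", round(err, 4))
```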

  11. How can model comparison help improving species distribution models?

    PubMed

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  12. How Can Model Comparison Help Improving Species Distribution Models?

    PubMed Central

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species resistance to water stress for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779

  13. Reliability assessment and improvement for a fast corrector power supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Bin; Liu, Chen-Yao; Wang, Bao-Sheng; Wong, Yong Seng

    2018-07-01

    A Fast Orbit Feedback System (FOFB) can be installed in a synchrotron light source to eliminate undesired disturbances and to improve the stability of the beam orbit. The design and implementation of an accurate and reliable Fast Corrector Power Supply (FCPS) is essential to realize the effectiveness and availability of the FOFB. A reliability assessment of the FCPSs in the FOFB of the Taiwan Photon Source (TPS), taking the MOSFETs' temperatures into account, is presented in this paper. The FCPS is composed of a full-bridge topology and a low-pass filter. A Hybrid Pulse Width Modulation (HPWM) scheme, which requires two MOSFETs in the full-bridge circuit to be operated at high frequency and the other two at the output frequency, is adopted to control the implemented FCPS. Due to this characteristic of HPWM, the conduction loss and switching loss of each MOSFET in the FCPS are not the same. Two of the MOSFETs in the full-bridge circuit suffer higher temperatures, and the circuit reliability of the FCPS is therefore reduced. A Modified PWM Scheme (MPWMS), designed to equalize the MOSFETs' temperatures and to improve circuit reliability, is proposed in this paper. The MOSFETs' temperatures of the FCPS controlled by the HPWM and by the proposed MPWMS were measured experimentally, and the reliability indices under the different PWM controls were then assessed. From the experimental results, it can be observed that the reliability of the FCPS using the proposed MPWMS is improved because the MOSFETs' temperatures are closer together. Since the reliability of the FCPS is enhanced, the availability of the FOFB can also be improved.

  14. High resolution infrared datasets useful for validating stratospheric models

    NASA Technical Reports Server (NTRS)

    Rinsland, Curtis P.

    1992-01-01

    An important objective of the High Speed Research Program (HSRP) is to support research in the atmospheric sciences that will improve the basic understanding of the circulation and chemistry of the stratosphere and lead to an interim assessment of the impact of a projected fleet of High Speed Civil Transports (HSCT's) on the stratosphere. As part of this work, critical comparisons between models and existing high quality measurements are planned. These comparisons will be used to test the reliability of current atmospheric chemistry models. Two suitable sets of high resolution infrared measurements are discussed.

  15. The mathematical model accuracy estimation of the oil storage tank foundation soil moistening

    NASA Astrophysics Data System (ADS)

    Gildebrandt, M. I.; Ivanov, R. N.; Gruzin, A. V.; Antropova, L. B.; Kononov, S. A.

    2018-04-01

    Improving the technologies used to prepare oil storage tank foundations is a relevant objective: achieving it would reduce the material costs and the time spent preparing the foundation while providing the required operational reliability. Laboratory research revealed how a sandy soil layer is wetted by a given amount of water. The data obtained made it possible to develop a mathematical model of sandy soil layer moistening. An assessment of the accuracy of this mathematical model of oil storage tank foundation soil moistening showed acceptable convergence between the experimental and theoretical results.

  16. Model-driven analysis of experimentally determined growth phenotypes for 465 yeast gene deletion mutants under 16 different conditions

    PubMed Central

    Snitkin, Evan S; Dudley, Aimée M; Janse, Daniel M; Wong, Kaisheen; Church, George M; Segrè, Daniel

    2008-01-01

    Background Understanding the response of complex biochemical networks to genetic perturbations and environmental variability is a fundamental challenge in biology. Integration of high-throughput experimental assays and genome-scale computational methods is likely to produce insight otherwise unreachable, but specific examples of such integration have only begun to be explored. Results In this study, we measured growth phenotypes of 465 Saccharomyces cerevisiae gene deletion mutants under 16 metabolically relevant conditions and integrated them with the corresponding flux balance model predictions. We first used discordance between experimental results and model predictions to guide a stage of experimental refinement, which resulted in a significant improvement in the quality of the experimental data. Next, we used discordance still present in the refined experimental data to assess the reliability of yeast metabolism models under different conditions. In addition to estimating predictive capacity based on growth phenotypes, we sought to explain these discordances by examining predicted flux distributions visualized through a new, freely available platform. This analysis led to insight into the glycerol utilization pathway and the potential effects of metabolic shortcuts on model results. Finally, we used model predictions and experimental data to discriminate between alternative raffinose catabolism routes. Conclusions Our study demonstrates how a new level of integration between high throughput measurements and flux balance model predictions can improve understanding of both experimental and computational results. The added value of a joint analysis is a more reliable platform for specific testing of biological hypotheses, such as the catabolic routes of different carbon sources. PMID:18808699
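    For readers unfamiliar with flux balance analysis, growth predictions of this kind reduce to a linear program: maximize a biomass flux subject to steady-state stoichiometry and flux bounds, with a gene deletion simulated by forcing the corresponding reaction's flux to zero. A toy three-reaction sketch (not the yeast genome-scale model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; cols: reactions):
#   v1: -> A (uptake)   v2: A -> B   v3: B -> biomass (objective)
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # v1 capped by nutrient uptake
c = [0, 0, -1]                             # maximize v3 == minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x, " predicted growth:", -res.fun)

# A "gene deletion" is simulated by forcing the reaction's flux to zero.
ko_bounds = [(0, 10), (0, 0), (0, 1000)]   # knock out v2
ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=ko_bounds)
print("knockout growth:", -ko.fun)         # 0: the deletion is predicted lethal
```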

  17. Factor Structure, Reliability and Measurement Invariance of the Alberta Context Tool and the Conceptual Research Utilization Scale, for German Residential Long Term Care

    PubMed Central

    Hoben, Matthias; Estabrooks, Carole A.; Squires, Janet E.; Behrens, Johann

    2016-01-01

    We translated the Canadian residential long term care versions of the Alberta Context Tool (ACT) and the Conceptual Research Utilization (CRU) Scale into German, to study the association between organizational context factors and research utilization in German nursing homes. The rigorous translation process was based on best practice guidelines for tool translation, and we previously published methods and results of this process in two papers. Both instruments are self-report questionnaires used with care providers working in nursing homes. The aim of this study was to assess the factor structure, reliability, and measurement invariance (MI) between care provider groups responding to these instruments. In a stratified random sample of 38 nursing homes in one German region (Metropolregion Rhein-Neckar), we collected questionnaires from 273 care aides, 196 regulated nurses, 152 allied health providers, 6 quality improvement specialists, 129 clinical leaders, and 65 nursing students. The factor structure was assessed using confirmatory factor models. The first model included all 10 ACT concepts. We also decided a priori to run two separate models for the scale-based and the count-based ACT concepts as suggested by the instrument developers. The fourth model included the five CRU Scale items. Reliability scores were calculated based on the parameters of the best-fitting factor models. Multiple-group confirmatory factor models were used to assess MI between provider groups. Rather than the hypothesized ten-factor structure of the ACT, confirmatory factor models suggested 13 factors. The one-factor solution of the CRU Scale was confirmed. The reliability was acceptable (>0.7 in the entire sample and in all provider groups) for 10 of 13 ACT concepts, and high (0.90–0.96) for the CRU Scale. We could demonstrate partial strong MI for both ACT models and partial strict MI for the CRU Scale. Our results suggest that the scores of the German ACT and the CRU Scale for nursing homes are acceptably reliable and valid. However, as the ACT lacked strict MI, observed variables (or scale scores based on them) cannot be compared between provider groups. Rather, group comparisons should be based on latent variable models, which consider the different residual variances of each group. PMID:27656156

  18. The Americleft Speech Project: A Training and Reliability Study.

    PubMed

    Chapman, Kathy L; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2016-01-01

    To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. The participants were speech-language pathologists from the Americleft Speech Project. In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function.

  19. The Americleft Speech Project: A Training and Reliability Study

    PubMed Central

    Chapman, Kathy L.; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2017-01-01

    Objective To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. Design The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech–Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. Participants The participants were speech-language pathologists from the Americleft Speech Project. Results In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. Conclusion The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function. PMID:25531738
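    Since both studies rest on weighted kappa and percent agreement, a compact sketch of those two statistics may help; the weighting scheme (linear vs. quadratic) and the toy ratings below are illustrative, not the CAPS-A data.

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Cohen's weighted kappa for two raters over ordinal categories
    0..n_cat-1 (linear or quadratic disagreement weights)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()                                # observed proportions
    i, j = np.indices((n_cat, n_cat))
    d = np.abs(i - j) / (n_cat - 1)
    W = d if weights == "linear" else d ** 2    # disagreement weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # chance-expected proportions
    return 1 - (W * O).sum() / (W * E).sum()

# Toy ratings on a 4-point scale (e.g., a severity parameter scored 0-3).
rater1 = [0, 1, 1, 2, 3, 2, 1, 0, 3, 2, 2, 1]
rater2 = [0, 1, 2, 2, 3, 2, 1, 1, 3, 1, 2, 1]
print("weighted kappa:", round(weighted_kappa(rater1, rater2, 4), 3))
print("percent agreement:",
      round(np.mean(np.array(rater1) == np.array(rater2)), 3))
```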

  20. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.
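    A minimal sketch of the approach described, assuming scikit-learn's MLPRegressor as the neural network and invented device attributes and cycles-to-failure data in place of the microengine dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n = 400
# Hypothetical device attributes: feature size, operating temperature,
# humidity, actuation voltage (stand-ins for the fabrication/packaging
# attributes described in the abstract).
X = rng.uniform([1, 20, 10, 20], [10, 120, 90, 120], size=(n, 4))
# Toy log cycles-to-failure: degrades with temperature, humidity, voltage.
y = 8.0 - 0.02 * X[:, 1] - 0.01 * X[:, 2] - 0.015 * X[:, 3] \
    + 0.05 * X[:, 0] + rng.normal(0, 0.2, n)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25,
                                            random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)                       # train on the majority split
print("validation R^2:", round(model.score(X_val, y_val), 3))
print("predicted log10 cycles:", model.predict([[5, 60, 40, 60]]))
```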

  1. Quality assessment of a new surgical simulator for neuroendoscopic training.

    PubMed

    Filho, Francisco Vaz Guimarães; Coelho, Giselle; Cavalheiro, Sergio; Lyra, Marcos; Zymberg, Samuel T

    2011-04-01

    Ideal surgical training models should be entirely reliable, nontoxic, easy to handle, and, if possible, low cost. All available models have their advantages and disadvantages. The choice of one or another will depend on the type of surgery to be performed. The authors created an anatomical model called the S.I.M.O.N.T. (Sinus Model Oto-Rhino Neuro Trainer) Neurosurgical Endotrainer, which can provide reliable neuroendoscopic training. The aim of the present study was to assess both the quality of the model and the development of surgical skills by trainees. The S.I.M.O.N.T. is built of a synthetic thermoretractable, thermosensitive rubber called Neoderma, which, combined with different polymers, produces more than 30 different formulas. Quality assessment of the model was based on qualitative and quantitative data obtained from training sessions with 9 experienced and 13 inexperienced neurosurgeons. The techniques used for evaluation were face validation, retest and interrater reliability, and construct validation. The experts considered the S.I.M.O.N.T. capable of reproducing surgical situations as if they were real and of presenting great similarity to the human brain. Surgical results of serial training showed that the model could be considered precise. Finally, development and improvement of surgical skills by the trainees were observed and considered relevant to further training. It was also observed that the probability of any single error decreased dramatically after each training session, with a mean reduction of 41.65% (range 38.7%-45.6%). Neuroendoscopic training has some specific requirements: a unique set of instruments is required, as is a model that can resemble real-life situations. The S.I.M.O.N.T. is a new alternative model specially designed for this purpose. Validation techniques followed by precision assessments attested to the model's feasibility.

  2. Enhancing the Arctic Mean Sea Surface and Mean Dynamic Topography with CryoSat-2 Data

    NASA Astrophysics Data System (ADS)

    Stenseng, Lars; Andersen, Ole B.; Knudsen, Per

    2014-05-01

    A reliable mean sea surface (MSS) is essential to derive a good mean dynamic topography (MDT) and to estimate short- and long-term changes in the sea surface. The lack of satellite radar altimetry observations above 82 degrees latitude means that existing mean sea surface models have been unreliable in the Arctic Ocean. We here present the latest DTU mean sea surface and mean dynamic topography models, which include CryoSat-2 data to improve reliability in the Arctic Ocean. In an attempt to extrapolate across the gap above 82 degrees latitude, previous models included ICESat data, gravimetric geoids, ocean circulation models, and various combinations hereof. Unfortunately, cloud cover and the short periods of operation have a negative effect on the number of ICESat sea surface observations. DTU13MSS and DTU13MDT are the new generation of state-of-the-art global high-resolution models that include CryoSat-2 data to extend the satellite radar altimetry coverage up to 88 degrees latitude. Furthermore, the SAR and SARin capability of CryoSat-2 dramatically increases the number of usable sea surface returns in sea-ice covered areas compared to conventional radar altimeters like ENVISAT and ERS-1/2. With the inclusion of CryoSat-2 data, the new mean sea surface is improved by more than 20 cm above 82 degrees latitude compared with the previous generation of mean sea surfaces.

  3. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 2. Applications

    USGS Publications Warehouse

    Cooley, Richard L.

    1983-01-01

    This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
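    The ridge-parameter mechanics can be illustrated on a small synthetic problem: the estimator below shrinks toward prior parameter values, with the ridge parameter controlling the influence of the prior, and Monte Carlo replication shows the MSE reduction over ordinary least squares (ridge parameter 0). All numbers are invented; this is a generic ridge-toward-prior sketch, not the paper's groundwater model.

```python
import numpy as np

rng = np.random.default_rng(9)
n, p = 30, 5
beta_true = np.array([1.0, 0.5, -0.8, 0.3, 0.0])
X = rng.normal(0, 1, (n, p))
X[:, 1] = X[:, 0] + rng.normal(0, 0.05, n)     # near-collinear columns

beta_prior = beta_true + rng.normal(0, 0.2, p) # prior info of unknown bias

def estimate(y, ridge_k):
    """Ridge-type estimator shrinking toward the prior;
    ridge_k = 0 recovers ordinary least squares."""
    A = X.T @ X + ridge_k * np.eye(p)
    return np.linalg.solve(A, X.T @ y + ridge_k * beta_prior)

for k in [0.0, 0.5, 2.0, 10.0]:
    mse = np.mean([np.sum((estimate(X @ beta_true + rng.normal(0, 1, n), k)
                           - beta_true) ** 2)
                   for _ in range(2000)])
    print(f"ridge parameter {k:5.1f}: mean squared error {mse:.3f}")
```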

  4. An Application of Con-Resistant Trust to Improve the Reliability of Special Protection Systems within the Smart Grid

    DTIC Science & Technology

    2012-06-01

    in an effort to be more reliable and efficient. However, with the benefits of this new technology comes added risk. This research utilizes con-resistant trust to improve the reliability of special protection systems within the smart grid.

  5. Model Improvement by Assimilating Observations of Storm-Induced Coastal Change

    NASA Astrophysics Data System (ADS)

    Long, J. W.; Plant, N. G.; Sopkin, K.

    2010-12-01

    Discrete, large scale, meteorological events such as hurricanes can cause wide-spread destruction of coastal islands, habitats, and infrastructure. The effects can vary significantly along the coast depending on the configuration of the coastline, variable dune elevations, changes in geomorphology (sandy beach vs. marshland), and alongshore variations in storm hydrodynamic forcing. There are two primary methods of determining the changing state of a coastal system. Process-based numerical models provide highly resolved (in space and time) representations of the dominant dynamics in a physical system but must employ certain parameterizations due to computational limitations. The predictive capability may also suffer from the lack of reliable initial or boundary conditions. On the other hand, observations of coastal topography before and after the storm allow the direct quantification of cumulative storm impacts. Unfortunately these measurements suffer from instrument noise and a lack of necessary temporal resolution. This research focuses on the combination of these two pieces of information to make more reliable forecasts of storm-induced coastal change. Of primary importance is the development of a data assimilation strategy that is efficient, applicable for use with highly nonlinear models, and able to quantify the remaining forecast uncertainty based on the reliability of each individual piece of information used in the assimilation process. We concentrate on an event time-scale and estimate/update unobserved model information (boundary conditions, free parameters, etc.) by assimilating direct observations of coastal change with those simulated by the model. The data assimilation can help estimate spatially varying quantities (e.g. friction coefficients) that are often modeled as homogeneous and identify processes inadequately characterized in the model.

  6. Emission of pesticides into the air

    USGS Publications Warehouse

    Van den Berg; Kubiak, R.; Benjey, W.G.; Majewski, M.S.; Yates, S.R.; Reeves, G.L.; Smelt, J.H.; Van Der Linden, A. M. A.

    1999-01-01

    During and after the application of a pesticide in agriculture, a substantial fraction of the dosage may enter the atmosphere and be transported over varying distances downwind of the target. The rate and extent of the emission during application, predominantly as spray particle drift, depends primarily on the application method (equipment and technique), the formulation and environmental conditions, whereas the emission after application depends primarily on the properties of the pesticide, soils, crops and environmental conditions. The fraction of the dosage that misses the target area may be high in some cases and more experimental data on this loss term are needed for various application types and weather conditions. Such data are necessary to test spray drift models, and for further model development and verification as well. Following application, the emission of soil fumigants and soil incorporated pesticides into the air can be measured and computed with reasonable accuracy, but further model development is needed to improve the reliability of the model predictions. For soil surface applied pesticides reliable measurement methods are available, but there is not yet a reliable model. Further model development is required which must be verified by field experiments. Few data are available on pesticide volatilization from plants and more field experiments are also needed to study the fate processes on the plants. Once this information is available, a model needs to be developed to predict the volatilization of pesticides from plants, which, again, should be verified with field measurements. For regional emission estimates, a link between data on the temporal and spatial pesticide use and a geographical information system for crops and soils with their characteristics is needed.

  7. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
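
    A minimal sketch of a "k failures out of n" subsystem computation, under the convention that the subsystem fails once k of its n independent, identical components have failed; with this convention k = 1 gives a series system and k = n a parallel one. The component count, failure rate and horizon below are invented, not the STSA values.

      # Reliability of a subsystem that survives while fewer than k of its
      # n identical, independent components have failed. Numbers are invented.
      from math import comb, exp

      def subsystem_reliability(n, k, r):
          """P(fewer than k of n components failed), each with reliability r."""
          return sum(comb(n, j) * (1 - r) ** j * r ** (n - j) for j in range(k))

      lam, years = 0.05, 5.0
      r = exp(-lam * years)              # exponential component reliability at t
      for k in (1, 2, 20):               # series, one-failure-tolerant, parallel
          print(f"k={k:2d}  R={subsystem_reliability(20, k, r):.4f}")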

  8. Analytical procedures for determining the impacts of reliability mitigation strategies. [supporting datasets

    DOT National Transportation Integrated Search

    2012-11-30

    The objective of this project was to develop technical relationships between reliability improvement strategies and reliability performance metrics. This project defined reliability, explained the importance of travel time distributions for measuring...

  9. A simulation model to estimate the cost and effectiveness of alternative dialysis initiation strategies.

    PubMed

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2006-01-01

    Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis. It can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.

  10. Impact of task-related changes in heart rate on estimation of hemodynamic response and model fit.

    PubMed

    Hillenbrand, Sarah F; Ivry, Richard B; Schlerf, John E

    2016-05-15

    The blood oxygen level dependent (BOLD) signal, as measured using functional magnetic resonance imaging (fMRI), is widely used as a proxy for changes in neural activity in the brain. Physiological variables such as heart rate (HR) and respiratory variation (RV) affect the BOLD signal in a way that may interfere with the estimation and detection of true task-related neural activity. This interference is of particular concern when these variables themselves show task-related modulations. We first establish that a simple movement task reliably induces a change in HR but not RV. In group data, the effect of HR on the BOLD response was larger and more widespread throughout the brain than were the effects of RV or phase regressors. The inclusion of HR regressors, but not RV or phase regressors, had a small but reliable effect on the estimated hemodynamic response function (HRF) in M1 and the cerebellum. We next asked whether the inclusion of a nested set of physiological regressors combining phase, RV, and HR significantly improved the model fit in individual participants' data sets. There was a significant improvement from HR correction in M1 for the greatest number of participants, followed by RV and phase correction. These improvements were more modest in the cerebellum. These results indicate that accounting for task-related modulation of physiological variables can improve the detection and estimation of true neural effects of interest. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. High power diode lasers emitting from 639 nm to 690 nm

    NASA Astrophysics Data System (ADS)

    Bao, L.; Grimshaw, M.; DeVito, M.; Kanskar, M.; Dong, W.; Guan, X.; Zhang, S.; Patterson, J.; Dickerson, P.; Kennedy, K.; Li, S.; Haden, J.; Martinsen, R.

    2014-03-01

    There is increasing market demand for high-power reliable red lasers for display and cinema applications. Due to fundamental material-system limits in this wavelength range, red diode lasers have lower efficiency and are more temperature sensitive than 790-980 nm diode lasers. In terms of reliability, red lasers are also more sensitive to catastrophic optical mirror damage (COMD) due to the higher photon energy. Developing higher-power, reliable red lasers is therefore very challenging. This paper presents nLIGHT's released red products from 639 nm to 690 nm, with established high performance and long-term reliability. These single-emitter diode lasers can work as stand-alone single-emitter units or be efficiently integrated into our compact, passively cooled Pearl™ fiber-coupled module architectures for higher output power and improved reliability. To further improve power and reliability, new chip optimizations have focused on improving epitaxial design/growth, chip configuration/processing, and optical facet passivation. Initial optimization has demonstrated promising results for 639 nm diode lasers to be reliably rated at 1.5 W and 690 nm diode lasers at 4.0 W. Accelerated life-testing has started, and further design optimizations are underway.

  12. Examining the reliability of ADAS-Cog change scores.

    PubMed

    Grochowalski, Joseph H; Liu, Ying; Siedlecki, Karen L

    2016-09-01

    The purpose of this study was to estimate and examine ways to improve the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability. Two methods to increase change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores from the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability, even with reweighting and adding additional subtests. The ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.
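
    For intuition about why change scores are often so unreliable, the classical univariate formula for the reliability of a difference score D = Y - X (a simpler relative of the multivariate generalizability approach used in the study) reads, in LaTeX notation:

      \rho_{DD'} = \frac{\sigma_X^2\,\rho_{XX'} + \sigma_Y^2\,\rho_{YY'} - 2\,\sigma_{XY}}
                        {\sigma_X^2 + \sigma_Y^2 - 2\,\sigma_{XY}}

    Because the covariance term 2σ_XY is subtracted from the (smaller) reliable-variance numerator and from the total-variance denominator alike, highly correlated baseline and follow-up scores drive change-score reliability well below the single-occasion reliabilities, consistent with the values reported here.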

  13. Asymptotically reliable transport of multimedia/graphics over wireless channels

    NASA Astrophysics Data System (ADS)

    Han, Richard Y.; Messerschmitt, David G.

    1996-03-01

    We propose a multiple-delivery transport service tailored for graphics and video transported over connections with wireless access. This service operates at the interface between the transport and application layers, balancing the subjective delay and image quality objectives of the application with the low reliability and limited bandwidth of the wireless link. While techniques like forward-error correction, interleaving and retransmission improve reliability over wireless links, they also increase latency substantially when bandwidth is limited. Certain forms of interactive multimedia datatypes can benefit from an initial delivery of a corrupt packet to lower the perceptual latency, as long as reliable delivery occurs eventually. Multiple delivery of successively refined versions of the received packet, terminating when a sufficiently reliable version arrives, exploits the redundancy inherently required to improve reliability without a traffic penalty. Modifications to acknowledgment-repeat-request (ARQ) methods to implement this transport service are proposed, which we term 'leaky ARQ'. For the specific case of pixel-coded window-based text/graphics, we describe additional functions needed to more effectively support urgent delivery and asymptotic reliability. X server emulation suggests that users will accept a multi-second delay between a (possibly corrupt) packet and the ultimate reliably-delivered version. The relaxed delay for reliable delivery can be exploited for traffic capacity improvement using scheduling of retransmissions.

  14. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  15. [Improved methods for monitoring sleep state and respiratory rhythm in freely moving rats].

    PubMed

    Wang, Qi-Min; Dong, Hui; Zhang, Cheng; Zhang, Yong-He; Ma, Jing; Wang, Guang-Fa

    2014-01-01

    To improve the method for monitoring sleep state and respiratory rhythm in SD rats, providing a solution to the problems of rats chewing on the wires, signal loss, and instability in the animal model of sleep apnea syndrome (SAS), we improved the monitoring electrodes for both electrocorticogram (ECoG) and electromyogram (EMG), the signal circuit, and the animal surgery. Operation time was shortened and wound exposure time was reduced, making postoperative recovery easier. The ECoG and EMG signals were more stable with sharper traces, and the signal circuit lines had better conductivity and material durability, enabling continuous long-term monitoring with a high success rate. We could precisely distinguish sleep-wake states and sleep apnea events in rats from these signals. The improved method is more reliable and practical for testing the small-animal model of SAS, is easier to operate, and yields more stable signals.

  16. An Organizational Learning Framework for Patient Safety.

    PubMed

    Edwards, Marc T

    Despite concerted effort to improve quality and safety, high reliability remains a distant goal. Although this likely reflects the challenge of organizational change, persistent controversy over basic issues suggests that weaknesses in conceptual models may contribute. The essence of operational improvement is organizational learning. This article presents a framework for identifying leverage points for improvement based on organizational learning theory and applies it to an analysis of current practice and controversy. Organizations learn from others, from defects, from measurement, and from mindfulness. These learning modes correspond with contemporary themes of collaboration, no blame for human error, accountability for performance, and managing the unexpected. The collaborative model has dominated improvement efforts. Greater attention to the underdeveloped modes of organizational learning may foster more rapid progress in patient safety by increasing organizational capabilities, strengthening a culture of safety, and fixing more of the process problems that contribute to patient harm.

  17. A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.

    ERIC Educational Resources Information Center

    Wood, Terry M.; Safrit, Margaret J.

    1987-01-01

    A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…

  18. Geopotential models of the Earth from satellite tracking, altimeter and surface gravity observations: GEM-T3 and GEM-T3S

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Nerem, R. S.; Putney, B. H.; Felsentreger, T. L.; Sanchez, B. V.; Klosko, S. M.; Patel, G. B.; Williamson, R. G.; Chinn, D. S.; Chan, J. C.

    1992-01-01

    Improved models of the Earth's gravitational field have been developed from conventional tracking data and from a combination of satellite tracking, satellite altimeter and surface gravimetric data. This combination model represents a significant improvement in the modeling of the gravity field at half-wavelengths of 300 km and longer. Both models are complete to degree and order 50. The Goddard Earth Model-T3 (GEM-T3) provides more accurate computation of satellite orbital effects as well as giving superior geoidal representation from that achieved in any previous GEM. A description of the models, their development and an assessment of their accuracy is presented. The GEM-T3 model used altimeter data from previous satellite missions in estimating the orbits, geoid, and dynamic height fields. Other satellite tracking data are largely the same as was used to develop GEM-T2, but contain certain important improvements in data treatment and expanded laser tracking coverage. Over 1300 arcs of tracking data from 31 different satellites have been used in the solution. Reliable estimates of the model uncertainties via error calibration and optimal data weighting techniques are discussed.

  19. Learned helplessness: validity and reliability of depressive-like states in mice.

    PubMed

    Chourbaji, S; Zacher, C; Sanchis-Segura, C; Dormann, C; Vollmayr, B; Gass, P

    2005-12-01

    The learned helplessness paradigm is a depression model in which animals are exposed to unpredictable and uncontrollable stress, e.g. electroshocks, and subsequently develop coping deficits for aversive but escapable situations (J.B. Overmier, M.E. Seligman, Effects of inescapable shock upon subsequent escape and avoidance responding, J. Comp. Physiol. Psychol. 63 (1967) 28-33). It represents a model with good similarity to the symptoms of depression, construct validity, and predictive validity in rats. Despite an increased need to investigate emotional, in particular depression-like, behaviors in transgenic mice, so far only a few studies have been published using the learned helplessness paradigm. One reason may be the fact that, in contrast to rats (B. Vollmayr, F.A. Henn, Learned helplessness in the rat: improvements in validity and reliability, Brain Res. Brain Res. Protoc. 8 (2001) 1-7), there is no generally accepted learned helplessness protocol available for mice. This prompted us to develop a reliable helplessness procedure in C57BL/6N mice, to exclude possible artifacts, and to establish a protocol which yields a consistent fraction of helpless mice following the shock exposure. Furthermore, we validated this protocol pharmacologically using the tricyclic antidepressant imipramine. Here, we present a mouse model with good face and predictive validity that can be used for transgenic, behavioral, and pharmacological studies.

  20. Coupling long and short term decisions in the design of urban water supply infrastructure for added reliability and flexibility

    NASA Astrophysics Data System (ADS)

    Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.

    2016-12-01

    The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty and water scarcity require a strategic combination of supply sources for reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integrating long- and short-term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operations (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal about the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion.
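
    The coordination mechanism described, in which the short-term model's Lagrange multipliers signal the marginal value of capacity to the long-term model, can be caricatured in a few lines. Costs, demand and increments below are invented, and a finite difference stands in for the multiplier an optimization solver would report.

      # Toy long/short-term coordination: expand capacity while its shadow
      # price from the short-term problem exceeds the annualized expansion cost.
      annualized_expansion_cost = 40.0     # $/unit capacity/year (invented)
      step, capacity = 10.0, 100.0

      def operating_cost(cap, demand=150.0, penalty=100.0):
          return penalty * max(demand - cap, 0.0)   # stand-in short-term problem

      def shadow_price(cap, eps=1e-3):
          # finite-difference proxy for the Lagrange multiplier on capacity
          return -(operating_cost(cap + eps) - operating_cost(cap)) / eps

      while shadow_price(capacity) > annualized_expansion_cost:
          capacity += step
      print("capacity after expansion loop:", capacity)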

  1. The Reliability of In-Training Assessment when Performance Improvement Is Taken into Account

    ERIC Educational Resources Information Center

    van Lohuizen, Mirjam T.; Kuks, Jan B. M.; van Hell, Elisabeth A.; Raat, A. N.; Stewart, Roy E.; Cohen-Schotanus, Janke

    2010-01-01

    During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the…

  2. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    NASA Astrophysics Data System (ADS)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and support the human expertise that is essential to synthesize the available information coming from different meteorological and hydrological models and from experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), in which substantial human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, estimated from perfect forecasts. The first method (the empirical approach) is based on a statistical model of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are needed to ensure good post-processing of the hydrological ensemble, markedly improving the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which cannot take hydrological dynamics and processes into account, i.e. sample heterogeneity: different processes, such as rising limbs or recessions, can correspond to the same streamflow range, yet their uncertainties differ. The dynamical approach improves the reliability, skill and sharpness of the forecasts and globally reduces confidence interval widths. Compared in detail, the dynamical approach allows a noticeable reduction of confidence intervals during recessions, where uncertainty is relatively low, and a slight widening during rising limbs or snowmelt, where uncertainty is greater. The dynamical approach, validated by forecasters' experience (the empirical approach was considered not discriminative enough), improved forecasters' confidence and the communication of uncertainties. Montanari, A. and Brath, A., (2004). A stochastic approach for assessing the uncertainty of rainfall-runoff simulations. Water Resources Research, 40, W01106, doi:10.1029/2003WR002540. Schaefli, B., Balin Talamba, D. and Musy, A., (2007). Quantifying hydrological modeling errors through a mixture of normal distributions. Journal of Hydrology, 332, 303-315.
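
    The empirical post-processing idea, dressing each raw forecast with model-error quantiles estimated from past perfect forecasts and stratified by streamflow quantile class, might look roughly like the following; the archive and the multiplicative error structure are invented.

      # Sketch: dress a raw forecast member with empirical error quantiles
      # drawn from the streamflow quantile class it falls in. Data invented.
      import numpy as np

      rng = np.random.default_rng(3)
      sim = rng.gamma(2.0, 50.0, 2000)                 # past perfect forecasts
      obs = sim * rng.lognormal(0.0, 0.2, 2000)        # synthetic observations
      rel_err = obs / sim                              # multiplicative errors

      edges = np.quantile(sim, [0.0, 0.25, 0.5, 0.75, 1.0])   # flow classes

      def dress(raw, n_dress=20):
          c = int(np.clip(np.searchsorted(edges, raw) - 1, 0, 3))
          cls = rel_err[(sim >= edges[c]) & (sim <= edges[c + 1])]
          return raw * np.quantile(cls, np.linspace(0.05, 0.95, n_dress))

      print(dress(120.0))       # 20 dressed members around one raw value

    The dynamical variant would additionally condition the sub-sampling on streamflow variation (rising limb versus recession), which is what restores sharpness where uncertainty is genuinely lower.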

  3. Efficient Unsteady Flow Visualization with High-Order Access Dependencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    We present a novel model based on high-order access dependencies for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability of data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.

  4. Customer-Driven Reliability Models for Multistate Coherent Systems

    DTIC Science & Technology

    1992-01-01

    Dissertation, Graduate College of the University of Oklahoma, Norman, Oklahoma, 1992: Customer-Driven Reliability Models for Multistate Coherent Systems (Boedigheimer). [Only report-documentation-page fragments of this record survived extraction; no abstract is recoverable.]

  5. Evaluation of animal models of neurobehavioral disorders

    PubMed Central

    van der Staay, F Josef; Arndt, Saskia S; Nordquist, Rebecca E

    2009-01-01

    Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and deciding whether action is indicated to reduce discomfort must accompany the scientific evaluation at every stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent with that used for improving animal models, the development and evaluation procedure itself may be improved by careful definition of the purpose(s) of a model and by defining better evaluation criteria, based on the proposed use of the model. PMID:19243583

  6. A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.

    PubMed

    Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun

    2017-11-29

    As a key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are well connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower network survivability in belt-type situations. However, most existing positioning solutions focus only on algorithm accuracy and do not consider any security aspects. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve energy efficiency, reliability and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented to satisfy the energy balancing strategy in resource-constrained scenarios. According to cooperative localization theory and network connection properties, the parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced by using the steepest descent method. The experimental simulations illustrate that the performance of the new scheme meets the design targets. The results also demonstrate that it improves belt-type sensor networks' survivability in terms of anti-interference, network energy saving, etc.
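
    The flavor of the algorithm, hop counts converted to ranges through a (modified) average hop distance and the position then refined by steepest descent, can be sketched as follows; the anchor layout, hop counts and hop distance are invented, and this is not the paper's exact scheme.

      # DV-Hop-style sketch: hop-count ranging plus steepest-descent refinement.
      import numpy as np

      anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
      hops = np.array([4, 3, 2])       # hop counts to each anchor (invented)
      avg_hop_dist = 14.0              # modified average hop distance (assumed)
      d_est = hops * avg_hop_dist      # estimated anchor distances

      p = anchors.mean(axis=0)         # start from the anchor centroid
      for _ in range(200):             # steepest descent on squared range residuals
          diff = p - anchors
          dist = np.linalg.norm(diff, axis=1)
          grad = 2 * ((dist - d_est) / dist) @ diff
          p -= 0.05 * grad
      print("estimated position:", p)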

  8. Critical-Inquiry-Based-Learning: Model of Learning to Promote Critical Thinking Ability of Pre-service Teachers

    NASA Astrophysics Data System (ADS)

    Prayogi, S.; Yuanita, L.; Wasis

    2018-01-01

    This study aimed to develop the Critical-Inquiry-Based-Learning (CIBL) model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed to meet criteria of validity, practicality, and effectiveness. Validation of the model involved four expert validators through a focus group discussion (FGD). The CIBL learning model was declared valid for promoting CT ability, with a validity level (Va) of 4.20 and reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated in an implementation involving 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after its implementation. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score on the pretest was -1.53 (uncritical), whereas on the posttest it was 8.76 (critical), with an N-gain score of 0.76 (high). Based on these results, it can be concluded that the developed CIBL learning model is feasible for promoting the CT ability of preservice teachers.

  9. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and thus contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
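
    A compact sketch of the estimator family described, an EKF on a first-order RC equivalent-circuit model estimating SOC from current and terminal voltage, is given below. The OCV curve and the R0/R1/C1/capacity values are invented, and the paper's proportional integral-based error adjustment is only indicated in a comment.

      # EKF SOC sketch on a first-order RC battery model; parameters invented.
      import numpy as np

      Q = 2.0 * 3600                       # capacity [A*s]
      R0, R1, C1, dt = 0.05, 0.02, 1000.0, 1.0
      a = np.exp(-dt / (R1 * C1))          # RC branch discrete-time pole

      def ocv(soc):                        # assumed open-circuit voltage curve
          return 3.0 + 1.2 * soc

      def ekf_step(x, P, i_k, v_meas, Qn, Rn):
          # predict: x = [SOC, V1]; coulomb counting + RC relaxation
          x = np.array([x[0] - dt * i_k / Q, a * x[1] + R1 * (1 - a) * i_k])
          F = np.diag([1.0, a])
          P = F @ P @ F.T + Qn
          # update with terminal voltage v = OCV(SOC) - V1 - R0*i
          H = np.array([1.2, -1.0])        # Jacobian of v w.r.t. [SOC, V1]
          innov = v_meas - (ocv(x[0]) - x[1] - R0 * i_k)
          K = P @ H / (H @ P @ H + Rn)
          # (the paper additionally feeds a PI-filtered innovation back into
          # the model parameters; omitted in this sketch)
          return x + K * innov, (np.eye(2) - np.outer(K, H)) @ P

      rng = np.random.default_rng(0)
      x, P = np.array([0.9, 0.0]), np.diag([0.01, 1e-3])
      Qn, Rn = np.diag([1e-8, 1e-6]), 1e-4
      soc, v1 = 0.85, 0.0                  # synthetic 1 A constant discharge
      for _ in range(600):
          v1 = a * v1 + R1 * (1 - a) * 1.0
          soc -= dt * 1.0 / Q
          v = ocv(soc) - v1 - R0 * 1.0 + rng.normal(0, 0.005)
          x, P = ekf_step(x, P, 1.0, v, Qn, Rn)
      print(f"SOC estimate {x[0]:.3f} vs true {soc:.3f}")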

  10. Study on residual stresses in ultrasonic torsional vibration assisted micro-milling

    NASA Astrophysics Data System (ADS)

    Lu, Zesheng; Hu, Haijun; Sun, Yazhou; Sun, Qing

    2010-10-01

    It is well known that machining-induced residual stresses can seriously affect dimensional accuracy, corrosion and wear resistance, etc., and further influence the longevity and reliability of Micro-Optical Components (MOC). In Ultrasonic Torsional Vibration Assisted Micro-milling (UTVAM), cutting parameters, vibration parameters, mill cutter parameters, and the wear length of the tool flank are the main factors which affect residual stresses. A 2D model of UTVAM was established with the FE analysis software ABAQUS. Johnson-Cook's flow stress model and shear failure principle are used as the workpiece material model and failure criterion, while friction between tool and workpiece uses a modified Coulomb's law in which a sliding friction region is combined with sticking friction. By means of FEA, the influence of cutting parameters, vibration parameters, mill cutter parameters, and tool flank wear length on residual stresses is obtained, which provides a basis for choosing optimal process parameters and improving the longevity and reliability of MOC.

  11. Prediction of chemical biodegradability using support vector classifier optimized with differential evolution.

    PubMed

    Cao, Qi; Leung, K M

    2014-09-22

    Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
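
    A rough reconstruction of the DE-SVC idea in Python: differential evolution searches the SVC's main parameters (here C and gamma, on log scales) to maximize cross-validated accuracy. A toy dataset stands in for the molecular descriptors and fingerprints used in the paper.

      # DE-tuned support vector classifier on synthetic data.
      from scipy.optimize import differential_evolution
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=20, random_state=0)

      def objective(params):
          log_c, log_g = params
          clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
          return -cross_val_score(clf, X, y, cv=5).mean()   # minimize -accuracy

      res = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)],
                                   seed=1, maxiter=20, popsize=10)
      print("C=%.3g  gamma=%.3g  CV accuracy=%.3f"
            % (10.0 ** res.x[0], 10.0 ** res.x[1], -res.fun))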

  12. Analysis of an experiment aimed at improving the reliability of transmission centre shafts.

    PubMed

    Davis, T P

    1995-01-01

    Smith (1991) presents a paper proposing the use of Weibull regression models to establish dependence of failure data (usually times) on covariates related to the design of the test specimens and test procedures. In his article Smith made the point that good experimental design was as important in reliability applications as elsewhere, and in view of the current interest in design inspired by Taguchi and others, we pay some attention in this article to that topic. A real case study from the Ford Motor Company is presented. Our main approach is to utilize suggestions in the literature for applying standard least squares techniques of experimental analysis even when there is likely to be nonnormal error, and censoring. This approach lacks theoretical justification, but its appeal is its simplicity and flexibility. For completeness we also include some analysis based on the proportional hazards model, and in an attempt to link back to Smith (1991), look at a Weibull regression model.

  13. Patient safety in surgical environments: cross-countries comparison of psychometric properties and results of the Norwegian version of the Hospital Survey on Patient Safety.

    PubMed

    Haugen, Arvid S; Søfteland, Eirik; Eide, Geir E; Nortvedt, Monica W; Aase, Karina; Harthug, Stig

    2010-09-22

    How hospital health care personnel perceive safety climate has been assessed in several countries by using the Hospital Survey on Patient Safety (HSOPS). Few studies have examined safety climate factors in surgical departments per se. This study examined the psychometric properties of a Norwegian translation of the HSOPS and also compared safety climate factors from a surgical setting to hospitals in the United States, the Netherlands and Norway. The survey included 575 surgical personnel in Haukeland University Hospital in Bergen, an 1100-bed tertiary hospital in western Norway: surgeons, operating theatre nurses, anaesthesiologists, nurse anaesthetists and ancillary personnel. Of these, 358 returned the HSOPS, a 62% response rate. We used factor analysis to examine the applicability of the HSOPS factor structure in operating theatre settings. We also performed psychometric analyses of internal consistency and construct validity. In addition, we compared the percentage of average positive responses on the patient safety climate factors with the results of the US HSOPS 2010 comparative database report. The professions differed in their perception of patient safety climate, with anaesthesia personnel having the highest mean scores. Factor analysis using the original 12-factor model of the HSOPS resulted in low reliability scores (r = 0.6) for two factors: "adequate staffing" and "organizational learning and continuous improvement". For the remaining factors, reliability was ≥ 0.7. Reliability scores improved to r = 0.8 by combining the factors "organizational learning and continuous improvement" and "feedback and communication about error" into one six-item factor, supporting an 11-factor model. The inter-item correlations were found satisfactory. The psychometric properties of the questionnaire need further investigation to be regarded as reliable in surgical environments. The operating theatre personnel perceived their hospital's patient safety climate far more negatively than health care personnel in hospitals in the United States, with perceptions more comparable to those of health care personnel in hospitals in the Netherlands. The surgical personnel in our hospital may perceive that the patient safety climate receives less focus, at least compared with the results from hospitals in the United States.
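
    For reference, the internal-consistency statistic quoted above can be computed in a few lines; the item-response matrix here is simulated, with only the sample size borrowed from the study.

      # Cronbach's alpha for a (respondents x items) score matrix.
      import numpy as np

      def cronbach_alpha(items):
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(42)
      latent = rng.normal(size=(358, 1))                        # one common factor
      responses = latent + rng.normal(0.0, 0.8, size=(358, 6))  # six noisy items
      print(f"alpha = {cronbach_alpha(responses):.2f}")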

  14. Reliability design and verification for launch-vehicle propulsion systems - Report of an AIAA Workshop, Washington, DC, May 16, 17, 1989

    NASA Astrophysics Data System (ADS)

    Launch vehicle propulsion system reliability considerations during the design and verification processes are discussed. The tools available for predicting and minimizing anomalies or failure modes are described and objectives for validating advanced launch system propulsion reliability are listed. Methods for ensuring vehicle/propulsion system interface reliability are examined and improvements in the propulsion system development process are suggested to improve reliability in launch operations. Also, possible approaches to streamline the specification and procurement process are given. It is suggested that government and industry should define reliability program requirements and manage production and operations activities in a manner that provides control over reliability drivers. Also, it is recommended that sufficient funds should be invested in design, development, test, and evaluation processes to ensure that reliability is not inappropriately subordinated to other management considerations.

  15. Journal of the British Ship Research Association. Index to Volume 34, January to December 1979. Abstracts Number 49, 883-52, 042.

    DTIC Science & Technology

    1979-01-01

    [Index fragments only; no abstract is recoverable. Legible subject headings include energy absorption capacity, model tests of tankers, pollution problems and safety precautions, corrosion prevention and protection (pipework, tanks, marine systems), standardised diesel engines, and an advanced strength analysis method to improve the reliability of cylinder covers.]

  16. A 1-D Model of the 4 Bed Molecular Sieve of the Carbon Dioxide Removal Assembly

    NASA Technical Reports Server (NTRS)

    Coker, Robert; Knox, Jim

    2015-01-01

    Developments to improve system efficiency and reliability for water and carbon dioxide separation systems on crewed vehicles combine sub-scale systems testing and multi-physics simulations. This paper describes the development of COMSOL simulations in support of the Life Support Systems (LSS) project within NASA's Advanced Exploration Systems (AES) program. Specifically, we model the 4 Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) operating on the International Space Station (ISS).

  17. An exploratory study into the effect of time-restricted internet access on face-validity, construct validity and reliability of postgraduate knowledge progress testing

    PubMed Central

    2013-01-01

    Background Yearly formative knowledge testing (also known as progress testing) was shown to have a limited construct-validity and reliability in postgraduate medical education. One way to improve construct-validity and reliability is to improve the authenticity of a test. As easily accessible internet has become inseparably linked to daily clinical practice, we hypothesized that allowing internet access for a limited amount of time during the progress test would improve the perception of authenticity (face-validity) of the test, which would in turn improve the construct-validity and reliability of postgraduate progress testing. Methods Postgraduate trainees taking the yearly knowledge progress test were asked to participate in a study where they could access the internet for 30 minutes at the end of a traditional pen and paper test. Before and after the test they were asked to complete a short questionnaire regarding the face-validity of the test. Results Mean test scores increased significantly for all training years. Trainees indicated that the face-validity of the test improved with internet access and that they would like to continue to have internet access during future testing. Internet access did not improve the construct-validity or reliability of the test. Conclusion Improving the face-validity of postgraduate progress testing, by adding the possibility to search the internet for a limited amount of time, positively influences test performance and face-validity. However, it did not change the reliability or the construct-validity of the test. PMID:24195696

  18. Reliability Quantification of the Flexure: A Critical Stirling Convertor Component

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward J.

    2004-01-01

    Uncertainties in manufacturing, the fabrication process, material behavior, loads, and boundary conditions result in variation of the stresses and strains induced in the flexures and of their fatigue life. Past experience and test data at the material coupon level revealed a significant amount of scatter in fatigue life. Owing to these facts, designing the flexure with conventional approaches, based on safety factors or on traditional reliability arguments drawn from similar equipment, does not provide a direct measure of reliability. Additionally, it may not be feasible to run actual long-term fatigue tests due to cost and time constraints, so it is difficult to ascertain the material fatigue strength limit. The objective of this paper is to present a methodology, and quantified results from numerical simulation, for the structural reliability of flexures used in the Stirling convertor. The proposed approach is based on the finite element analysis method in combination with the random fatigue limit model, which includes uncertainties in material fatigue life. Additionally, the sensitivity of fatigue life reliability to the design variables is quantified, and its use in developing guidelines to improve the design, manufacturing, quality control and inspection processes is described.
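
    Conceptually, the probabilistic approach can be reduced to propagating both sources of scatter through Monte Carlo sampling: the load-induced stress from the finite element results and the random fatigue limit of the material. All distributions below are invented for illustration.

      # Monte Carlo reliability sketch with a random fatigue limit.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 1_000_000
      fatigue_limit = rng.lognormal(np.log(260.0), 0.10, n)   # random endurance limit (MPa)
      for stress_sd in (12.0, 20.0, 30.0):                    # crude sensitivity study
          stress = rng.normal(180.0, stress_sd, n)            # FE stress with scatter (MPa)
          print(f"stress sd {stress_sd:5.1f}  reliability {(fatigue_limit > stress).mean():.5f}")

    Varying one input distribution at a time, as in the loop above, is a crude stand-in for the design-variable sensitivity quantification the abstract describes.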

  19. The new GRID Hamilton Rating Scale for Depression demonstrates excellent inter-rater reliability for inexperienced and experienced raters before and after training.

    PubMed

    Tabuse, Hideaki; Kalali, Amir; Azuma, Hideki; Ozaki, Norio; Iwata, Nakao; Naitoh, Hiroshi; Higuchi, Teruhiko; Kanba, Shigenobu; Shioe, Kunihiko; Akechi, Tatsuo; Furukawa, Toshi A

    2007-09-30

    The Hamilton Rating Scale for Depression (HAMD) is the de facto international gold standard for the assessment of depression. There are some criticisms, however, especially with regard to its inter-rater reliability, due to the lack of standardized questions or explicit scoring procedures. The GRID-HAMD was developed to provide standardized explicit scoring conventions and a structured interview guide for administration and scoring of the HAMD. We developed the Japanese version of the GRID-HAMD and examined its inter-rater reliability among experienced and inexperienced clinicians (n=70), how rater characteristics may affect it, and how training can improve it in the course of a model training program using videotaped interviews. The results showed that the inter-rater reliability of the GRID-HAMD total score was excellent to almost perfect and those of most individual items were also satisfactory to excellent, both with experienced and inexperienced raters, and both before and after the training. With its standardized definitions, questions and detailed scoring conventions, the GRID-HAMD appears to be the best achievable set of interview guides for the HAMD and can provide a solid tool for highly reliable assessment of depression severity.

  20. Space reliability technology - A historical perspective

    NASA Technical Reports Server (NTRS)

    Cohen, H.

    1984-01-01

    The progressive improvements in reliability of launch vehicles is traced from the Vanguard rocket to the STS. The Vanguard, built with minimal redundancy and a high mass ratio, was used as an operational vehicle midway through its test program in an attempt to meet the perceived challenge represented by the Sputnik. The fourth Vanguard failed due to inadequate contamination prevention and lack of inspection ports. Automatic firing sequences were adopted for the Titan rockets, which were an order of magnitude larger than the Vanguard and therefore had room for interior inspections. Qualification testing and reporting were introduced for components, along with X ray inspection of fuel tank welds. Dual systems were added for flight critical components when the Titan became man-rated for the Gemini program. Designs incorporated full failure mode effects and criticality analyses for the Apollo program, which exposed the limits of applicability of numerical reliability models. Fault tree analyses and program milestone reviews were initiated. The worth of man-in-the-loop in space activities for reliability was demonstrated with the rescue of Skylab after solar panel and meteoroid shield failures. It is now the reliability of the payload, rather than the vehicle, that is questioned for Shuttle launches.

  1. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 2: HARP tutorial

    NASA Technical Reports Server (NTRS)

    Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.

  2. Ensemble assimilation of ARGO temperature profile, sea surface temperature and Altimetric satellite data into an eddy permitting primitive equation model of the North Atlantic ocean

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2015-04-01

    Sea surface height, sea surface temperature and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. 60 ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics, with observations used in the assimilation experiments and with independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. Regarding the deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. Regarding the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement from the assimilation is demonstrated using these validation metrics. Finally, the deterministic and probabilistic validations are analysed jointly, and their consistency and complementarity are highlighted. Highly reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.
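
    The probabilistic score at the heart of this validation can be computed directly from the ensemble. The sketch below uses the standard sample-based CRPS estimator, CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|, with an invented 60-member ensemble.

      # Sample-based CRPS of an ensemble forecast against one observation.
      import numpy as np

      def crps_ensemble(members, obs):
          members = np.asarray(members, dtype=float)
          term1 = np.abs(members - obs).mean()
          term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
          return term1 - term2

      rng = np.random.default_rng(5)
      ens = rng.normal(20.0, 1.5, 60)        # 60 members, as in the experiment
      print(f"CRPS = {crps_ensemble(ens, 21.2):.3f}")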

  3. How to improve an un-alterable model forecast? A sequential data assimilation based error updating approach

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.

    2012-12-01

    The accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. Improving hourly reservoir inflow forecasts over a 24-hour lead time is considered here with the day-ahead (Elspot) market of the Nordic power exchange in perspective. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and was kept to minimal complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach contain suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the procedure for improving the accuracy of inflow forecasts at lead times up to 24 hours, and its reliability in different seasons of the year, are illustrated and discussed thoroughly.

  4. Assessing institutional support for Hispanic nursing student retention: a study to evaluate the psychometric properties of two self-assessment inventories.

    PubMed

    Bond, Mary Lou; Cason, Carolyn L

    2014-01-01

    To assess the content validity and internal consistency reliability of the Healthcare Professions Education Program Self-Assessment (PSA) and the Institutional Self-Assessment for Factors Supporting Hispanic Student Retention (ISA). Health disparities among vulnerable populations are among the top priorities demanding attention in the United States. Efforts to recruit and retain Hispanic nursing students are essential. Based on a sample of provosts, deans/directors, and an author of the Model of Institutional Support, participants commented on the perceived validity and usefulness of each item of the PSA and ISA. Internal consistency reliability was calculated by Cronbach's alpha using responses from nursing schools in states with large Hispanic populations. The ISA and PSA were found to be reliable and valid tools for assessing institutional friendliness. The instruments highlight strengths and identify potential areas of improvement at institutional and program levels.

  5. 18 CFR 39.3 - Electric Reliability Organization certification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... operators of the Bulk-Power System, and other interested parties for improvement of the Electric Reliability... Reliability Standards that provide for an adequate level of reliability of the Bulk-Power System, and (2) Has...

  6. ASME V\\&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance toward improved accuracy without increased computational demands.
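
    A minimal sketch of the surrogate-based workflow follows: a Gaussian-process emulator is trained on a handful of expensive runs, and uncertainty is then propagated by Monte Carlo on the cheap surrogate. The one-dimensional "simulator" and all settings are invented stand-ins, not the challenge-problem model.

    ```python
    # Surrogate-based uncertainty propagation: fit a GP to a few expensive
    # simulations, then sample the surrogate thousands of times for free.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulator(x):          # placeholder for a long-running code
        return np.sin(3 * x) + 0.5 * x

    rng = np.random.default_rng(3)
    X_train = rng.uniform(0, 2, size=(12, 1))      # only 12 expensive runs
    y_train = expensive_simulator(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
    gp.fit(X_train, y_train)

    X_mc = rng.normal(1.0, 0.2, size=(10_000, 1))  # uncertain input
    y_mc = gp.predict(X_mc)                        # 10k "runs" for free
    print(y_mc.mean(), y_mc.std())                 # propagated uncertainty
    ```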

  7. Applying A Multi-Objective Based Procedure to SWAT Modelling in Alpine Catchments

    NASA Astrophysics Data System (ADS)

    Tuo, Y.; Disse, M.; Chiogna, G.

    2017-12-01

    In alpine catchments, water management practices can lead to conflicts between upstream and downstream stakeholders, as in the Adige river basin (Italy). Correct prediction of available water resources plays an important part, for example, in defining how much water can be stored for hydropower production in upstream reservoirs without affecting agricultural activities downstream. Snow is a crucial hydrological component that strongly affects the seasonal behavior of streamflow, so a realistic representation of snow dynamics is fundamental for water management operations in alpine catchments. The Soil and Water Assessment Tool (SWAT) model has been applied in alpine catchments worldwide. However, in catchment-scale applications, snow parameters have generally been calibrated against streamflow records rather than snow measurements, which may yield streamflow predictions with the wrong snowmelt contribution. This work highlights the importance of including snow measurements in the calibration of the SWAT model for alpine hydrology and compares various calibration methodologies. In addition to discharge records, snow water equivalent time series at both the subbasin scale and individual monitoring stations were used to evaluate model performance against the SWAT subbasin and elevation-band snow outputs. Comparing results obtained by calibrating the model with discharge data only against those obtained with discharge and snow water equivalent data together, we show that the latter approach improves the reliability of the snow simulations while maintaining good streamflow estimates. With a more reliable representation of snow dynamics, the hydrological model can provide more accurate references for proposing adequate water management solutions. This study offers the wide SWAT user community an effective approach to improve streamflow predictions in alpine catchments and hence support decision makers in water allocation.
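
    The sketch below illustrates one simple way to fold snow observations into calibration: score each candidate parameter set on both discharge and snow water equivalent. The weighted-sum aggregation, the NSE metric and the synthetic series are assumptions, not the study's exact procedure (which works with SWAT outputs).

    ```python
    # Joint discharge + SWE calibration objective; higher is better.
    import numpy as np

    def nse(sim: np.ndarray, obs: np.ndarray) -> float:
        """Nash-Sutcliffe efficiency (1 = perfect fit)."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def joint_score(q_sim, q_obs, swe_sim, swe_obs, w_q=0.5):
        """Weighted sum of discharge and SWE skill; a Pareto search is the
        richer alternative to this scalarization."""
        return w_q * nse(q_sim, q_obs) + (1 - w_q) * nse(swe_sim, swe_obs)

    rng = np.random.default_rng(4)
    t = np.linspace(0, 6, 365)
    q_obs = 10 + 5 * np.sin(t);  q_sim = q_obs + rng.normal(0, 1, 365)
    swe_obs = np.clip(100 * np.cos(t), 0, None)
    swe_sim = swe_obs + rng.normal(0, 8, 365)
    print(joint_score(q_sim, q_obs, swe_sim, swe_obs))
    ```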

  8. A Novel Physiology-Based Mathematical Model to Estimate Red Blood Cell Lifespan in Different Human Age Groups.

    PubMed

    An, Guohua; Widness, John A; Mock, Donald M; Veng-Pedersen, Peter

    2016-09-01

    Direct measurement of red blood cell (RBC) survival in humans has improved from the original accurate but limited differential agglutination technique to the current reliable, safe, and accurate biotin method. Despite this, all of these methods are time consuming and require blood sampling over several months to determine the RBC lifespan. For situations in which RBC survival information must be obtained quickly, these methods are not suitable. With the exception of adults and infants, RBC survival has not been extensively investigated in other age groups. To address this need, we developed a novel, physiology-based mathematical model that quickly estimates RBC lifespan in healthy individuals at any age. The model is based on the assumption that the total number of RBC recirculations during the lifespan of each RBC (denoted N_max) is relatively constant for all age groups. The model was initially validated using the data from our prior infant and adult biotin-labeled red blood cell studies and then extended to the other age groups. The model generated the following estimated RBC lifespans in 2-year-old, 5-year-old, 8-year-old, and 10-year-old children: 62, 74, 82, and 86 days, respectively. We speculate that this model has useful clinical applications. For example, HbA1c testing is not reliable in identifying children with diabetes because HbA1c is directly affected by RBC lifespan. Because our model can estimate RBC lifespan in children at any age, corrections to HbA1c values based on the model-generated RBC lifespan could improve diabetes diagnosis as well as therapy in children.
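
    A minimal sketch of the model's central assumption follows: with N_max fixed across ages, lifespan scales with the duration of one circulation (blood volume over cardiac output). The physiologic values below are rough placeholders, not the authors' calibrated inputs.

    ```python
    # Lifespan = N_max x (time of one circulation). Calibrate N_max on an
    # assumed adult lifespan, then apply it to another age group.
    blood_volume_l = {"adult": 5.0, "child_5y": 1.4}        # assumed values
    cardiac_output_l_min = {"adult": 5.0, "child_5y": 2.5}  # assumed values

    def circulation_time_min(group: str) -> float:
        return blood_volume_l[group] / cardiac_output_l_min[group]

    adult_lifespan_days = 115.0                             # assumed reference
    n_max = adult_lifespan_days * 24 * 60 / circulation_time_min("adult")
    child = n_max * circulation_time_min("child_5y") / (24 * 60)
    print(f"N_max ~ {n_max:.0f}; estimated 5-year-old lifespan ~ {child:.0f} d")
    ```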

  9. The Impact of Statistical Adjustment on Conditional Standard Errors of Measurement in the Assessment of Physician Communication Skills

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Clauser, Brian E.; Furman, Gail E.

    2010-01-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary…

  10. Embedded Diagnostic/Prognostic Reasoning and Information Continuity for Improved Avionics Maintenance

    DTIC Science & Technology

    2006-01-01

    enabling technologies such as built-in-test, advanced health monitoring algorithms, reliability and component aging models, prognostics methods, and... deployment and acceptance. This framework and vision is consistent with the onboard PHM (Prognostic and Health Management) as well as advanced... monitored. In addition to the prognostic forecasting capabilities provided by monitoring system power, multiple confounding errors by electronic

  11. Curriculum-Based Measurement of Reading: An Evaluation of Frequentist and Bayesian Methods to Model Progress Monitoring Data

    ERIC Educational Resources Information Center

    Christ, Theodore J.; Desjardins, Christopher David

    2018-01-01

    Curriculum-Based Measurement of Oral Reading (CBM-R) is often used to monitor student progress and guide educational decisions. Ordinary least squares regression (OLSR) is the most widely used method to estimate the slope, or rate of improvement (ROI), even though published research demonstrates OLSR's lack of validity and reliability, and…
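
    Even in this excerpt the core computation is clear; the sketch below shows the OLS slope, i.e. the rate of improvement, for a short progress-monitoring series. The weekly scores are synthetic.

    ```python
    # OLS slope ("rate of improvement") for weekly CBM-R scores.
    import numpy as np

    weeks = np.arange(10, dtype=float)
    wcpm = 60 + 1.5 * weeks + np.random.default_rng(9).normal(0, 4, 10)
    slope, intercept = np.polyfit(weeks, wcpm, 1)
    print(f"ROI ~ {slope:.2f} words correct per minute per week")
    ```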

  12. Closed-loop control of targeted ultrasound drug delivery across the blood–brain/tumor barriers in a rat glioma model

    PubMed Central

    Sun, Tao; Zhang, Yongzhi; Power, Chanikarn; Alexander, Phillip M.; Sutton, Jonathan T.; Aryal, Muna; Vykhodtseva, Natalia; Miller, Eric L.; McDannold, Nathan J.

    2017-01-01

    Cavitation-facilitated microbubble-mediated focused ultrasound therapy is a promising method of drug delivery across the blood–brain barrier (BBB) for treating many neurological disorders. Unlike ultrasound thermal therapies, during which magnetic resonance thermometry can serve as a reliable treatment control modality, real-time control of modulated BBB disruption with undetectable vascular damage remains a challenge. Here a closed-loop cavitation controlling paradigm that sustains stable cavitation while suppressing inertial cavitation behavior was designed and validated using a dual-transducer system operating at the clinically relevant ultrasound frequency of 274.3 kHz. Tests in the normal brain and in the F98 glioma model in vivo demonstrated that this controller enables reliable and damage-free delivery of a predetermined amount of the chemotherapeutic drug (liposomal doxorubicin) into the brain. The maximum concentration level of delivered doxorubicin exceeded levels previously shown (using uncontrolled sonication) to induce tumor regression and improve survival in rat glioma. These results confirmed the ability of the controller to modulate the drug delivery dosage within a therapeutically effective range, while improving safety control. It can be readily implemented clinically and potentially applied to other cavitation-enhanced ultrasound therapies. PMID:29133392
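
    The sketch below conveys the closed-loop idea in its simplest form: ramp the exposure level toward a stable-cavitation setpoint and back off whenever broadband (inertial) emission appears. Gains, thresholds and the monitored signals are illustrative; the paper's dual-transducer controller is considerably more sophisticated.

    ```python
    # One controller step per sonication burst, fed by levels extracted from
    # the passive cavitation detector spectrum. All numbers are invented.
    def control_step(pressure, harmonic_level, broadband_level,
                     setpoint=1.0, broadband_limit=0.2, gain=0.05,
                     backoff=0.5, p_max=1.0):
        if broadband_level > broadband_limit:
            return pressure * backoff          # suppress inertial cavitation
        error = setpoint - harmonic_level      # drive stable cavitation
        return min(p_max, max(0.0, pressure + gain * error))

    p = 0.1
    for harmonic, broadband in [(0.2, 0.0), (0.5, 0.0), (0.9, 0.3), (0.6, 0.0)]:
        p = control_step(p, harmonic, broadband)
        print(round(p, 3))
    ```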

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joe; Lesieutre, Bernard

    The increased need to manage California's electricity grid in real time is a result of the ongoing transition from a system operated by vertically-integrated utilities serving native loads to one operated by an independent system operator supporting competitive energy markets. During this transition period, the traditional approach to reliability management -- construction of new transmission lines -- has not been pursued due to unresolved issues related to the financing and recovery of transmission project costs. In the absence of investments in new transmission infrastructure, the best strategy for managing reliability is to equip system operators with better real-time information about actual operating margins so that they can better understand and manage the risk of operating closer to the edge. A companion strategy is to address known deficiencies in offline modeling tools that are needed to ground the use of improved real-time tools. This project: (1) developed and conducted first-ever demonstrations of two prototype real-time software tools for voltage security assessment and phasor monitoring; and (2) prepared a scoping study on improving load and generator response models. Additional funding through two separate subsequent work authorizations has already been provided to build upon the work initiated in this project.

  14. Composite Stress Rupture: A New Reliability Model Based on Strength Decay

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2012-01-01

    A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model is shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
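
    A minimal sketch of the strength-decay reasoning follows: sample initial strengths, let them decay mostly late in life, screen out vessels that fail a proof load, and count survivors at the service stress. The Weibull parameters and decay law are invented stand-ins for the paper's calibrated model.

    ```python
    # Monte Carlo reliability under strength decay with proof-test screening.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    s0 = 1.2 * rng.weibull(20.0, n)        # initial strength / design stress
    t_ref, shape = 30.0, 8.0               # decay timescale (yr); a large
                                           # shape concentrates decay late in life

    def strength(t):
        return s0 * np.clip(1.0 - (t / t_ref) ** shape, 0.0, None)

    proof, service = 1.05, 0.9             # stress levels (fractions of design)
    survivors = s0 > proof                 # proof load removes weak vessels
    for t in (5, 10, 15, 20):
        rel = np.mean(strength(t)[survivors] > service)
        print(t, round(float(rel), 5))
    ```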

  15. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the north-western Alps. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to high hydrogeological risk across the territory, as mass movements (mainly shallow rapid landslides and rock falls) affect 70% of the municipal areas. An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of the deterministic model HIRESSS (HIgh REsolution Slope Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained were analysed to assess the relationships among the different parameters and the bedrock lithology. The soils analysed at 12 survey points are mainly composed of sand and gravel, with highly variable silt contents. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), together with the median ks (10E-6 m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input parameter maps for HIRESSS (static data); further static data are volume weight, residual water content, porosity and grain-size index. To improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESSS is a physically based distributed slope stability simulator for analysing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software runs in real time by assimilating weather data and uses Monte Carlo simulation techniques to manage the geotechnical and hydrological input parameters. In this context, an assessment of the factors controlling the geotechnical and hydrological features is crucial for understanding the occurrence of slope instability mechanisms and for providing reliable forecasts of hydrogeological hazard, especially in relation to weather events. The model and the soil characterization were applied in back analysis, in order to assess the reliability of the model through validation against landslide events that occurred during the study period. The validation was performed on four past events of intense rainfall that affected the Valle d'Aosta region between 2008 and 2010, triggering fast shallow landslides. The simulations show a substantial improvement in the reliability of the results compared to the use of literature parameters. A statistical analysis of the HIRESSS outputs in terms of failure probability has been carried out in order to define reliable alert levels for regional landslide early warning systems.
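
    The Monte Carlo handling of parameter uncertainty can be illustrated with a generic infinite-slope factor of safety, sampling friction angle and cohesion from the measured ranges quoted above. This is the textbook limit-equilibrium formula with assumed slope geometry and pore pressure, not the HIRESSS implementation.

    ```python
    # Failure probability from an infinite-slope FS with sampled soil strength.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 50_000
    phi = np.deg2rad(rng.uniform(25.6, 34.3, n))  # friction angle (measured range)
    c = rng.uniform(0.0, 9.3, n) * 1e3            # cohesion, Pa (measured range)
    c_root = 1.0e3                                # root cohesion, Pa (assumed)
    gamma, z, beta = 19e3, 1.5, np.deg2rad(35)    # unit weight, depth, slope (assumed)
    u = 4.0e3                                     # pore pressure, Pa (assumed)

    fs = (c + c_root + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)) \
         / (gamma * z * np.sin(beta) * np.cos(beta))
    print("P(failure) =", np.mean(fs < 1.0))
    ```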

  16. Cross-cultural adaptation and validation of the Chinese Comfort, Afford, Respect, and Expect scale of caring nurse-patient interaction competence.

    PubMed

    Chung, Hui-Chun; Hsieh, Tsung-Cheng; Chen, Yueh-Chih; Chang, Shu-Chuan; Hsu, Wen-Lin

    2017-11-29

    To investigate the construct validity and reliability of the Chinese Comfort, Afford, Respect, and Expect scale, which can be used to determine clinical nurses' competence. The results can also serve to promote nursing competence and improve patient satisfaction. Nurse-patient interaction is critical for improving nursing care quality. However, to date, no relevant validated instrument has been available for assessing caring nurse-patient interaction competence in clinical practice. This study adapted and validated the Chinese version of the caring nurse-patient interaction scale. A cross-cultural adaptation and validation study. A psychometric analysis of the four major constructs of the Chinese Comfort, Afford, Respect, and Expect scale was conducted on a sample of 356 nurses from a medical centre in China. Item analysis and exploratory factor analysis were adopted to extract the main components; both internal consistency and correlation coefficients were used to examine reliability, and a confirmatory factor analysis was adopted to verify the construct validity. The goodness-of-fit results of the model were strong. The standardised factor loadings of the Chinese Comfort, Afford, Respect, and Expect scale ranged from 0.73 to 0.95, indicating that the validity and reliability of this instrument were favourable. Moreover, the 12 extracted items explained 95.9% of the measured content of the Chinese Comfort, Afford, Respect, and Expect scale. The results serve as empirical evidence regarding the validity and reliability of the Chinese Comfort, Afford, Respect, and Expect scale. Hospital nurses are increasingly asked by patients and their family members to help identify health problems and assist with medical decision-making. Therefore, enhancing nurses' competence in nurse-patient interactions is crucial for nursing and hospital managers seeking to improve nursing care quality. The Chinese caring nurse-patient interaction scale can serve as an effective tool for nursing and hospital managers to evaluate the caring nurse-patient interaction competence of nurses and improve inpatient satisfaction and quality of care. © 2017 John Wiley & Sons Ltd.

  17. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs

    PubMed Central

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Cai, Zhiping; Wang, Tian

    2018-01-01

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the energy remaining in the network to increase the transmission power of most nodes, providing either higher communication reliability or greater transmission distance. Two optimization strategies, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When the transmission power is increased, the COOR(R) strategy chooses a next-hop candidate node with higher communication reliability at the same distance than traditional opportunistic routing would. Since the reliability of data transmission is improved, the delay of data reaching the sink is reduced by shortening the communication time between candidate nodes. The COOR(P) strategy, on the other hand, prefers a node with the same communication reliability at a longer distance. As a result, network performance is improved for the following reasons: (a) the delay is reduced, as fewer hops are needed to reach the sink when transmission distances are longer; and (b) the end-to-end reliability is improved, since it is the product of the per-hop reliabilities and the hop count is reduced while each hop remains as reliable as in the traditional method. After a detailed analysis of the network's energy consumption, optimized transmission power values for different areas are given. On the basis of extensive experimental and theoretical analyses, the results show that the COOR scheme increases communication reliability by 36.62–87.77%, decreases delay by 21.09–52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN. PMID:29751589
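
    The sketch below illustrates the two candidate-selection strategies under a toy channel model: COOR(R) spends the extra power on reliability, COOR(P) on progress toward the sink. The reliability-versus-distance function, the reliability floor and the power value are assumptions; the paper derives these from its own channel and energy models.

    ```python
    # Next-hop selection for the two COOR strategies (toy channel model).
    import math

    def link_reliability(distance, tx_power):
        return math.exp(-distance / (40.0 * tx_power))   # invented decay law

    def choose_next_hop(candidates, tx_power, strategy, r_floor=0.5):
        """candidates: list of (node_id, distance_toward_sink)."""
        scored = [(nid, d, link_reliability(d, tx_power)) for nid, d in candidates]
        if strategy == "COOR(R)":          # favour the most reliable link
            return max(scored, key=lambda s: s[2])
        if strategy == "COOR(P)":          # favour progress, keep reliability
            ok = [s for s in scored if s[2] >= r_floor]  # above a floor
            return max(ok or scored, key=lambda s: s[1])
        raise ValueError(strategy)

    cands = [("a", 20.0), ("b", 35.0), ("c", 50.0)]
    print(choose_next_hop(cands, tx_power=1.5, strategy="COOR(R)"))
    print(choose_next_hop(cands, tx_power=1.5, strategy="COOR(P)"))
    ```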

  18. A Cross-Layer Optimized Opportunistic Routing Scheme for Loss-and-Delay Sensitive WSNs.

    PubMed

    Xu, Xin; Yuan, Minjiao; Liu, Xiao; Liu, Anfeng; Xiong, Neal N; Cai, Zhiping; Wang, Tian

    2018-05-03

    In wireless sensor networks (WSNs), communication links are typically error-prone and unreliable, so providing reliable and timely data routing for loss- and delay-sensitive applications is a challenging issue. Additionally, with specific thresholds in practical applications, loss and delay sensitivity implies requirements for high reliability and low delay. Opportunistic Routing (OR) has been well studied in WSNs as a way to improve reliability over error-prone and unreliable wireless links, where the transmission power is assumed to be identical across the whole network. In this paper, a Cross-layer Optimized Opportunistic Routing (COOR) scheme is proposed to improve communication link reliability and reduce delay for loss-and-delay-sensitive WSNs. The main contribution of the COOR scheme is to make full use of the energy remaining in the network to increase the transmission power of most nodes, providing either higher communication reliability or greater transmission distance. Two optimization strategies, referred to as COOR(R) and COOR(P), are proposed to improve network performance. When the transmission power is increased, the COOR(R) strategy chooses a next-hop candidate node with higher communication reliability at the same distance than traditional opportunistic routing would. Since the reliability of data transmission is improved, the delay of data reaching the sink is reduced by shortening the communication time between candidate nodes. The COOR(P) strategy, on the other hand, prefers a node with the same communication reliability at a longer distance. As a result, network performance is improved for the following reasons: (a) the delay is reduced, as fewer hops are needed to reach the sink when transmission distances are longer; and (b) the end-to-end reliability is improved, since it is the product of the per-hop reliabilities and the hop count is reduced while each hop remains as reliable as in the traditional method. After a detailed analysis of the network's energy consumption, optimized transmission power values for different areas are given. On the basis of extensive experimental and theoretical analyses, the results show that the COOR scheme increases communication reliability by 36.62–87.77%, decreases delay by 21.09–52.48%, and balances the energy consumption of 86.97% of the nodes in the WSN.

  19. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.

    PubMed

    Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip

    2014-11-01

    This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter and used as inputs to the MFMK-SVM model. This provides multiple features per sample for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) algorithm, is proposed; it integrates a type-2 fuzzy criterion into the clustering optimization process to improve the robustness and reliability of the clustering results through iterative optimization. Furthermore, the clustering validity is employed to select the training samples for learning the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the power of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.
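
    A minimal sketch of the multiple-kernel idea follows: two feature views, each with its own RBF kernel, combined into a weighted-sum kernel for an SVM. The feature matrices, weights and data are placeholders for the intensity/gradient and Gabor/C1 inputs described above.

    ```python
    # Multiple-kernel SVM via a precomputed weighted-sum kernel.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    X1 = rng.normal(size=(200, 3))    # view 1: e.g. intensity + gradient
    X2 = rng.normal(size=(200, 8))    # view 2: e.g. Gabor/C1 texture
    y = (X1[:, 0] + X2[:, 0] > 0).astype(int)

    def combined_kernel(A1, B1, A2, B2, w=0.6):
        return w * rbf_kernel(A1, B1, gamma=0.5) + (1 - w) * rbf_kernel(A2, B2, gamma=0.1)

    K = combined_kernel(X1, X1, X2, X2)
    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))            # training accuracy of the sketch
    ```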

  20. Measurement and modeling of intrinsic transcription terminators

    PubMed Central

    Cambray, Guillaume; Guimaraes, Joao C.; Mutalik, Vivek K.; Lam, Colin; Mai, Quynh-Anh; Thimmaiah, Tim; Carothers, James M.; Arkin, Adam P.; Endy, Drew

    2013-01-01

    The reliable forward engineering of genetic systems remains limited by the ad hoc reuse of many types of basic genetic elements. Although a few intrinsic prokaryotic transcription terminators are used routinely, termination efficiencies have not been studied systematically. Here, we developed and validated a genetic architecture that enables reliable measurement of termination efficiencies. We then assembled a collection of 61 natural and synthetic terminators that collectively encode termination efficiencies across an ∼800-fold dynamic range within Escherichia coli. We simulated co-transcriptional RNA folding dynamics to identify competing secondary structures that might interfere with terminator folding kinetics or impact termination activity. We found that structures extending beyond the core terminator stem are likely to increase terminator activity. By excluding terminators encoding such context-confounding elements, we were able to develop a linear sequence-function model that can be used to estimate termination efficiencies (r = 0.9, n = 31) better than models trained on all terminators (r = 0.67, n = 54). The resulting systematically measured collection of terminators should improve the engineering of synthetic genetic systems and also advance quantitative modeling of transcription termination. PMID:23511967
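
    The sketch below shows the general shape of such a linear sequence-function model: regress termination efficiency on simple terminator features. The two features and the data are synthetic; the paper fits its model to measured efficiencies of the 31 terminators free of confounding structures.

    ```python
    # Linear sequence-function model for termination efficiency (synthetic).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(8)
    n = 31
    hairpin_dG = rng.uniform(-25, -5, n)       # kcal/mol; more negative = stabler
    u_tract_len = rng.integers(4, 10, n).astype(float)
    X = np.column_stack([hairpin_dG, u_tract_len])
    te = np.clip(-0.02 * hairpin_dG + 0.05 * u_tract_len
                 + rng.normal(0, 0.05, n), 0, 1)   # synthetic efficiencies

    model = LinearRegression().fit(X, te)
    r = np.corrcoef(model.predict(X), te)[0, 1]
    print(model.coef_, round(float(r), 2))
    ```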
