Sample records for yield reliable results

  1. Strategy for continuous improvement in IC manufacturability, yield, and reliability

    NASA Astrophysics Data System (ADS)

    Dreier, Dean J.; Berry, Mark; Schani, Phil; Phillips, Michael; Steinberg, Joe; DePinto, Gary

    1993-01-01

Continual improvements in yield, reliability, and manufacturability measure a fab and ultimately result in Total Customer Satisfaction. A new organizational and technical methodology for continuous defect reduction has been established as a formal feedback loop, which relies on yield and reliability data, failed-bit-map analysis, analytical tools, inline monitoring, cross-functional teams, and a defect engineering group. The strategy requires the fastest possible detection, identification, and implementation of corrective actions. Feedback cycle time is minimized at all points to improve yield and reliability and reduce costs, essential for competitiveness in the memory business. The payoff was a 9.4X reduction in defectivity and a 6.2X improvement in reliability of 256K fast SRAMs over 20 months.

  2. Reliability of reservoir firm yield determined from the historical drought of record

    USGS Publications Warehouse

    Archfield, S.A.; Vogel, R.M.

    2005-01-01

    The firm yield of a reservoir is typically defined as the maximum yield that could have been delivered without failure during the historical drought of record. In the future, reservoirs will experience droughts that are either more or less severe than the historical drought of record. The question addressed here is what the reliability of such systems will be when operated at the firm yield. To address this question, we examine the reliability of 25 hypothetical reservoirs sited across five locations in the central and western United States. These locations provided a continuous 756-month streamflow record spanning the same time interval. The firm yield of each reservoir was estimated from the historical drought of record at each location. To determine the steady-state monthly reliability of each firm-yield estimate, 12,000-month synthetic records were generated using the moving-blocks bootstrap method. Bootstrapping was repeated 100 times for each reservoir to obtain an average steady-state monthly reliability R, the number of months the reservoir did not fail divided by the total months. Values of R were greater than 0.99 for 60 percent of the study reservoirs; the other 40 percent ranged from 0.95 to 0.98. Estimates of R were highly correlated with both the level of development (ratio of firm yield to average streamflow) and average lag-1 monthly autocorrelation. Together these two predictors explained 92 percent of the variability in R, with the level of development alone explaining 85 percent of the variability. Copyright ASCE 2005.
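The resampling scheme described in this record can be illustrated with a rough sketch: generate long synthetic inflow records with a moving-blocks bootstrap, run each through a simple mass-balance reservoir at the firm yield, and average the monthly reliability over replicates. All parameters, the gamma inflow model, and the storage rule below are invented for illustration, not taken from the study.

```python
import numpy as np

def moving_blocks_bootstrap(series, length, block=24, rng=None):
    """Build a synthetic record by concatenating random contiguous blocks."""
    rng = rng or np.random.default_rng()
    starts = rng.integers(0, len(series) - block, size=length // block + 1)
    return np.concatenate([series[s:s + block] for s in starts])[:length]

def monthly_reliability(inflow, firm_yield, capacity):
    """Fraction of months a simple mass-balance reservoir meets the yield."""
    storage, failures = capacity, 0
    for q in inflow:
        storage = min(storage + q - firm_yield, capacity)
        if storage < 0:              # demand not met this month
            failures += 1
            storage = 0.0
    return 1.0 - failures / len(inflow)

rng = np.random.default_rng(0)
historic = rng.gamma(2.0, 50.0, size=756)          # toy 756-month record
firm_yield, capacity = 0.6 * historic.mean(), 12.0 * historic.mean()
# average steady-state monthly reliability R over bootstrap replicates
R = np.mean([monthly_reliability(
                 moving_blocks_bootstrap(historic, 12_000, rng=rng),
                 firm_yield, capacity)
             for _ in range(100)])
```

Block resampling (rather than drawing single months independently) preserves short-range persistence in the flows, which matters because the study found reliability strongly tied to lag-1 monthly autocorrelation.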

  3. Increased Reliability for Single-Case Research Results: Is the Bootstrap the Answer?

    ERIC Educational Resources Information Center

    Parker, Richard I.

    2006-01-01

There is a need for objective and reliable single-case research (SCR) results in the movement toward evidence-based interventions (EBI), for inclusion in meta-analyses, and for funding accountability in clinical contexts. Yet SCR deals with data that often do not conform to parametric assumptions and that yield results of low reliability. A…

  4. Reliable results from stochastic simulation models

    Treesearch

    Donald L., Jr. Gochenour; Leonard R. Johnson

    1973-01-01

Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., in computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point at which model results become reliable. If the results are...
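The stopping rule this record alludes to can be sketched generically: keep extending the run until the confidence-interval half-width on the output mean falls below a tolerance. This is a standard sequential-sampling illustration, not the authors' 1973 procedure; the tolerance and test model are arbitrary.

```python
import math
import random

def run_until_reliable(simulate, tol=0.05, z=1.96, batch=100, max_n=100_000):
    """Extend a stochastic simulation in batches until the 95% CI
    half-width on the mean output drops below `tol`."""
    xs = []
    while len(xs) < max_n:
        xs.extend(simulate() for _ in range(batch))
        n = len(xs)
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / (n - 1)
        half = z * math.sqrt(var / n)        # CI half-width on the mean
        if half < tol:
            break
    return mean, half, n

random.seed(1)
mean, half, n = run_until_reliable(lambda: random.gauss(10.0, 2.0))
```

One caveat the batching glosses over: successive outputs of a real simulation are often autocorrelated, so in practice batch means or independent replications are used to keep the variance estimate honest.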

  5. Reliable yields of public water-supply wells in the fractured-rock aquifers of central Maryland, USA

    NASA Astrophysics Data System (ADS)

    Hammond, Patrick A.

    2018-02-01

    Most studies of fractured-rock aquifers are about analytical models used for evaluating aquifer tests or numerical methods for describing groundwater flow, but there have been few investigations on how to estimate the reliable long-term drought yields of individual hard-rock wells. During the drought period of 1998 to 2002, many municipal water suppliers in the Piedmont/Blue Ridge areas of central Maryland (USA) had to institute water restrictions due to declining well yields. Previous estimates of the yields of those wells were commonly based on extrapolating drawdowns, measured during short-term single-well hydraulic pumping tests, to the first primary water-bearing fracture in a well. The extrapolations were often made from pseudo-equilibrium phases, frequently resulting in substantially over-estimated well yields. The methods developed in the present study to predict yields consist of extrapolating drawdown data from infinite acting radial flow periods or by fitting type curves of other conceptual models to the data, using diagnostic plots, inverse analysis and derivative analysis. Available drawdowns were determined by the positions of transition zones in crystalline rocks or thin-bedded consolidated sandstone/limestone layers (reservoir rocks). Aquifer dewatering effects were detected by type-curve matching of step-test data or by breaks in the drawdown curves constructed from hydraulic tests. Operational data were then used to confirm the predicted yields and compared to regional groundwater levels to determine seasonal variations in well yields. Such well yield estimates are needed by hydrogeologists and water engineers for the engineering design of water systems, but should be verified by the collection of long-term monitoring data.
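Extrapolating drawdown from an infinite-acting radial flow period, as this record describes, is commonly done with a Cooper-Jacob style straight-line fit in log time. The sketch below is a generic illustration of that idea, with synthetic noiseless data and an invented available-drawdown limit; it is not the author's workflow.

```python
import numpy as np

def extrapolate_drawdown(t_min, s_m, t_target_min):
    """Fit s = a + b*log10(t) over an infinite-acting radial flow
    period and extrapolate the drawdown to a longer pumping time."""
    b, a = np.polyfit(np.log10(t_min), s_m, 1)
    return a + b * np.log10(t_target_min)

# synthetic late-time test data (illustrative only)
t = np.array([100, 200, 400, 800, 1440.0])       # minutes since pumping began
s = 2.0 + 3.5 * np.log10(t)                      # observed drawdown, m
s_180d = extrapolate_drawdown(t, s, 180 * 1440)  # project to a 180-day drought
available = 30.0                                 # m to top of transition zone
ok = s_180d <= available                         # yield sustainable at this rate?
```

The key point from the study is the choice of `available`: measuring it to the transition zone (rather than to the first water-bearing fracture) and checking for dewatering breaks in the curve is what prevents the over-estimates the abstract warns about.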

  6. Improving yield and reliability of FIB modifications using electrical testing

    NASA Astrophysics Data System (ADS)

    Desplats, Romain; Benbrik, Jamel; Benteo, Bruno; Perdu, Philippe

    1998-08-01

Focused ion beam (FIB) technology has two main areas of application for ICs: modification and preparation for technological analysis. Modification is the more heavily used of the two. It involves physically altering a circuit by cutting lines and creating new ones in order to change its electrical function. IC planar technologies have an increasing number of metal interconnections, making FIB modifications more complex and decreasing their chances of success. The yield of FIB operations on ICs shows a downward trend that forces a greater number of circuits to be modified in order to successfully correct a small number of them. This extends duration, which is not compatible with production-line turnaround times. Two solutions can address this problem: either reducing the duration of each FIB operation or increasing the success rate of FIB modifications. Since reducing the time depends mainly on FIB operator experience, ensuring a higher success rate is the more crucial aspect, as both experienced and novice operators can benefit from the improvement. To ensure successful modifications, it is necessary to control each step of a FIB operation. To do this, we have developed a new method using in situ electrical testing which has a direct impact on the yield of FIB modifications. We present this development through a real case study of a CMOS ASIC for high-speed communications. Monitoring the electrical behavior at each step of a FIB operation makes it possible to reduce the number of circuits to be modified and consequently reduces costs thanks to better yield control. Knowing the internal electrical behavior also gives indications about the reliability impact on FIB-modified circuits. Finally, this approach can be applied to failure analysis and to FIB operations on flip-chip circuits.

  7. The effect of the labile organic fraction in food waste and the substrate/inoculum ratio on anaerobic digestion for a reliable methane yield.

    PubMed

    Kawai, Minako; Nagao, Norio; Tajima, Nobuaki; Niwa, Chiaki; Matsuyama, Tatsushi; Toda, Tatsuki

    2014-04-01

The influence of the labile organic fraction (LOF) on anaerobic digestion of food waste was investigated at substrate/inoculum (S/I) ratios of 0.33, 0.5, 1.0, 2.0 and 4.0 g-VS_substrate/g-VS_inoculum. Two types of substrate were used: standard food waste (Substrate 1) and standard food waste with the LOF-containing supernatant removed (Substrate 2). The highest methane yield, 435 ml-CH4 g-VS(-1) for Substrate 1, was observed at the lowest S/I ratio, while the methane yields at the other S/I ratios were 38-73% lower than the highest yield due to acidification. The methane yields for Substrate 2 were relatively stable under all S/I conditions, although the maximum methane yield was lower than with Substrate 1. These results showed that the LOF in food waste causes acidification but also contributes to high methane yields, suggesting that a low S/I ratio (<0.33) is required to obtain a reliable methane yield from food waste compared with other organic substrates. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. InfoDROUGHT: Technical reliability assessment using crop yield data at the Spanish-national level

    NASA Astrophysics Data System (ADS)

    Contreras, Sergio; Garcia-León, David; Hunink, Johannes E.

    2017-04-01

Drought monitoring (DM) is a key component of risk-centered drought preparedness plans and drought policies. InfoDROUGHT (www.infosequia.es) is a site- and user-tailored, fully integrated DM system which combines functionalities for: a) operational satellite-based weekly 1-km tracking of the severity and spatial extent of drought impacts, and b) interactive, fast query and delivery of drought information through a web-mapping service. InfoDROUGHT has a flexible and modular structure. Calibration (threshold definition) and validation of the system are performed by combining expert knowledge with auxiliary impact assessments and datasets. Different technical solutions (basic or advanced versions) and deployment options (open-standard or restricted-authenticated) can be purchased by end-users and customers according to their needs. In this analysis, the technical reliability of InfoDROUGHT and its performance in detecting drought impacts on agriculture were evaluated for the 2003-2014 period by exploring and quantifying the relationships between the drought severity indices reported by InfoDROUGHT and the annual yield anomalies observed for different rainfed crops (maize, wheat, barley) in Spain. We hypothesize a positive relationship between the crop anomalies and the drought severity level detected by InfoDROUGHT. Annual yield anomalies were computed at the province administrative level as the difference between the annual yield reported by the Spanish Annual Survey of Crop Acreages and Yields (ESYRCE database) and the mean annual yield estimated over the study period. Yield anomalies were finally compared against greenness-based and thermal-based drought indices (VCI and TCI, respectively) to check the coherence of the outputs with the stated hypothesis. InfoDROUGHT has been partly funded by the Spanish Ministry of Economy and Competitiveness through a Torres-Quevedo grant, and by the H2020-EU project "Bridging the Gap for Innovations in

  9. Design of high-reliability low-cost amorphous silicon modules for high energy yield

    NASA Astrophysics Data System (ADS)

    Jansen, Kai W.; Varvar, Anthony; Twesme, Edward; Berens, Troy; Dhere, Neelkanth G.

    2008-08-01

For PV modules to fulfill their intended purpose, they must generate sufficient economic return over their lifetime to justify their initial cost. Not only must modules be manufactured at a low cost/Wp with a high energy yield (kWh/kWp), they must also be designed to withstand the significant environmental stresses experienced throughout their 25+ year lifetime. Based on field experience, the most common factors affecting the lifetime energy yield of glass-based amorphous silicon (a-Si) modules have been identified; these include: 1) light-induced degradation; 2) moisture ingress and thin-film corrosion; 3) transparent conductive oxide (TCO) delamination; and 4) glass breakage. Current approaches to mitigating these degradation mechanisms are discussed, and the accelerated tests designed to simulate some of the field failures are described. In some cases, novel accelerated tests have been created to facilitate the development of improved manufacturing processes, including a unique test to screen for TCO delamination. Modules using the most reliable designs are tested in high-voltage arrays at customer and internal test sites, as well as at independent laboratories. Data from tests at the Florida Solar Energy Center have shown that a-Si tandem modules can demonstrate an energy yield exceeding 1200 kWh/kWp/yr in a subtropical climate. In the same study, the test arrays demonstrated low long-term power loss over two years of data collection, after initial stabilization. The absolute power produced by the test arrays varied seasonally by approximately ±7%, as expected.

  10. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

An evaluation was performed to establish relationships between safety factors and reliability. Results obtained show that the use of the safety factor is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas the safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
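The setting of Part 1 (random actual stress, deterministic yield stress) admits a simple closed form that illustrates the report's point: for normally distributed stress S with mean μ and coefficient of variation c, and a central safety factor n = Y/μ, reliability is R = P(S ≤ Y) = Φ((n − 1)/c). The numbers below are illustrative, not from the report.

```python
from statistics import NormalDist

def reliability_from_safety_factor(n, mu_stress, cv_stress):
    """R = P(S <= Y) for normal stress S (mean mu, coefficient of
    variation cv) against a deterministic yield stress Y = n * mu."""
    sigma = cv_stress * mu_stress
    return NormalDist(mu_stress, sigma).cdf(n * mu_stress)

# illustrative: safety factor 1.5 on a 200 MPa mean stress, 10% scatter
R = reliability_from_safety_factor(n=1.5, mu_stress=200.0, cv_stress=0.1)
```

Inverting the same relation (n = 1 + c·Φ⁻¹(R)) is what lets a required reliability level be translated directly into a safety factor, which is the interrelation the abstract describes.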

  11. Using operational data to estimate the reliable yields of water-supply wells

    NASA Astrophysics Data System (ADS)

    Misstear, Bruce D. R.; Beeson, Sarah

The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells from operational data should be applicable to a wide range of hydrogeological conditions elsewhere.

  12. Modified Core Wash Cytology: A reliable same day biopsy result for breast clinics.

    PubMed

    Bulte, J P; Wauters, C A P; Duijm, L E M; de Wilt, J H W; Strobbe, L J A

    2016-12-01

Fine Needle Aspiration Biopsy (FNAB), Core Needle Biopsy (CNB) and hybrid techniques including Core Wash Cytology (CWC) are available for same-day diagnosis of breast lesions. In CWC, a washing of the biopsy core is processed for a provisional cytological diagnosis, after which the core is processed like a regular CNB. This study focuses on the reliability of CWC in daily practice. All consecutive CWC procedures performed in a referral breast centre between May 2009 and May 2012 were reviewed, correlating CWC results with the CNB result, the definitive diagnosis after surgical resection, and/or follow-up. Symptomatic as well as screen-detected lesions undergoing CNB were included. 1253 CWC procedures were performed. Definitive histology showed 849 (68%) malignant and 404 (32%) benign lesions. 80% of CWC procedures yielded a conclusive diagnosis; this percentage was higher for malignant lesions and lower for benign lesions: 89% and 62%, respectively. Sensitivity and specificity of a conclusive CWC result were 98.3% and 90.4%, respectively. The eventual incidence of malignancy in the cytological 'atypical' group (5%) was similar to that in the cytological 'benign' group (6%). CWC can be used to make a reliable provisional diagnosis of breast lesions within the hour. The high probability of conclusive results in malignant lesions makes CWC well suited for high-risk populations. Copyright © 2016 Elsevier Ltd, BASO ~ the Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
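The headline sensitivity and specificity figures reduce to a standard 2×2 confusion-table computation. The counts below are hypothetical stand-ins chosen only to illustrate the arithmetic; the paper's raw table is not reproduced in this record.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts for illustration only
sens, spec = sens_spec(tp=740, fn=13, tn=226, fp=24)
```

Note that both statistics are conditioned here on the CWC result being conclusive; the 20% inconclusive cases fall outside the table, which is why the abstract reports the conclusive rate separately.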

  13. Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Suhwan; Kim, Min-Cheol; Sim, Eunji

    2017-05-01

All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and more reliable, and it yields a consistent prediction for the Fe-porphyrin complex.

  14. Reliability of abstracting performance measures: results of the cardiac rehabilitation referral and reliability (CR3) project.

    PubMed

    Thomas, Randal J; Chiu, Jensen S; Goff, David C; King, Marjorie; Lahr, Brian; Lichtman, Steven W; Lui, Karen; Pack, Quinn R; Shahriary, Melanie

    2014-01-01

    Assessment of the reliability of performance measure (PM) abstraction is an important step in PM validation. Reliability has not been previously assessed for abstracting PMs for the referral of patients to cardiac rehabilitation (CR) and secondary prevention (SP) programs. To help validate these PMs, we carried out a multicenter assessment of their reliability. Hospitals and clinical practices from around the United States were invited to participate in the Cardiac Rehabilitation Referral Reliability (CR3) Project. Twenty-nine hospitals and 23 outpatient centers expressed interest in participating. Seven hospitals and 6 outpatient centers met participation criteria and submitted completed data. Site coordinators identified 35 patients whose charts were reviewed by 2 site abstractors twice, 1 week apart. Percent agreement and the Cohen κ statistic were used to describe intra- and interabstractor reliability for patient eligibility for CR/SP, patient exceptions for CR/SP referral, and documented referral to CR/SP. Results were obtained from within-site data, as well as from pooled data of all inpatient and all outpatient sites. We found that intra-abstractor reliability reflected excellent repeatability (≥ 90% agreement; κ ≥ 0.75) for ratings of CR/SP eligibility, exceptions, and referral, both from pooled and site-specific analyses of inpatient and outpatient data. Similarly, the interabstractor agreement from pooled analysis ranged from good to excellent for the 3 items, although with slightly lower measures of reliability. Abstraction of PMs for CR/SP referral has high reliability, supporting the use of these PMs in quality improvement initiatives aimed at increasing CR/SP delivery to patients with cardiovascular disease.
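Percent agreement and the Cohen κ statistic, the two measures the CR3 project used, are standard and can be computed as below. This is a generic sketch with invented ratings, not the project's analysis code.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# invented referral ratings from two abstractors over 35 charts
rater1 = ["yes"] * 30 + ["no"] * 5
rater2 = ["yes"] * 28 + ["no"] * 7
kappa = cohen_kappa(rater1, rater2)   # -> 0.8, "excellent" by the >= 0.75 rule
```

κ discounts the agreement expected by chance, which is why the project reports it alongside raw percent agreement: with rare exceptions or referrals, percent agreement alone can look high even for uninformative abstraction.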

  15. Reliability and validity of the McDonald Play Inventory.

    PubMed

    McDonald, Ann E; Vigen, Cheryl

    2012-01-01

This study examined the ability of a two-part self-report instrument, the McDonald Play Inventory, to reliably and validly measure the play activities and play styles of 7- to 11-yr-old children and to discriminate between the play of neurotypical children and children with known learning and developmental disabilities. A total of 124 children ages 7-11, recruited from a convenience sample, and a subsample of 17 parents participated in this study. Reliability estimates yielded moderate correlations for internal consistency, total test intercorrelations, and test-retest reliability. Validity estimates were established for content and construct validity. The results suggest that a self-report instrument yields reliable and valid measures of a child's perceived play performance and discriminates between the play of children with and without disabilities. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  16. WHO Study on the reliability and validity of the alcohol and drug use disorder instruments: overview of methods and results.

    PubMed

    Ustün, B; Compton, W; Mager, D; Babor, T; Baiyewu, O; Chatterji, S; Cottler, L; Göğüş, A; Mavreas, V; Peters, L; Pull, C; Saunders, J; Smeets, R; Stipec, M R; Vrasti, R; Hasin, D; Room, R; Van den Brink, W; Regier, D; Blaine, J; Grant, B F; Sartorius, N

    1997-09-25

The WHO Study on the reliability and validity of the alcohol and drug use disorder instruments is an international study which has taken place in centres in ten countries, aiming to test the reliability and validity of three diagnostic instruments for alcohol and drug use disorders: the Composite International Diagnostic Interview (CIDI), the Schedules for Clinical Assessment in Neuropsychiatry (SCAN) and a special version of the Alcohol Use Disorder and Associated Disabilities Interview Schedule-alcohol/drug-revised (AUDADIS-ADR). The purpose of the reliability and validity (R&V) study is to further develop the alcohol and drug sections of these instruments so that a range of substance-related diagnoses can be made in a systematic, consistent, and reliable way. The study focuses on the new criteria proposed in the tenth revision of the International Classification of Diseases (ICD-10) and the fourth revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) for the dependence, harmful use, and abuse categories for alcohol and psychoactive substance use disorders. A systematic study including a scientifically rigorous measure of reliability (i.e. 1-week test-retest reliability) and validity (i.e. comparison between clinical and non-clinical measures) has been undertaken. Results have yielded useful information on the reliability and validity of these instruments at the diagnosis, criterion, and question levels. Overall, the diagnostic concordance coefficients (κ) were very good for dependence disorders (0.7-0.9), but were somewhat lower for the abuse and harmful use categories. The comparisons among instruments, independent clinical evaluations, and debriefing interviews gave important information about possible sources of unreliability, and provided useful clues on the applicability and consistency of nosological concepts across cultures.

  17. Reliability analysis of a sensitive and independent stabilometry parameter set

    PubMed Central

    Nagymáté, Gergely; Orlovits, Zsanett

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938
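The reliability indices this record reports chain together as SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. The sketch below computes a two-way ICC(2,1) from a subjects × sessions matrix of a CoP parameter; the data are simulated and the ICC variant is an assumption, since the abstract does not state which form was used.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measure, for an n-subjects x k-sessions matrix."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(3)
true = rng.normal(500, 80, size=(30, 1))        # simulated "true" CoP path length, mm
data = true + rng.normal(0, 25, size=(30, 2))   # two test-retest sessions with noise
icc = icc_2_1(data)
sem = data.std(ddof=1) * np.sqrt(1 - icc)       # SEM from the pooled SD
mdc = 1.96 * np.sqrt(2) * sem                   # minimal detectable change (95%)
```

The chain makes the abstract's pattern intuitive: a parameter with high between-subject spread relative to session noise (like CoP path length) gets a high ICC and a small SEM%, while noisy extreme-value and frequency parameters get the opposite.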

  18. Reliability analysis of a sensitive and independent stabilometry parameter set.

    PubMed

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.

  19. Normal yield tables for red alder.

    Treesearch

    Norman P. Worthington; Floyd A. Johnson; George R. Staebler; William J. Lloyd

    1960-01-01

    Increasing interest in the management of red alder (Alnus rubra) has created a need for reliable yield information. Existing yield tables for red alder have been very useful as interim sources of information, but they are generally inadequate for current and prospective management needs. The advisory committee for the Station's Olympia...

  20. Yield: it's now an entitlement

    NASA Astrophysics Data System (ADS)

    George, Bill

    1994-09-01

Only a few years ago, the primary method of cost reduction and productivity improvement in the semiconductor industry was increasing manufacturing yields throughout the process. Many of the remarkable reliability improvements realized over the past decade have come about as a result of actions that were originally taken primarily to improve device yields. Obviously, the practice of productivity improvement through yield enhancement is limited by the attainment of 100% yield, at which point some other mechanism must be employed. Traditionally, new products have been introduced to manufacturing at a point of relative immaturity, and semiconductor producers have relied on the traditional 'learning curve' method of yield improvement to attain profitable levels of manufacturing yield. Recently, results of a survey of several fabs by a group of University of California at Berkeley researchers in the Competitive Semiconductor Manufacturing Program indicate that most factories learn at about the same rate after startup, in terms of both line yield and defectivity. If this is indeed generally true, then the most competitive factory is the one that starts with the highest yield, and it is difficult to displace a leader once the lead has been established. These two observations carry enormous implications for the semiconductor development or manufacturing professional. First, one must achieve very high yields in order to even play the game. Second, the achievement of competitive yields over the life of a factory is determined even before the factory is opened, in the planning and development phase. Third, and perhaps most uncomfortable for those of us who have relied on yield improvement as a cost driver, the winners of the nineties will find new levers to drive costs down, having already gotten the benefit of very high yield. This paper looks at the question of how the winners will achieve the critical measures of success: high initial yield and utilization.

  1. SLAC modulator system improvements and reliability results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, A.R.

    1998-06-01

In 1995, an improvement project was completed on the 244 klystron modulators in the linear accelerator. The modulator system has been described previously. This article offers project details and their resulting effect on modulator and component reliability. Prior to the project, the authors had collected four operating cycles (1991 through 1995) of MTTF data. In this discussion, the '91 data are excluded because the modulators then operated at 60 Hz. The five periods following the '91 run were reviewed because of their common repetition rate of 120 Hz.

  2. Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2009-01-01

    A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…

  3. Covariance Matrix Evaluations for Independent Mass Fission Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.

    2015-01-15

    Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimation of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least-squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix are presented and discussed on physical grounds for the ²³⁵U(n_th, f) and ²³⁹Pu(n_th, f) reactions.
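    The evaluation above relies on CONRAD's Bayesian machinery, but the general idea of propagating mode-parameter uncertainties into a mass-yield covariance matrix can be sketched with a small Monte Carlo. Everything below (the mode parameters, the 5% weight uncertainty, the perturbation scheme) is illustrative and not taken from the paper:

    ```python
    import math
    import random

    def gaussian(x, mu, sigma):
        """Normalized Gaussian density."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def mass_yield(a, modes):
        """Pre-neutron mass yield at mass number a as a sum of Gaussian
        fission modes, each given as a (weight, mean, sigma) tuple."""
        return sum(w * gaussian(a, mu, s) for w, mu, s in modes)

    def yield_covariance(masses, modes, rel_unc=0.05, n_draws=2000, seed=1):
        """Monte Carlo sample covariance of mass yields under randomly
        perturbed mode weights (5% relative uncertainty, illustrative)."""
        rng = random.Random(seed)
        draws = []
        for _ in range(n_draws):
            pert = [(w * (1.0 + rel_unc * rng.gauss(0, 1)), mu, s)
                    for w, mu, s in modes]
            draws.append([mass_yield(a, pert) for a in masses])
        m = len(masses)
        means = [sum(d[j] for d in draws) / n_draws for j in range(m)]
        return [[sum((d[i] - means[i]) * (d[j] - means[j]) for d in draws)
                 / (n_draws - 1) for j in range(m)] for i in range(m)]
    ```

    The resulting matrix is symmetric with positive diagonals by construction; a full evaluation would also perturb the mode means and widths and fold in the prompt neutron multiplicity curve.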

  4. Grapevine canopy reflectance and yield

    NASA Technical Reports Server (NTRS)

    Minden, K. A.; Philipson, W. R.

    1982-01-01

    Field spectroradiometric and airborne multispectral scanner data were applied in a study of Concord grapevines. Spectroradiometric measurements of 18 experimental vines were collected on three dates during one growing season. Spectral reflectance, determined at 30 intervals from 0.4 to 1.1 microns, was correlated with vine yield, pruning weight, clusters/vine, and nitrogen input. One date of airborne multispectral scanner data (11 channels) was collected over commercial vineyards, and the average radiance values for eight vineyard sections were correlated with the corresponding average yields. Although some correlations were significant, they were inadequate for developing a reliable yield prediction model.

  5. Yield gap mapping as a support tool for risk management in agriculture

    NASA Astrophysics Data System (ADS)

    Lahlou, Ouiam; Imani, Yasmina; Slimani, Imane; Van Wart, Justin; Yang, Haishun

    2016-04-01

    The increasing frequency and magnitude of droughts in Morocco, and the mounting losses from extended droughts in the agricultural sector, emphasize the need for reliable and timely tools to manage drought and to mitigate the resulting catastrophic damage. In 2011, Morocco launched a multi-risk cereal insurance, with drought the most threatening and most frequent hazard in the country. To assess the gap and implement suitable compensation, however, it is essential to quantify the potential yield in each area. In collaboration with the University of Nebraska-Lincoln, a study is being carried out in Morocco to determine the yield potentials and yield gaps in the different agro-climatic zones of the country. It is part of the larger Global Yield Gap and Water Productivity Atlas project: http://www.yieldgap.org/. The yield gap (Yg) is the difference between the crop yield potential (Yp), or the water-limited yield potential (Yw), and the actual yields achieved by farmers. The mechanistic crop simulation model World Food Studies (WOFOST) was used for this purpose. Prior to the simulations, reliable information on actual yields, weather, crop management and soils was collected for 7 Moroccan buffer zones, each a circle of 100 km around a weather station, homogeneously spread across the country where cereals are widely grown. The model calibration was carried out using WOFOST default variety data. The map-based results represent a robust tool, not only for organizing drought insurance but also for agricultural planning and risk management. Moreover, accurate and geospatially granular estimates of Yg and Yw will make it possible to focus on the regions with the largest unexploited yield gaps and the greatest potential to close them, and consequently to improve food security in the country.
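    As a worked illustration of the quantity being mapped, the yield gap and its relative form follow directly from Yp (or Yw) and the actual yield; the numbers in the test are invented, not from the study:

    ```python
    def yield_gap(potential_t_ha, actual_t_ha):
        """Yield gap Yg = Yp (or Yw) minus the actual farm yield, returned
        together with the gap as a fraction of the potential."""
        gap = potential_t_ha - actual_t_ha
        return gap, gap / potential_t_ha
    ```

    A relative gap near 0.5, as reported for rainfed wheat in comparable studies, means farmers realize about half of the simulated potential.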

  6. Satellite-based assessment of grassland yields

    NASA Astrophysics Data System (ADS)

    Grant, K.; Siegmund, R.; Wagner, M.; Hartmann, S.

    2015-04-01

    Cutting date and frequency are important parameters determining grassland yields, in addition to the effects of weather, soil conditions, plant composition and fertilisation. Because accurate, area-wide data on grassland yields are currently not available, cutting frequency can be used to estimate yields. In this project, a method to detect cutting dates via surface changes in radar images is developed. Combining this method with a grassland yield model will produce more reliable, region-wide numbers for grassland yields. For the test phase of the monitoring project, a study area southeast of Munich, Germany, was chosen for its high density of managed grassland. To detect grassland cutting, robust amplitude change detection techniques are used, evaluating radar amplitude or backscatter statistics before and after the cutting event. CosmoSkyMed and Sentinel-1A data were analysed. All detected cuts were verified against in-situ measurements recorded in a GIS database. Although the SAR systems had different acquisition geometries, the numbers of detected grassland cuts were quite similar: of 154 tested grassland plots, covering 436 ha in total, 116 and 111 cuts were detected using CosmoSkyMed and Sentinel-1A radar data, respectively. Further improvement of the radar processing, as well as additional analyses with larger sample numbers and wider land-surface coverage, will follow to optimise the method and to validate and generalise the results of this feasibility study. Automating the method will then allow an area-wide, cost-efficient cutting-date detection service that improves grassland yield models.
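    A minimal sketch of the amplitude change detection idea: compare mean backscatter over a plot before and after an acquisition and flag a cut when the change exceeds a threshold. The threshold value is hypothetical, and a real processor would also apply speckle filtering and geometry corrections:

    ```python
    def detect_cut(before_db, after_db, threshold_db=1.5):
        """Flag a cutting event when the mean radar backscatter (dB) over a
        plot changes by more than threshold_db between two acquisitions.
        Returns (flagged, change_in_db)."""
        change = sum(after_db) / len(after_db) - sum(before_db) / len(before_db)
        return abs(change) > threshold_db, change
    ```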

  7. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia

    2015-04-26

    Advanced small modular reactor designs include many advantageous features, such as passively driven safety systems that are arguably more reliable and cost-effective than conventional active systems. Despite their attractiveness, assessing the reliability of passive systems can be difficult with conventional reliability methods: simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  8. Evaluation of the CEAS model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  9. LACIE: Wheat yield models for the USSR

    NASA Technical Reports Server (NTRS)

    Sakamoto, C. M.; Leduc, S. K.

    1977-01-01

    A quantitative model determining the relationship between weather conditions and wheat yield in the U.S.S.R. was studied to provide early, reliable forecasts of the size of the U.S.S.R. wheat harvest. Separate models are developed for spring wheat and for winter wheat. Differences in yield potential and in responses to stress conditions and cultural improvements necessitate models for each class.

  10. Quantifying yield gaps in wheat production in Russia

    NASA Astrophysics Data System (ADS)

    Schierhorn, Florian; Faramarzi, Monireh; Prishchepov, Alexander V.; Koch, Friedrich J.; Müller, Daniel

    2014-08-01

    Crop yields must increase substantially to meet the increasing demands for agricultural products. Crop yield increases are particularly important for Russia because low crop yields prevail across Russia’s widespread and fertile land resources. However, reliable data are lacking regarding the spatial distribution of potential yields in Russia, which can be used to determine yield gaps. We used a crop growth model to determine the yield potentials and yield gaps of winter and spring wheat at the provincial level across European Russia. We modeled the annual yield potentials from 1995 to 2006 with optimal nitrogen supplies for both rainfed and irrigated conditions. Overall, the results suggest yield gaps of 1.51-2.10 t ha-1, or 44-52% of the yield potential under rainfed conditions. Under irrigated conditions, yield gaps of 3.14-3.30 t ha-1, or 62-63% of the yield potential, were observed. However, recurring droughts cause large fluctuations in yield potentials under rainfed conditions, even when the nitrogen supply is optimal, particularly in the highly fertile black soil areas of southern European Russia. The highest yield gaps (up to 4 t ha-1) under irrigated conditions were detected in the steppe areas in southeastern European Russia along the border of Kazakhstan. Improving the nutrient and water supply and using crop breeds that are adapted to the frequent drought conditions are important for reducing yield gaps in European Russia. Our regional assessment helps inform policy and agricultural investors and prioritize research that aims to increase crop production in this important region for global agricultural markets.

  11. Improving precision of forage yield trials: A case study

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  12. The (un)reliability of item-level semantic priming effects.

    PubMed

    Heyman, Tom; Bruninx, Anke; Hutchison, Keith A; Storms, Gert

    2018-04-05

    Many researchers have tried to predict semantic priming effects using a myriad of variables (e.g., prime-target associative strength or co-occurrence frequency). The idea is that relatedness varies across prime-target pairs, which should be reflected in the size of the priming effect (e.g., cat should prime dog more than animal does). However, it is only insightful to predict item-level priming effects if they can be measured reliably. Thus, in the present study we examined the split-half and test-retest reliabilities of item-level priming effects under conditions that should discourage the use of strategies. The resulting priming effects proved extremely unreliable, and reanalyses of three published priming datasets revealed similar cases of low reliability. These results imply that previous attempts to predict semantic priming were unlikely to be successful. However, one study with an unusually large sample size yielded more favorable reliability estimates, suggesting that big data, in terms of items and participants, should be the future for semantic priming research.
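    The split-half reliability the study measures can be sketched as follows: compute each item's priming effect separately from two halves of the trials, correlate the two sets of effects across items, and apply the Spearman-Brown correction for full test length. The function below is a generic sketch, not the authors' pipeline:

    ```python
    def split_half_reliability(half_a, half_b):
        """Spearman-Brown-corrected correlation between item-level effects
        estimated from two halves of the trials (one value per item)."""
        n = len(half_a)
        ma, mb = sum(half_a) / n, sum(half_b) / n
        num = sum((a - ma) * (b - mb) for a, b in zip(half_a, half_b))
        den = (sum((a - ma) ** 2 for a in half_a)
               * sum((b - mb) ** 2 for b in half_b)) ** 0.5
        r = num / den  # Pearson correlation across items
        return 2 * r / (1 + r)
    ```

    Values near zero, as the study reports for typical sample sizes, indicate that the item-level effects are mostly measurement noise.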

  13. What's holding us back? Raising the alfalfa yield bar

    USDA-ARS?s Scientific Manuscript database

    Measuring yield of commodity crops is easy – weight and moisture content are determined on delivery. Consequently, reports of production or yield for grain crops can be made reliably to the agencies that track crop production, such as the USDA-National Agricultural Statistics Service (NASS). The s...

  14. Diverse Data Sets Can Yield Reliable Information through Mechanistic Modeling: Salicylic Acid Clearance.

    PubMed

    Raymond, G M; Bassingthwaighte, J B

    This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium-level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model
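    A toy version of the Michaelis-Menten fitting exercise (model 2), assuming Vmax is known and estimating Km by grid search rather than the formal optimization used in the paper; the concentrations and rates in the test are synthetic:

    ```python
    def mm_rate(c, vmax, km):
        """Michaelis-Menten elimination rate at concentration c."""
        return vmax * c / (km + c)

    def fit_km(concs, rates, vmax, km_grid):
        """Pick the Km in km_grid that minimizes the sum of squared errors
        between observed and predicted rates (crude grid search)."""
        def sse(km):
            return sum((mm_rate(c, vmax, km) - r) ** 2
                       for c, r in zip(concs, rates))
        return min(km_grid, key=sse)
    ```

    Pooling data sets simply means concatenating their (concentration, rate) pairs before the fit, which is what tightens the Km estimate in the paper.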

  15. Splitting parameter yield (SPY): A program for semiautomatic analysis of shear-wave splitting

    NASA Astrophysics Data System (ADS)

    Zaccarelli, Lucia; Bianco, Francesca; Zaccarelli, Riccardo

    2012-03-01

    SPY is a Matlab algorithm that analyzes seismic waveforms in a semiautomatic way, providing estimates of the two observables of the anisotropy: the shear-wave splitting parameters. We chose to exploit those computational processes that require less intervention by the user, gaining objectivity and reliability as a result. The algorithm joins the covariance matrix and the cross-correlation techniques, and all the computation steps are interspersed by several automatic checks intended to verify the reliability of the yields. The resulting semiautomation generates two new advantages in the field of anisotropy studies: handling a huge amount of data at the same time, and comparing different yields. From this perspective, SPY has been developed in the Matlab environment, which is widespread, versatile, and user-friendly. Our intention is to provide the scientific community with a new monitoring tool for tracking the temporal variations of the crustal stress field.

  16. Global Crop Yields, Climatic Trends and Technology Enhancement

    NASA Astrophysics Data System (ADS)

    Najafi, E.; Devineni, N.; Khanbilvardi, R.; Kogan, F.

    2016-12-01

    Over recent decades global agricultural production has soared, and technology enhancement still makes a positive contribution to yield growth. However, continuing population growth, water crises, deforestation and climate change threaten global food security. Attempts to predict future food availability around the world can draw partly on the impact of changes to date. A new multilevel model for yield prediction at the country scale using climate covariates and a technology trend is presented in this paper. The structural relationships between average yield and climate attributes, as well as trends, are estimated simultaneously. All countries are modeled in a single multilevel model with partial pooling and/or clustering to automatically group countries and reduce estimation uncertainties. El Niño Southern Oscillation (ENSO), the Palmer Drought Severity Index (PDSI), geopotential height (GPH), historical CO2 levels and a time trend, as a relatively reliable approximation of technology, are used as predictors to estimate annual agricultural crop yields for each country from 1961 to 2007. Results show that these indicators can explain the variability in historical crop yields for most countries and that the model performs well under out-of-sample verification.

  17. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308
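    One common integration-method SSRT calculation can be sketched as below; this is a simplified single-session variant, not the exact averaging-and-outlier procedure the study recommends, and the values in the test are made up:

    ```python
    def ssrt_integration(go_rts, p_respond, ssd):
        """Integration-method SSRT: take the go-RT at the percentile equal
        to P(respond | stop signal), then subtract the stop-signal delay."""
        rts = sorted(go_rts)
        idx = min(int(p_respond * len(rts)), len(rts) - 1)
        return rts[idx] - ssd
    ```

    The study's recommended approach applies this per session, averages across all available sessions, and excludes outliers by predetermined lenient criteria before computing reliability.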

  18. Statistical Evaluations of Variations in Dairy Cows’ Milk Yields as a Precursor of Earthquakes

    PubMed Central

    Yamauchi, Hiroyuki; Hayakawa, Masashi; Asano, Tomokazu; Ohtani, Nobuyo; Ohta, Mitsuaki

    2017-01-01

    Simple Summary There are many reports of abnormal changes occurring in various natural systems prior to earthquakes. Unusual animal behavior is one of these abnormalities; however, there are few objective indicators, and reliability has to date remained uncertain. In a previous case study, we found that the milk yields of dairy cows decreased prior to an earthquake. In this study, we examined the reliability of decreases in milk yields as a precursor of earthquakes using long-term observation data. Milk yields decreased approximately three weeks before earthquakes. We conclude that dairy cow milk yields are applicable as an objectively observable form of unusual animal behavior prior to earthquakes, and that dairy cows respond to some physical or chemical precursors of earthquakes. Abstract Previous studies have provided quantitative data regarding unusual animal behavior prior to earthquakes; however, few studies include long-term observational data. Our previous study revealed that the milk yields of dairy cows decreased prior to an extremely large earthquake. To clarify whether milk yields decrease prior to earthquakes, we examined the relationship between earthquakes of various magnitudes and daily milk yields. The observation period was one year. Cross-correlation analyses revealed a significant negative correlation between earthquake occurrence and milk yields approximately three weeks beforehand. Approximately a week and a half beforehand, a positive correlation was revealed, and the correlation gradually receded to zero as the day of the earthquake approached. Future studies using data from a longer observation period are needed because this study considered only ten earthquakes and therefore does not have strong statistical power. Additionally, we compared the milk yields with subionospheric very low frequency/low frequency (VLF/LF) propagation data indicating ionospheric perturbations. 
The results showed
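    The lagged cross-correlation at the core of such an analysis can be sketched generically; the series and lag in the test are hypothetical, not the study's data:

    ```python
    def lagged_correlation(x, y, lag):
        """Pearson correlation of x[t] against y[t + lag]. In the study's
        setting, a negative value at a lag of roughly three weeks would
        correspond to milk yields dropping before earthquake occurrence."""
        xs, ys = x[:len(x) - lag], y[lag:]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den = (sum((a - mx) ** 2 for a in xs)
               * sum((b - my) ** 2 for b in ys)) ** 0.5
        return num / den
    ```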

  19. Reliability Prediction Analysis: Airborne System Results and Best Practices

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Lopes, Rui

    2013-09-01

    This article presents the results of several reliability prediction analyses for aerospace components, made with both methodologies, 217F and 217Plus. Supporting and complementary activities are described, as well as the differences in the results and applications of the two methodologies, which are summarized in a set of lessons learned that are very useful for RAMS and safety prediction practitioners. The effort these activities require is also an important point of discussion, as are the end results and their interpretation and impact on the system design. The article concludes by positioning these activities and methodologies within an overall process for space and aeronautics equipment/component certification, and by highlighting their advantages. Some good practices are summarized and some reuse rules laid down.

  20. Automatic yield-line analysis of slabs using discontinuity layout optimization

    PubMed Central

    Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.

    2014-01-01

    The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905

  1. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    ERIC Educational Resources Information Center

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
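    A brute-force sketch of the maximal split-half coefficient for a small respondents-by-items score matrix; this is feasible only for a few items, since the number of splits grows combinatorially, and the data in the test are invented:

    ```python
    from itertools import combinations

    def pearson(a, b):
        """Pearson correlation of two equal-length sequences."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den = (sum((x - ma) ** 2 for x in a)
               * sum((y - mb) ** 2 for y in b)) ** 0.5
        return num / den

    def maximal_split_half(item_scores):
        """Evaluate every equal split of the items, compute the
        Spearman-Brown-corrected half-score correlation across
        respondents, and keep the maximum value."""
        n_items = len(item_scores[0])
        best = -1.0
        for half in combinations(range(n_items), n_items // 2):
            other = [i for i in range(n_items) if i not in half]
            a = [sum(row[i] for i in half) for row in item_scores]
            b = [sum(row[i] for i in other) for row in item_scores]
            r = pearson(a, b)
            best = max(best, 2 * r / (1 + r))
        return best
    ```

    Taking the maximum over all splits is exactly why the coefficient tends to overestimate reliability, the concern examined in the article.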

  2. Relationships between milk culture results and milk yield in Norwegian dairy cattle.

    PubMed

    Reksen, O; Sølverød, L; Østerås, O

    2007-10-01

    Associations between test-day milk yield and positive milk cultures for Staphylococcus aureus, Streptococcus spp., and other mastitis pathogens or a negative milk culture for mastitis pathogens were assessed in quarter milk samples from randomly sampled cows selected without regard to current or previous udder health status. Staphylococcus aureus was dichotomized according to sparse (< or =1,500 cfu/mL of milk) or rich (>1,500 cfu/mL of milk) growth of the bacteria. Quarter milk samples were obtained on 1 to 4 occasions from 2,740 cows in 354 Norwegian dairy herds, resulting in a total of 3,430 samplings. Measures of test-day milk yield were obtained monthly and related to 3,547 microbiological diagnoses at the cow level. Mixed model linear regression models incorporating an autoregressive covariance structure accounting for repeated test-day milk yields within cow and random effects at the herd and sample level were used to quantify the effect of positive milk cultures on test-day milk yields. Identical models were run separately for first-parity, second-parity, and third-parity or older cows. Fixed effects were days in milk, the natural logarithm of days in milk, sparse and rich growth of Staph. aureus (1/0), Streptococcus spp. (1/0), other mastitis pathogens (1/0), calving season, time of test-day milk yields relative to time of microbiological diagnosis (test day relative to time of diagnosis), and the interaction terms between microbiological diagnosis and test day relative to time of diagnosis. The models were run with the logarithmically transformed composite milk somatic cell count excluded and included. Rich growth of Staph. aureus was associated with decreased production levels in first-parity cows. An interaction between rich growth of Staph. aureus and test day relative to time of diagnosis also predicted a decline in milk production in third-parity or older cows. Interaction between sparse growth of Staph. aureus and test day relative to time of

  3. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, it revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  4. Weather-based forecasts of California crop yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobell, D B; Cahill, K N; Field, C B

    2005-09-26

    Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of yields with correctly predicted direction of yield change, or the number of correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing lead times longer than existing procedures in many cases.
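    Although the paper's models use multiple weather predictors, the cross-validation scheme can be illustrated with a single-predictor least-squares model and leave-one-out refitting; all data in the test are synthetic:

    ```python
    def fit_line(x, y):
        """Ordinary least squares for one weather predictor:
        returns (intercept, slope)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                 / sum((a - mx) ** 2 for a in x))
        return my - slope * mx, slope

    def loo_cv_errors(x, y):
        """Leave-one-out cross-validation: hold each year out, refit the
        model on the remaining years, and predict the held-out year."""
        errs = []
        for i in range(len(x)):
            xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
            intercept, slope = fit_line(xs, ys)
            errs.append(y[i] - (intercept + slope * x[i]))
        return errs
    ```

    The out-of-sample errors, rather than the in-sample fit, are what justify the reported forecast skill.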

  5. fMRI reliability: influences of task and experimental design.

    PubMed

    Bennett, Craig M; Miller, Michael B

    2013-12-01

    As scientists, it is imperative that we understand not only the power of our research tools to yield results, but also their ability to obtain similar results over time. This study is an investigation into how common decisions made during the design and analysis of a functional magnetic resonance imaging (fMRI) study can influence the reliability of the statistical results. To that end, we gathered back-to-back test-retest fMRI data during an experiment involving multiple cognitive tasks (episodic recognition and two-back working memory) and multiple fMRI experimental designs (block, event-related genetic sequence, and event-related m-sequence). Using these data, we were able to investigate the relative influences of task, design, statistical contrast (task vs. rest, target vs. nontarget), and statistical thresholding (unthresholded, thresholded) on fMRI reliability, as measured by the intraclass correlation (ICC) coefficient. We also utilized data from a second study to investigate test-retest reliability after an extended, six-month interval. We found that all of the factors above were statistically significant, but that they had varying levels of influence on the observed ICC values. We also found that these factors could interact, increasing or decreasing the relative reliability of certain Task × Design combinations. The results suggest that fMRI reliability is a complex construct whose value may be increased or decreased by specific combinations of factors.
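    The reliability metric in the study, the intraclass correlation, can be illustrated with a one-way random-effects ICC(1,1) for two sessions per subject; the exact ICC variant used for fMRI voxel data may differ:

    ```python
    def icc_oneway(session1, session2):
        """One-way random-effects ICC(1,1) for test-retest data with
        k = 2 sessions per subject: (MSB - MSW) / (MSB + (k-1) * MSW)."""
        k = 2
        n = len(session1)
        pairs = list(zip(session1, session2))
        grand = (sum(session1) + sum(session2)) / (n * k)
        subj_means = [(a + b) / k for a, b in pairs]
        # Between-subjects and within-subject mean squares from one-way ANOVA
        msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
        msw = sum((a - m) ** 2 + (b - m) ** 2
                  for (a, b), m in zip(pairs, subj_means)) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)
    ```

    Perfectly reproducible measurements give an ICC of 1; session-to-session noise pushes it toward (or below) zero, which is how task and design effects on reliability are quantified.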

  6. Lactation persistency as a component trait of the selection index and increase in reliability by using single nucleotide polymorphism in net merit defined as the first five lactation milk yields and herd life.

    PubMed

    Togashi, K; Hagiya, K; Osawa, T; Nakanishi, T; Yamazaki, T; Nagamine, Y; Lin, C Y; Matsumoto, S; Aihara, M; Hayasaka, K

    2012-08-01

    We first sought to clarify the effects of discounted rate, survival rate, and lactation persistency as a component trait of the selection index on net merit, defined as the first five lactation milk yields and herd life (HL) weighted by 1 and 0.389 (currently used in Japan), respectively, in units of genetic standard deviation. Survival rate increased the relative economic importance of later lactation traits and the first five lactation milk yields during the first 120 months from the start of the breeding scheme. In contrast, reliabilities of the estimated breeding value (EBV) in later lactation traits are lower than those of earlier lactation traits. We then sought to clarify the effects of applying single nucleotide polymorphism (SNP) on net merit to improve the reliability of EBV of later lactation traits to maximize their increased economic importance due to increase in survival rate. Net merit, selection accuracy, and HL increased by adding lactation persistency to the selection index whose component traits were only milk yields. Lactation persistency of the second and (especially) third parities contributed to increasing HL while maintaining the first five lactation milk yields compared with the selection index whose only component traits were milk yields. A selection index comprising the first three lactation milk yields and persistency accounted for 99.4% of net merit derived from a selection index whose components were identical to those for net merit. We consider that the selection index comprising the first three lactation milk yields and persistency is a practical method for increasing lifetime milk yield in the absence of data regarding HL. Applying SNP to the second- and third-lactation traits and HL increased net merit and HL by maximizing the increased economic importance of later lactation traits, reducing the effect of first-lactation milk yield on HL (genetic correlation (rG) = -0.006), and by augmenting the effects of the second- and third

  7. Scale for positive aspects of caregiving experience: development, reliability, and factor structure.

    PubMed

    Kate, N; Grover, S; Kulhara, P; Nehra, R

    2012-06-01

    OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.
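    The internal-consistency property reported above is conventionally estimated with Cronbach's alpha. The SPACE abstract does not give its formula, so the sketch below is the textbook computation on hypothetical 5-point ratings.

```python
# Sketch of Cronbach's alpha, the usual internal-consistency estimate.
# Illustrative only: the items and ratings below are hypothetical.
import statistics

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    # Total score per respondent across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(statistics.variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Three items on a 5-point scale, six respondents.
items = [[4, 5, 3, 4, 2, 5],
         [4, 4, 3, 5, 2, 4],
         [5, 5, 2, 4, 3, 5]]
print(round(cronbach_alpha(items), 2))
```

    Values above roughly 0.7 are usually read as acceptable internal consistency; a 50-item scale such as SPACE would pass all item columns into the same function.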

  8. Electronics reliability and measurement technology

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Editor)

    1987-01-01

    A summary is presented of the Electronics Reliability and Measurement Technology Workshop. The meeting examined the U.S. electronics industry with particular focus on reliability and state-of-the-art technology. A general consensus of the approximately 75 attendees was that "the U.S. electronics industries are facing a crisis that may threaten their existence". The workshop had specific objectives to discuss mechanisms to improve areas such as reliability, yield, and performance while reducing failure rates, delivery times, and cost. The findings of the workshop addressed various aspects of the industry from wafers to parts to assemblies. Key problem areas that were singled out for attention are identified, and action items necessary to accomplish their resolution are recommended.

  9. A new electronic meter for measuring herbage yield

    Treesearch

    Donald L. Neal; Lee R. Neal

    1965-01-01

    A new electronic instrument, called the Heterodyne Vegetation Meter, was built and tested to measure herbage yield and utilization. The instrument proved to be reliable and rapid. Further testing will be conducted.

  10. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  11. Reliability reporting across studies using the Buss Durkee Hostility Inventory.

    PubMed

    Vassar, Matt; Hale, William

    2009-01-01

    Empirical research on anger and hostility has pervaded the academic literature for more than 50 years. Accurate measurement of anger/hostility and subsequent interpretation of results requires that the instruments yield strong psychometric properties. For consistent measurement, reliability estimates must be calculated with each administration, because changes in sample characteristics may alter the scale's ability to generate reliable scores. Therefore, the present study was designed to address reliability reporting practices for a widely used anger assessment, the Buss Durkee Hostility Inventory (BDHI). Of the 250 published articles reviewed, 11.2% calculated and presented reliability estimates for the data at hand, 6.8% cited estimates from a previous study, and 77.1% made no mention of score reliability. Mean alpha estimates of scores for BDHI subscales generally fell below acceptable standards. Additionally, no detectable pattern was found between reporting practices and publication year or journal prestige. Areas for future research are also discussed.

  12. Modeling of unit operating considerations in generating-capacity reliability evaluation. Volume 1. Mathematical models, computing methods, and results. Final report. [GENESIS, OPCON and OPPLAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Singh, C.

    1982-07-01

    Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects in system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.
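    The core idea behind a Monte Carlo capacity-reliability model can be sketched in a few lines: sample each unit's availability, sum the surviving capacity, and count load shortfalls. This toy two-state model omits exactly the operating considerations GENESIS was built to capture (duty cycles, start-up failures, postponable outages); the unit fleet and load are hypothetical.

```python
# Toy Monte Carlo sketch of loss-of-load probability (LOLP) with
# independent two-state units. Illustrative fleet; not GENESIS.
import random

def lolp(units, load, trials=100_000, seed=1):
    """units: list of (capacity_MW, forced_outage_rate) pairs."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        # Each unit is available with probability 1 - forced outage rate.
        available = sum(cap for cap, outage_rate in units
                        if rng.random() >= outage_rate)
        if available < load:
            shortfalls += 1
    return shortfalls / trials

units = [(200, 0.05), (200, 0.05), (150, 0.08), (100, 0.10)]
print(lolp(units, load=500))
```

    A model like GENESIS replaces the independent coin-flips with chronological simulation, so that commitment policy and start-up behavior can influence the outcome.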

  13. NASA Earth Science Research Results for Improved Regional Crop Yield Prediction

    NASA Astrophysics Data System (ADS)

    Mali, P.; O'Hara, C. G.; Shrestha, B.; Sinclair, T. R.; G de Goncalves, L. G.; Salado Navarro, L. R.

    2007-12-01

    National agencies such as the USDA Foreign Agricultural Service (FAS) Production Estimation and Crop Assessment Division (PECAD) work specifically to analyze and generate timely crop yield estimates that help define national as well as global food policies. The USDA/FAS/PECAD utilizes a Decision Support System (DSS) called CADRE (Crop Condition and Data Retrieval Evaluation), mainly through an automated database management system that integrates various meteorological datasets, crop and soil models, and remote sensing data, providing a significant contribution to national and international crop production estimates. The "Sinclair" soybean growth model, a semi-mechanistic crop growth model, has been used inside the CADRE DSS as one of the crop models; this project uses it for its potential to be effective in a geo-processing environment with remote-sensing-based inputs. The main objective of this proposed work is to verify, validate, and benchmark current and future NASA earth science research results for the benefit of the operational decision-making process of the PECAD/CADRE DSS. For this purpose, the NASA South American Land Data Assimilation System (SALDAS) meteorological dataset is tested for its applicability as a surrogate for the Sinclair model's meteorological input requirements. Similarly, products from the NASA MODIS sensor are tested for their applicability in improving crop yield prediction through more precise planting date estimation, plant vigor, and growth monitoring. The project also analyzes the simulated Visible/Infrared Imager/Radiometer Suite (VIIRS, a future NASA sensor) vegetation product for its applicability in crop growth prediction, to accelerate the transition of VIIRS research results to operational use by the USDA/FAS/PECAD DSS. The research results will help in providing improved decision making capacity to the USDA/FAS/PECAD DSS through improved vegetation growth monitoring from high

  14. Field design factors affecting the precision of ryegrass forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...

  15. Reliability and diagnostic accuracy of history and physical examination for diagnosing glenoid labral tears.

    PubMed

    Walsworth, Matthew K; Doukas, William C; Murphy, Kevin P; Mielcarek, Billie J; Michener, Lori A

    2008-01-01

    Glenoid labral tears provide a diagnostic challenge. Combinations of items in the patient history and physical examination will provide stronger diagnostic accuracy to suggest the presence or absence of glenoid labral tear than will individual items. Cohort study (diagnosis); Level of evidence, 1. History and examination findings in patients with shoulder pain (N = 55) were compared with arthroscopic findings to determine diagnostic accuracy and intertester reliability. The intertester reliability of the crank, anterior slide, and active compression tests was 0.20 to 0.24. A combined history of popping or catching and positive crank or anterior slide results yielded specificities of 0.91 and 1.00 and positive likelihood ratios of 3.0 and infinity, respectively. A positive anterior slide result combined with either a positive active compression or crank result yielded specificities of 0.91 and positive likelihood ratios of 2.75 and 3.75, respectively. Requiring only a single positive finding in the combination of popping or catching and the anterior slide or crank yielded sensitivities of 0.82 and 0.89 and negative likelihood ratios of 0.31 and 0.33, respectively. The diagnostic accuracy of individual tests in previous studies is quite variable, which may be explained in part by the modest reliability of these tests. The combination of popping or catching with a positive crank or anterior slide result or a positive anterior slide result with a positive active compression or crank test result suggests the presence of a labral tear. The combined absence of popping or catching and a negative anterior slide or crank result suggests the absence of a labral tear.
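    The accuracy statistics quoted above all derive from a 2x2 table of test results against the arthroscopic reference standard. The sketch below computes them from hypothetical counts (not the study's raw data) to show how specificity 1.00 yields an infinite positive likelihood ratio.

```python
# Diagnostic-accuracy statistics from a 2x2 table of hypothetical counts:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives (reference standard: arthroscopy).
def diagnostics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                 # P(test+ | tear present)
    spec = tn / (tn + fp)                 # P(test- | tear absent)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    return sens, spec, lr_pos, lr_neg

sens, spec, lr_pos, lr_neg = diagnostics(tp=18, fp=3, fn=7, tn=27)
print(f"Sens {sens:.2f}  Spec {spec:.2f}  LR+ {lr_pos:.2f}  LR- {lr_neg:.2f}")
```

    High LR+ values (as for the combined-findings rules above) argue for ruling a tear in; LR- values near 0.3 give moderate support for ruling one out.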

  16. Quantifying the potential for reservoirs to secure future surface water yields in the world’s largest river basins

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Parkinson, Simon; Gidden, Matthew; Byers, Edward; Satoh, Yusuke; Riahi, Keywan; Forman, Barton

    2018-04-01

    Surface water reservoirs provide us with reliable water supply, hydropower generation, flood control and recreation services. Yet reservoirs also cause flow fragmentation in rivers and lead to flooding of upstream areas, thereby displacing existing land-use activities and ecosystems. Anticipated population growth and development coupled with climate change in many regions of the globe suggests a critical need to assess the potential for future reservoir capacity to help balance rising water demands with long-term water availability. Here, we assess the potential of large-scale reservoirs to provide reliable surface water yields while also considering environmental flows within 235 of the world’s largest river basins. Maps of existing cropland and habitat conservation zones are integrated with spatially-explicit population and urbanization projections from the Shared Socioeconomic Pathways to identify regions unsuitable for increasing water supply by exploiting new reservoir storage. Results show that even when maximizing the global reservoir storage to its potential limit (∼4.3–4.8 times the current capacity), firm yields would only increase by about 50% over current levels. However, there exist large disparities across different basins. The majority of river basins in North America are found to gain relatively little firm yield by increasing storage capacity, whereas basins in Southeast Asia display greater potential for expansion as well as proportional gains in firm yield under multiple uncertainties. Parts of Europe, the United States and South America show relatively low reliability of maintaining current firm yields under future climate change, whereas most of Asia and higher latitude regions display comparatively high reliability. Findings from this study highlight the importance of incorporating different factors, including human development, land-use activities, and climate change, over a time span of multiple decades and across a range of different
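    The firm-yield quantity used above (and in record 2 of this set) is conventionally the largest constant draft a given storage can supply over an inflow record without emptying. A minimal sketch, using simple mass balance and bisection on the draft; real basin-scale analyses add evaporation, environmental-flow constraints, and spill accounting, and the inflow series here is hypothetical.

```python
# Sketch of a standard firm-yield calculation: mass-balance simulation
# plus bisection on the constant draft. Illustrative inflows and units.
def survives(inflows, storage, draft):
    s = storage  # start full; spills are capped at capacity
    for q in inflows:
        s = min(storage, s + q - draft)
        if s < 0:
            return False  # reservoir emptied: draft not sustainable
    return True

def firm_yield(inflows, storage, tol=1e-6):
    lo, hi = 0.0, max(inflows)  # lo always survives, hi assumed to fail
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if survives(inflows, storage, mid) else (lo, mid)
    return lo

flows = [12, 3, 0, 1, 9, 14, 2, 0, 4, 11]  # hypothetical monthly inflows
print(round(firm_yield(flows, storage=20), 2))
```

    The study's finding that maximal storage expansion raises firm yield only ~50% reflects the diminishing returns of this curve: beyond the critical drawdown period, extra capacity adds little sustainable draft.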

  17. Overview of RICOR's reliability theoretical analysis, accelerated life demonstration test results and verification by field data

    NASA Astrophysics Data System (ADS)

    Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey

    2018-05-01

    The growing demand for EO applications that operate around the clock, 24/7, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized Integrated Logistic Support (ILS). To meet this need, RICOR developed linear and rotary cryocoolers which successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field-data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. As a summary of the work process, reliability verification data will be presented as feedback from fielded systems.

  18. Design features and results from fatigue reliability research machines.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Kececioglu, D.; Mcconnell, J. B.

    1971-01-01

    The design, fabrication, development, operation, calibration and results from reversed bending combined with steady torque fatigue research machines are presented. Fifteen-centimeter long, notched, SAE 4340 steel specimens are subjected to various combinations of these stresses and cycled to failure. Failure occurs when the crack in the notch passes through the specimen automatically shutting down the test machine. These cycles-to-failure data are statistically analyzed to develop a probabilistic S-N diagram. These diagrams have many uses; a rotating component design example given in the literature shows that minimum size and weight for a specified number of cycles and reliability can be calculated using these diagrams.

  19. Predicting red meat yields in carcasses from beef-type and calf-fed Holstein steers using the United States Department of Agriculture calculated yield grade.

    PubMed

    Lawrence, T E; Elam, N A; Miller, M F; Brooks, J C; Hilton, G G; VanOverbeke, D L; McKeith, F K; Killefer, J; Montgomery, T H; Allen, D M; Griffin, D B; Delmore, R J; Nichols, W T; Streeter, M N; Yates, D A; Hutcheson, J P

    2010-06-01

    Analyses were conducted to evaluate the ability of the USDA yield grade equation to detect differences in subprimal yield of beef-type steers and calf-fed Holstein steers that had been fed zilpaterol hydrochloride (ZH; Intervet Inc., Millsboro, DE) as well as those that had not been fed ZH. Beef-type steer (n = 801) and calf-fed Holstein steer (n = 235) carcasses were fabricated into subprimal cuts and trim. Simple correlations between calculated yield grades and total red meat yields ranged from -0.56 to -0.62 for beef-type steers. Reliable correlations from calf-fed Holstein steers were unobtainable; the probability of a type I error met or exceeded 0.39. Linear models were developed for the beef-type steers to predict total red meat yield based on calculated USDA yield grade within each ZH duration. At an average calculated USDA yield grade of 2.9, beef-type steer carcasses that had not been fed ZH had an estimated 69.4% red meat yield, whereas those fed ZH had an estimated 70.7% red meat yield. These results indicate that feeding ZH increased red meat yield by 1.3% at a constant calculated yield grade. However, these data also suggest that the calculated USDA yield grade score is a poor and variable estimator (adjusted R(2) of 0.31 to 0.38) of total red meat yield of beef-type steer carcasses, regardless of ZH feeding. Moreover, no relationship existed (adjusted R(2) of 0.00 to 0.01) for calf-fed Holstein steer carcasses, suggesting the USDA yield grade is not a valid estimate of calf-fed Holstein red meat yield.

  20. Improving the reliability of female fertility breeding values using type and milk yield traits that predict energy status in Australian Holstein cattle.

    PubMed

    González-Recio, O; Haile-Mariam, M; Pryce, J E

    2016-01-01

    The objectives of this study were (1) to propose changing the selection criteria trait for evaluating fertility in Australia from calving interval to conception rate at d 42 after the beginning of the mating season and (2) to use type traits as early fertility predictors, to increase the reliability of estimated breeding values for fertility. The breeding goal in Australia is conception within 6 wk of the start of the mating season. Currently, the Australian model to predict fertility breeding values (expressed as a linear transformation of calving interval) is a multitrait model that includes calving interval (CVI), lactation length (LL), calving to first service (CFS), first nonreturn rate (FNRR), and conception rate. However, CVI has a lower genetic correlation with the breeding goal (conception within 6 wk of the start of the mating season) than conception rate. Milk yield, type, and fertility data from 164,318 cows sired by 4,766 bulls were used. Principal component analysis and genetic correlation estimates between type and fertility traits were used to select type traits that could subsequently be used in a multitrait analysis. Angularity, foot angle, and pin set were chosen as type traits to include in an index with the traits that are included in the multitrait fertility model: CVI, LL, CFS, FNRR, and conception rate at d 42 (CR42). An index with these 8 traits is expected to achieve an average bull first proof reliability of 0.60 on the breeding objective (conception within 6 wk of the start of the mating season) compared with reliabilities of 0.39 and 0.45 for CR42 only or the current 5-trait Australian model. Subsequently, we used the first eigenvector of a principal component analysis with udder texture, bone quality, angularity, and body condition score to calculate an energy status indicator trait. The inclusion of the energy status indicator trait composite in a multitrait index with CVI, LL, CFS, FNRR, and CR42 achieved a 12-point increase in

  1. Incorporating uncertainty into the ranking of SPARROW model nutrient yields from Mississippi/Atchafalaya River basin watersheds

    USGS Publications Warehouse

    Robertson, Dale M.; Schwarz, Gregory E.; Saad, David A.; Alexander, Richard B.

    2009-01-01

    Excessive loads of nutrients transported by tributary rivers have been linked to hypoxia in the Gulf of Mexico. Management efforts to reduce the hypoxic zone in the Gulf of Mexico and improve the water quality of rivers and streams could benefit from targeting nutrient reductions toward watersheds with the highest nutrient yields delivered to sensitive downstream waters. One challenge is that most conventional watershed modeling approaches (e.g., mechanistic models) used in these management decisions do not consider uncertainties in the predictions of nutrient yields and their downstream delivery. The increasing use of parameter estimation procedures to statistically estimate model coefficients, however, allows uncertainties in these predictions to be reliably estimated. Here, we use a robust bootstrapping procedure applied to the results of a previous application of the hybrid statistical/mechanistic watershed model SPARROW (Spatially Referenced Regression On Watershed attributes) to develop a statistically reliable method for identifying “high priority” areas for management, based on a probabilistic ranking of delivered nutrient yields from watersheds throughout a basin. The method is designed to be used by managers to prioritize watersheds where additional stream monitoring and evaluations of nutrient-reduction strategies could be undertaken. Our ranking procedure incorporates information on the confidence intervals of model predictions and the corresponding watershed rankings of the delivered nutrient yields. From this quantified uncertainty, we estimate the probability that individual watersheds are among a collection of watersheds that have the highest delivered nutrient yields. We illustrate the application of the procedure to 818 eight-digit Hydrologic Unit Code watersheds in the Mississippi/Atchafalaya River basin by identifying 150 watersheds having the highest delivered nutrient yields to the Gulf of Mexico. Highest delivered yields were from
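    The probabilistic ranking idea above, estimating the chance that a watershed belongs to the highest-yield group given prediction uncertainty, can be sketched with a toy simulation. This stand-in draws normal errors around point predictions rather than using SPARROW's bootstrap replicates, and all numbers are hypothetical.

```python
# Toy sketch of probabilistic ranking under prediction uncertainty:
# simulate each watershed's yield, then estimate P(in the top-k group).
# Normal errors are a simplifying assumption; SPARROW uses bootstrapping.
import random

def top_k_probability(means, sds, k, trials=20_000, seed=7):
    """Return, per watershed, P(its yield ranks among the k highest)."""
    rng = random.Random(seed)
    counts = [0] * len(means)
    for _ in range(trials):
        draws = [rng.gauss(m, s) for m, s in zip(means, sds)]
        for i in sorted(range(len(draws)), key=draws.__getitem__)[-k:]:
            counts[i] += 1
    return [c / trials for c in counts]

# Hypothetical delivered-yield predictions and their standard errors.
means = [900, 850, 600, 580, 300]
sds = [80, 120, 90, 60, 50]
print([round(p, 2) for p in top_k_probability(means, sds, k=2)])
```

    Watersheds whose top-group probability is high even after accounting for uncertainty are the "high priority" candidates the ranking procedure is designed to surface.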

  2. Comparison of Hyperthermal Ground Laboratory Atomic Oxygen Erosion Yields With Those in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Dill, Grace C.; Loftus, Ryan J.; deGroh, Kim K.; Miller, Sharon K.

    2013-01-01

    The atomic oxygen erosion yields of 26 materials (all polymers except for pyrolytic graphite) were measured in two directed hyperthermal radio frequency (RF) plasma ashers operating at 30 or 35 kHz with air. The hyperthermal asher results were compared with thermal energy asher results and low Earth orbital (LEO) results from the Materials International Space Station Experiment 2 and 7 (MISSE 2 and 7) flight experiments. The hyperthermal testing was conducted to a significant portion of the atomic oxygen fluence that similar polymers were exposed to during the MISSE 2 and 7 missions. Comparison of the hyperthermal asher prediction of LEO erosion yields with thermal energy asher erosion yields indicates that except for the fluorocarbon polymers of PTFE and FEP, the hyperthermal energy ashers are a much more reliable predictor of LEO erosion yield than thermal energy asher testing, by a factor of four.

  3. Highly reliable oxide VCSELs for datacom applications

    NASA Astrophysics Data System (ADS)

    Aeby, Ian; Collins, Doug; Gibson, Brian; Helms, Christopher J.; Hou, Hong Q.; Lou, Wenlin; Bossert, David J.; Wang, Charlie X.

    2003-06-01

    In this paper we describe the processes and procedures that have been developed to ensure high reliability for Emcore's 850 nm oxide-confined GaAs VCSELs. Evidence from ongoing accelerated life testing and other reliability studies confirming that this process yields reliable products will be discussed. We will present the data and analysis techniques used to determine the activation energy and acceleration factors for the dominant wear-out failure mechanisms of our devices, as well as our estimated MTTF of greater than 2 million use hours. We conclude with a summary of internal verification and field-return-rate validation data.

  4. Assessing disease stress and modeling yield losses in alfalfa

    NASA Astrophysics Data System (ADS)

    Guan, Jie

    Alfalfa is the most important forage crop in the U.S. and worldwide. Fungal foliar diseases are believed to cause significant yield losses in alfalfa, yet, little quantitative information exists regarding the amount of crop loss. Different fungicides and application frequencies were used as tools to generate a range of foliar disease intensities in Ames and Nashua, IA. Visual disease assessments (disease incidence, disease severity, and percentage defoliation) were obtained weekly for each alfalfa growth cycle (two to three growing cycles per season). Remote sensing assessments were performed using a hand-held, multispectral radiometer to measure the amount and quality of sunlight reflected from alfalfa canopies. Factors such as incident radiation, sun angle, sensor height, and leaf wetness were all found to significantly affect the percentage reflectance of sunlight reflected from alfalfa canopies. The precision of visual and remote sensing assessment methods was quantified. Precision was defined as the intra-rater repeatability and inter-rater reliability of assessment methods. F-tests, slopes, intercepts, and coefficients of determination (R2) were used to compare assessment methods for precision. Results showed that among the three visual disease assessment methods (disease incidence, disease severity, and percentage defoliation), percentage defoliation had the highest intra-rater repeatability and inter-rater reliability. Remote sensing assessment method had better precision than the percentage defoliation assessment method based upon higher intra-rater repeatability and inter-rater reliability. Significant linear relationships between canopy reflectance (810 nm), percentage defoliation and yield were detected using linear regression and percentage reflectance (810 nm) assessments were found to have a stronger relationship with yield than percentage defoliation assessments. There were also significant linear relationships between percentage defoliation, dry

  5. Postpartum body condition score and results from the first test day milk as predictors of disease, fertility, yield, and culling in commercial dairy herds.

    PubMed

    Heuer, C; Schukken, Y H; Dobbelaar, P

    1999-02-01

    The study used field data from a regular herd health service to investigate the relationships between body condition scores or first test day milk data and disease incidence, milk yield, fertility, and culling. Path model analysis with adjustment for time at risk was applied to delineate the time sequence of events. Milk fever occurred more often in fat cows, and endometritis occurred between calving and 20 d of lactation more often in thin cows. Fat cows were less likely to conceive at first service than were cows in normal condition. Fat body condition postpartum, higher first test day milk yield, and a fat to protein ratio of > 1.5 increased body condition loss. Fat or thin condition or condition loss was not related to other lactation diseases, fertility parameters, milk yield, or culling. First test day milk yield was 1.3 kg higher after milk fever and was 7.1 kg lower after displaced abomasum. Higher first test day milk yield directly increased the risk of ovarian cyst and lameness, increased 100-d milk yield, and reduced the risk of culling and indirectly decreased reproductive performance. Cows with a fat to protein ratio of > 1.5 had higher risks for ketosis, displaced abomasum, ovarian cyst, lameness, and mastitis. Those cows produced more milk but showed poor reproductive performance. Given this type of herd health data, we concluded that the first test day milk yield and the fat to protein ratio were more reliable indicators of disease, fertility, and milk yield than was body condition score or loss of body condition score.

  6. Optical fiber reliability results from the Biarritz field trial

    NASA Astrophysics Data System (ADS)

    Gouronnec, Alain; Goarin, Rolland; Le Moigne, G.; Baptiste, M.

    1994-09-01

    The first experimental optical fiber network (fiber-to-the-home CATV and video-phone) was installed in Biarritz, France, at the beginning of 1980. Some parts of the first optical links have now been removed. When FRANCE TELECOM decided to end the field-trial services, it appeared interesting to evaluate fiber reliability after more than 10 years of aging in a real, adverse field environment. In this paper we give a short description of the installed links and indicate how the individual fibers were carefully removed from the cables. After a first measurement of the mechanical parameters using normalized dynamic and static tests, we compared the results with those of the equivalent tests used to evaluate these fibers before their installation in the field; the tests are the same as those used in the 1980s. In conclusion, the paper gives the aging results measured on the Biarritz optical fibers after more than 10 years of service in a real environment and evaluates them by comparison with the results obtained before installation.

  7. Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1981-01-01

    The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and performance based on root mean square error appears to be acceptable for the intended AgRISTARS large-area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.

  8. Plausible rice yield losses under future climate warming.

    PubMed

    Zhao, Chuang; Piao, Shilong; Wang, Xuhui; Huang, Yao; Ciais, Philippe; Elliott, Joshua; Huang, Mengtian; Janssens, Ivan A; Li, Tao; Lian, Xu; Liu, Yongwen; Müller, Christoph; Peng, Shushi; Wang, Tao; Zeng, Zhenzhong; Peñuelas, Josep

    2016-12-19

    Rice is the staple food for more than 50% of the world's population (refs 1-3). Reliable prediction of changes in rice yield is thus central for maintaining global food security. This is an extraordinary challenge. Here, we compare the sensitivity of rice yield to temperature increase derived from field warming experiments and three modelling approaches: statistical models, local crop models and global gridded crop models. Field warming experiments produce a substantial rice yield loss under warming, with an average temperature sensitivity of -5.2 ± 1.4% K⁻¹. Local crop models give a similar sensitivity (-6.3 ± 0.4% K⁻¹), but statistical and global gridded crop models both suggest less negative impacts of warming on yields (-0.8 ± 0.3% and -2.4 ± 3.7% K⁻¹, respectively). Using data from field warming experiments, we further propose a conditional probability approach to constrain the large range of global gridded crop model results for the future yield changes in response to warming by the end of the century (from -1.3% to -9.3% K⁻¹). The constraint implies a more negative response to warming (-8.3 ± 1.4% K⁻¹) and reduces the spread of the model ensemble by 33%. This yield reduction exceeds that estimated by the International Food Policy Research Institute assessment (-4.2 to -6.4% K⁻¹) (ref. 4). Our study suggests that without CO2 fertilization, effective adaptation and genetic improvement, severe rice yield losses are plausible under intensive climate warming scenarios.
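The abstract does not spell out its conditional probability approach, but one common way an observational constraint of this kind works can be sketched: weight each ensemble member's sensitivity by its Gaussian likelihood under the field-experiment estimate (-5.2 ± 1.4% per K). The ensemble values below are invented for illustration and are not the paper's models.

```python
import math

# Hedged illustration: constrain an ensemble of modelled yield
# sensitivities with a Gaussian likelihood from field warming experiments.
FIELD_MEAN, FIELD_SD = -5.2, 1.4          # % yield per K, from the abstract

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

ensemble = [-1.3, -2.0, -3.5, -5.0, -6.8, -8.1, -9.3]   # invented % per K
weights = [gauss_pdf(s, FIELD_MEAN, FIELD_SD) for s in ensemble]
constrained_mean = sum(w * s for w, s in zip(weights, ensemble)) / sum(weights)
unconstrained_mean = sum(ensemble) / len(ensemble)
```

Members far from the field-based estimate get little weight, pulling the constrained mean toward the observationally supported range and shrinking the ensemble spread.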

  9. Analysis of the Reliability and Validity of a Mentor's Assessment for Principal Internships

    ERIC Educational Resources Information Center

    Koonce, Glenn L.; Kelly, Michael D.

    2014-01-01

    In this study, researchers analyzed the reliability and validity of the mentor's assessment for principal internships at a university in the Southeast region of the United States. The results of the study indicated how trustworthy and dependable the instrument is, and how effective the instrument is in the current principal preparation program.…

  10. Identification of saline soils with multi-year remote sensing of crop yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobell, D; Ortiz-Monasterio, I; Gurrola, F C

    2006-10-17

    Soil salinity is an important constraint to agricultural sustainability, but accurate information on its variation across agricultural regions or its impact on regional crop productivity remains sparse. We evaluated the relationships between remotely sensed wheat yields and salinity in an irrigation district in the Colorado River Delta Region. The goals of this study were to (1) document the relative importance of salinity as a constraint to regional wheat production and (2) develop techniques to accurately identify saline fields. Estimates of wheat yield from six years of Landsat data agreed well with ground-based records on individual fields (R² = 0.65). Salinity measurements on 122 randomly selected fields revealed that average 0-60 cm salinity levels > 4 dS m⁻¹ reduced wheat yields, but the relative scarcity of such fields resulted in less than 1% regional yield loss attributable to salinity. Moreover, low yield was not a reliable indicator of high salinity, because many other factors contributed to yield variability in individual years. However, temporal analysis of yield images showed a significant fraction of fields exhibited consistently low yields over the six year period. A subsequent survey of 60 additional fields, half of which were consistently low yielding, revealed that this targeted subset had significantly higher salinity at 30-60 cm depth than the control group (p = 0.02). These results suggest that high subsurface salinity is associated with consistently low yields in this region, and that multi-year yield maps derived from remote sensing therefore provide an opportunity to map salinity across agricultural regions.
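The temporal screening idea, flagging fields that are low-yielding in every year rather than in any single year, can be sketched as follows. Field identifiers and yield values are invented for illustration.

```python
# Flag fields whose yield falls in the bottom third of the district in
# every year of a multi-year record (a hedged sketch, not the study's code).
def consistently_low(yields_by_field, n_years, frac=1 / 3):
    flagged = set(yields_by_field)                 # start with all field ids
    for year in range(n_years):
        ranked = sorted(yields_by_field, key=lambda f: yields_by_field[f][year])
        bottom = set(ranked[:max(1, round(len(ranked) * frac))])
        flagged &= bottom                          # must be low in EVERY year
    return flagged

fields = {                                         # invented t/ha yields, 3 years
    "A": [6.5, 6.8, 6.2], "B": [5.9, 6.1, 6.0],
    "C": [3.1, 3.4, 2.9],                          # persistently low: salinity suspect
    "D": [6.2, 3.0, 6.4],                          # low only once (e.g. crop failure)
    "E": [5.5, 5.8, 5.6], "F": [4.9, 5.2, 5.0],
}
suspects = consistently_low(fields, n_years=3)
```

A one-off low year ("D") is filtered out, while the persistently low field ("C") survives the intersection, mirroring why multi-year maps beat single-year yield as a salinity indicator.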

  11. Modelling crop yield in Iberia under drought conditions

    NASA Astrophysics Data System (ADS)

    Ribeiro, Andreia; Páscoa, Patrícia; Russo, Ana; Gouveia, Célia

    2017-04-01

    The improved assessment of cereal yield and crop loss under drought conditions is essential to meet increasing economic demands. The growing frequency and severity of extreme drought conditions in the Iberian Peninsula (IP) have likely been responsible for negative impacts on agriculture, namely crop yield losses. Therefore, continuous monitoring of vegetation activity and a reliable estimation of drought impacts are crucial contributions to agricultural drought management and the development of suitable information tools. This work aims to assess the influence of drought conditions on agricultural yields over the IP, considering cereal yields from mainly rainfed agriculture for the provinces with higher productivity. The main target is to develop a strategy to model drought risk on agriculture for wheat yield at the province level. To achieve this goal, a combined assessment was made using a drought indicator (the Standardized Precipitation Evapotranspiration Index, SPEI) to evaluate drought conditions together with a widely used vegetation index (the Normalized Difference Vegetation Index, NDVI) to monitor vegetation activity. A correlation analysis between detrended wheat yield and SPEI was performed to assess the vegetation response to each time scale of drought occurrence and also to identify the moment of the vegetative cycle when crop yields are most vulnerable to drought conditions. The time scales and months of SPEI, together with the months of NDVI, best related to wheat yield were chosen to perform a multivariate regression analysis to simulate crop yield. Model results are satisfactory and highlight the usefulness of such analysis in the framework of developing a drought risk model for crop yields. From an operational point of view, the results aim to contribute to an improved understanding of crop yield management under dry conditions, particularly adding substantial information on the advantages of combining

  12. The revised Generalized Expectancy for Success Scale: a validity and reliability study.

    PubMed

    Hale, W D; Fiedler, L R; Cochran, C D

    1992-07-01

    The Generalized Expectancy for Success Scale (GESS; Fibel & Hale, 1978) was revised and assessed for reliability and validity. The revised version was administered to 199 college students along with other conceptually related measures, including the Rosenberg Self-Esteem Scale, the Life Orientation Test, and Rotter's Internal-External Locus of Control Scale. One subsample of students also completed the Eysenck Personality Inventory, while another subsample performed a criterion-related task that involved risk taking. Item analysis yielded 25 items with correlations of .45 or higher with the total score. Results indicated high internal consistency and test-retest reliability.

  13. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.
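The kappa statistic mentioned above, chance-corrected agreement for categorical variables, can be sketched as follows (ratings invented).

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in ca) / (n * n)       # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
rater2 = ["pos", "pos", "neg", "pos", "pos", "neg", "neg", "neg"]
kappa = cohens_kappa(rater1, rater2)
```

Unlike a raw percent agreement (0.75 here), kappa discounts the agreement expected by chance alone, which is why it is preferred for categorical validity work.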

  14. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation.

    PubMed

    Boerebach, Benjamin C M; Arah, Onyebuchi A; Busch, Olivier R C; Lombarts, Kiki M J M H

    2012-01-01

    In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluation correlated with the residents' evaluation of those surgeons. We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggest the need for using external feedback sources in informed self evaluation of surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  15. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
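The strategy in the last sentence, replacing the expensive model with an explicit fitted surrogate before estimating failure probabilities, can be illustrated with a toy one-variable response. The stand-in model, threshold, and input distribution are all invented, and plain Monte Carlo is run on the surrogate here instead of FPI purely to keep the sketch short.

```python
import random

# Hedged sketch: sample an "expensive" response at k perturbed inputs,
# fit an explicit polynomial, then estimate failure probability cheaply.
def expensive_model(x):                  # stand-in for a structural code
    return 1.5 * x + 0.3 * x * x

# Exact quadratic through three sampled points (Lagrange interpolation).
pts = [(x, expensive_model(x)) for x in (-2.0, 0.0, 2.0)]

def surrogate(x):
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        li = 1.0
        for j, (xj, _) in enumerate(pts):
            if i != j:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

random.seed(0)
THRESHOLD = 8.0                          # "failure" when response exceeds this
samples = [random.gauss(0.0, 1.5) for _ in range(20000)]
p_fail = sum(surrogate(x) > THRESHOLD for x in samples) / len(samples)
```

Each surrogate evaluation is trivially cheap, so the low-probability tail can be explored with many samples; FPI methods would exploit the same explicit form analytically rather than by sampling.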

  16. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  17. Brain dead or not? CT angiogram yielding false-negative result on brain death confirmation.

    PubMed

    Johnston, Robyn; Kaliaperumal, Chandrasekaran; Wyse, Gerald; Kaar, George

    2013-01-08

    We describe a case of severe traumatic brain injury with multiple facial and skull fractures where CT angiogram (CTA) failed to yield a definite result of brain death as an ancillary test. A 28-year-old man was admitted following a road traffic accident with a Glasgow Coma Score (GCS) of 3/15 and fixed pupils. CT brain revealed uncal herniation and diffuse cerebral oedema with associated multiple facial and skull fractures. 72 h later, his clinical condition remained the same with high intracranial pressure refractory to medical management. Clinical confirmation on brain death was not feasible owing to facial injuries. A CTA, performed to determine brain perfusion, yielded a 'false-negative' result. Skull fractures have possibly led to venous prominence in the cortical and deep venous drainage system. This point needs to be borne in mind while considering CTA as an ancillary test to confirm brain death.

  18. Ceramic material life prediction: A program to translate ANSYS results to CARES/LIFE reliability analysis

    NASA Technical Reports Server (NTRS)

    Vonhermann, Pieter; Pintz, Adam

    1994-01-01

    This manual describes the use of the ANSCARES program to prepare a neutral file of FEM stress results taken from ANSYS Release 5.0, in the format needed by CARES/LIFE ceramics reliability program. It is intended for use by experienced users of ANSYS and CARES. Knowledge of compiling and linking FORTRAN programs is also required. Maximum use is made of existing routines (from other CARES interface programs and ANSYS routines) to extract the finite element results and prepare the neutral file for input to the reliability analysis. FORTRAN and machine language routines as described are used to read the ANSYS results file. Sub-element stresses are computed and written to a neutral file using FORTRAN subroutines which are nearly identical to those used in the NASCARES (MSC/NASTRAN to CARES) interface.

  19. Evaluation of the CEAS trend and monthly weather data models for soybean yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    The CEAS models evaluated use historic trend and meteorological and agroclimatic variables to forecast soybean yields in Iowa, Illinois, and Indiana. Indicators of yield reliability and current measures of modeled yield reliability were obtained from bootstrap tests on the end of season models. Indicators of yield reliability show that the state models are consistently better than the crop reporting district (CRD) models. One CRD model is especially poor. At the state level, the bias of each model is less than one half quintal/hectare. The standard deviation is between one and two quintals/hectare. The models are adequate in terms of coverage and are to a certain extent consistent with scientific knowledge. Timely yield estimates can be made during the growing season using truncated models. The models are easy to understand and use and are not costly to operate. Other than the specification of values used to determine evapotranspiration, the models are objective. Because the method of variable selection used in the model development is adequately documented, no evaluation can be made of the objectivity and cost of redevelopment of the model.

  20. Development of a European Ensemble System for Seasonal Prediction: Application to crop yield

    NASA Astrophysics Data System (ADS)

    Terres, J. M.; Cantelaube, P.

    2003-04-01

    Western European agriculture is highly intensive, and the weather is the main source of uncertainty for crop yield assessment and for crop management. In the current system, at the time when a crop yield forecast is issued, the weather conditions leading up to harvest time are unknown and are therefore a major source of uncertainty. The use of seasonal weather forecasts would bring additional information for the remaining crop season and has valuable benefits for improving the management of agricultural markets and environmentally sustainable farm practices. An innovative method for supplying seasonal forecast information to crop simulation models has been developed in the framework of the EU-funded research project DEMETER. It consists of running a crop model on each individual member of the seasonal hindcasts to derive a probability distribution of crop yield. Preliminary results for the cumulative probability function of wheat yield provide information on both the yield anomaly and the reliability of the forecast. Based on the spread of the probability distribution, the end-user can directly quantify the benefits and risks of taking weather-sensitive decisions.

  1. Further Examination of the Reliability of the Modified Rathus Assertiveness Schedule.

    ERIC Educational Resources Information Center

    Del Greco, Linda; And Others

    1986-01-01

    Examined the reliability of the 30-item Modified Rathus Assertiveness Schedule (MRAS) using the test-retest method over a three-week period. The MRAS yielded correlations of .74 using the Pearson product-moment and Spearman-Brown correlation coefficients. Correlations for males yielded .77 and .72. For females, correlations for both tests were .72.…

  2. Measurement of fission yields and isomeric yield ratios at IGISOL

    NASA Astrophysics Data System (ADS)

    Pomp, Stephan; Mattera, Andrea; Rakopoulos, Vasileios; Al-Adili, Ali; Lantz, Mattias; Solders, Andreas; Jansson, Kaj; Prokofiev, Alexander V.; Eronen, Tommi; Gorelov, Dimitri; Jokinen, Ari; Kankainen, Anu; Moore, Iain D.; Penttilä, Heikki; Rinta-Antila, Sami

    2018-03-01

    Data on fission yields and isomeric yield ratios (IYR) are tools to study the fission process, in particular the generation of angular momentum. We use the IGISOL facility with the Penning trap JYFLTRAP in Jyväskylä, Finland, for such measurements on 232Th and natU targets. Previously published fission yield data from IGISOL concern the 232Th(p,f) and 238U(p,f) reactions at 25 and 50 MeV. Recently, a neutron source, using the Be(p,n) reaction, has been developed, installed and tested. We summarize the results for (p,f) focusing on the first measurement of IYR by direct ion counting. We also present first results for IYR and relative yields for Sn and Sb isotopes in the 128-133 mass range from natU(n,f) based on γ-spectrometry. We find a staggering behaviour in the cumulative yields for Sn and a shift in the independent fission yields for Sb as compared to current evaluations. Plans for the future experimental program on fission yields and IYR measurements are discussed.

  3. Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again

    ERIC Educational Resources Information Center

    Morley, Donald D.

    2012-01-01

    The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

  4. Brain dead or not? CT angiogram yielding false-negative result on brain death confirmation

    PubMed Central

    Johnston, Robyn; Kaliaperumal, Chandrasekaran; Wyse, Gerald; Kaar, George

    2013-01-01

    We describe a case of severe traumatic brain injury with multiple facial and skull fractures where CT angiogram (CTA) failed to yield a definite result of brain death as an ancillary test. A 28-year-old man was admitted following a road traffic accident with a Glasgow Coma Score (GCS) of 3/15 and fixed pupils. CT brain revealed uncal herniation and diffuse cerebral oedema with associated multiple facial and skull fractures. 72 h later, his clinical condition remained the same with high intracranial pressure refractory to medical management. Clinical confirmation on brain death was not feasible owing to facial injuries. A CTA, performed to determine brain perfusion, yielded a ‘false-negative’ result. Skull fractures have possibly led to venous prominence in the cortical and deep venous drainage system. This point needs to be borne in mind while considering CTA as an ancillary test to confirm brain death. PMID:23302550

  5. Optimizing rice yields while minimizing yield-scaled global warming potential.

    PubMed

    Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A

    2014-05-01

    To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.
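Yield-scaled GWP is an arithmetic idea: convert area-based CH4 and N2O emissions to CO2-equivalents and divide by grain yield. A sketch with AR4-style 100-year GWP factors and invented field numbers (assumptions, not the study's data):

```python
# Yield-scaled global warming potential (hedged sketch, invented values).
GWP_CH4, GWP_N2O = 25.0, 298.0            # AR4-style 100-year factors

def yield_scaled_gwp(ch4_kg_ha, n2o_kg_ha, grain_mg_ha):
    co2eq_kg_ha = ch4_kg_ha * GWP_CH4 + n2o_kg_ha * GWP_N2O
    return co2eq_kg_ha / (grain_mg_ha * 1000.0)   # kg CO2-eq per kg grain

zero_n  = yield_scaled_gwp(ch4_kg_ha=150.0, n2o_kg_ha=0.3, grain_mg_ha=5.0)
optimal = yield_scaled_gwp(ch4_kg_ha=150.0, n2o_kg_ha=0.8, grain_mg_ha=8.0)
```

In this toy comparison the optimal-N treatment emits more N2O per hectare but so much more grain that emissions per kilogram of rice fall, which is the abstract's central point.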

  6. Hyperspectral sensing to detect the impact of herbicide drift on cotton growth and yield

    NASA Astrophysics Data System (ADS)

    Suarez, L. A.; Apan, A.; Werth, J.

    2016-10-01

    Yield loss in crops is often associated with plant disease or external factors such as environment, water supply and nutrient availability. Improper agricultural practices can also introduce risks into the equation. Herbicide drift can be a combination of improper practices and environmental conditions which can create a potential yield loss. As traditional assessment of plant damage is often imprecise and time consuming, the ability of remote and proximal sensing techniques to monitor various bio-chemical alterations in the plant may offer a faster, non-destructive and reliable approach to predict yield loss caused by herbicide drift. This paper examines the prediction capabilities of partial least squares regression (PLS-R) models for estimating yield. Models were constructed with hyperspectral data of a cotton crop sprayed with three simulated doses of the phenoxy herbicide 2,4-D at three different growth stages. Fibre quality, photosynthesis, conductance, and two main hormones, indole acetic acid (IAA) and abscisic acid (ABA) were also analysed. Except for fibre quality and ABA, Spearman correlations have shown that these variables were highly affected by the chemical. Four PLS-R models for predicting yield were developed according to four timings of data collection: 2, 7, 14 and 28 days after the exposure (DAE). As indicated by the model performance, the analysis revealed that 7 DAE was the best time for data collection purposes (RMSEP = 2.6 and R² = 0.88), followed by 28 DAE (RMSEP = 3.2 and R² = 0.84). In summary, the results of this study show that it is possible to accurately predict yield after a simulated herbicide drift of 2,4-D on a cotton crop, through the analysis of hyperspectral data, thereby providing a reliable, effective and non-destructive alternative based on the internal response of the cotton leaves.

  7. Evaluation of Thompson-type trend and monthly weather data models for corn yields in Iowa, Illinois, and Indiana

    NASA Technical Reports Server (NTRS)

    French, V. (Principal Investigator)

    1982-01-01

    An evaluation was made of Thompson-Type models which use trend terms (as a surrogate for technology), meteorological variables based on monthly average temperature, and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared. Neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.

  8. Recent Results from Lohengrin on Fission Yields and Related Decay Properties

    NASA Astrophysics Data System (ADS)

    Serot, O.; Amouroux, C.; Bidaud, A.; Capellan, N.; Chabod, S.; Ebran, A.; Faust, H.; Kessedjian, G.; Köester, U.; Letourneau, A.; Litaize, O.; Martin, F.; Materna, T.; Mathieu, L.; Panebianco, S.; Regis, J.-M.; Rudigier, M.; Sage, C.; Urban, W.

    2014-05-01

    The Lohengrin mass spectrometer is one of the 40 instruments built around the reactor of the Institute Laue-Langevin (France), which delivers a very intense thermal neutron flux. Usually, Lohengrin was combined with a high-resolution ionization chamber in order to obtain good nuclear charge discrimination within a mass line, yielding an accurate isotopic yield determination. Unfortunately, this experimental procedure can only be applied for fission products with a nuclear charge less than about 42, i.e. in the light fission fragment region. Since 2008, a large collaboration has been working with the aim of studying various fission aspects, mainly in the heavy fragment region. For that, a new experimental setup which allows isotopic identification by γ-ray spectrometry has been developed and validated. This technique was applied to the 239Pu(nth,f) reaction, where about 65 fission product yields were measured with an uncertainty that has been reduced on average by a factor of 2 compared with what was previously available in nuclear data libraries. The same γ-ray spectrometric technique is currently being applied to the study of the 233U(nth,f) reaction. Our aim is to deduce charge and mass distributions of the fission products and to complete the experimental data that exist mainly for light fission fragments. The measurement of 41 mass yields from the 241Am(2nth,f) reaction has also been performed. In addition to these activities on fission yield measurements, various new nanosecond isomers were discovered. Their presence can be revealed by a strongly deformed ionic charge distribution compared to a 'normal' Gaussian shape. Finally, a new neutron long-counter detector designed to have a detection efficiency independent of the detected neutron energy has been built. Combining this neutron device with a Germanium detector and a beta-ray detector array allowed us to measure the beta-delayed neutron emission probability Pn of some important fission products for reactor

  9. Whiskey springs long-term coast redwood density management; final growth, sprout, and yield results

    Treesearch

    Lynn A. Webb; James L. Lindquist; Erik Wahl; Andrew Hubb

    2012-01-01

    Multi-decadal studies of commercial and precommercial thinning in redwood stands are rare and consequently of value. The Whiskey Springs study at Jackson Demonstration State Forest has a data set spanning 35 years. In addition to growth and yield response to commercial thinning, the results provide important information for evaluating regeneration and...

  10. Reliability of In Vitro Methods used to Measure Intrinsic Clearance of Hydrophobic Organic Chemicals by Rainbow Trout: Results of an International Ring Trial.

    PubMed

    Nichols, John; Fay, Kellie; Bernhard, Mary Jo; Bischof, Ina; Davis, John; Halder, Marlies; Hu, Jing; Johanning, Karla; Laue, Heike; Nabb, Diane; Schlechtriem, Christian; Segner, Helmut; Swintek, Joe; Weeks, John; Embry, Michelle

    2018-05-14

    In vitro assays are widely employed to obtain intrinsic clearance estimates used in toxicokinetic modeling efforts. However, the reliability of these methods is seldom reported. Here we describe the results of an international ring trial designed to evaluate two in vitro assays used to measure intrinsic clearance in rainbow trout. An important application of these assays is to predict the effect of biotransformation on chemical bioaccumulation. Six laboratories performed substrate depletion experiments with cyclohexyl salicylate, fenthion, 4-n-nonylphenol, deltamethrin, methoxychlor, and pyrene using cryopreserved hepatocytes and liver S9 fractions from trout. Variability within and among laboratories was characterized as the percent coefficient of variation (CV) in measured in vitro intrinsic clearance rates (CLin vitro,int; ml/h/mg protein or 10⁶ cells) for each chemical and test system. Mean intra-laboratory CVs for each test chemical averaged 18.9% for hepatocytes and 14.1% for S9 fractions, while inter-laboratory CVs (all chemicals and all tests) averaged 30.1% for hepatocytes and 22.4% for S9 fractions. When CLin vitro,int values were extrapolated to in vivo intrinsic clearance estimates (CLin vivo,int; L/d/kg fish), both assays yielded similar levels of activity (< 4-fold difference for all chemicals). Hepatic clearance rates (CLH; L/d/kg fish) calculated using data from both assays exhibited even better agreement. These findings show that both assays are highly reliable and suggest that either may be used to inform chemical bioaccumulation assessments for fish. This study highlights several issues related to the demonstration of assay reliability and may provide a template for evaluating other in vitro biotransformation assays.
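The percent coefficient of variation used to summarize intra- and inter-laboratory variability can be computed as below; the replicate clearance values are invented.

```python
# Percent CV within each lab and across lab means (hedged sketch).
from statistics import mean, stdev

def percent_cv(values):
    return 100.0 * stdev(values) / mean(values)

labs = {                       # invented replicate clearance rates per lab
    "lab1": [2.1, 2.4, 2.2],
    "lab2": [2.8, 2.6, 2.9],
    "lab3": [2.3, 2.0, 2.2],
}
intra_cv = {name: percent_cv(reps) for name, reps in labs.items()}
inter_cv = percent_cv([mean(reps) for reps in labs.values()])
```

Intra-laboratory CVs capture replicate scatter within one lab, while the inter-laboratory CV of the lab means captures systematic differences between labs, the two levels the ring trial reports separately.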

  11. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.

  12. Assessing the Reliability of Material Flow Analysis Results: The Cases of Rhenium, Gallium, and Germanium in the United States Economy.

    PubMed

    Meylan, Grégoire; Reck, Barbara K; Rechberger, Helmut; Graedel, Thomas E; Schwab, Oliver

    2017-10-17

    Decision-makers traditionally expect "hard facts" from scientific inquiry, an expectation that the results of material flow analyses (MFAs) can hardly meet. MFA limitations are attributable to incompleteness of flowcharts, limited data quality, and model assumptions. Moreover, MFA results are, for the most part, based less on empirical observation than on social knowledge construction processes. Developing, applying, and improving the means of evaluating and communicating the reliability of MFA results is therefore imperative. We apply two recently proposed approaches for making quantitative statements about MFA reliability to national minor-metal systems: rhenium, gallium, and germanium in the United States in 2012. We discuss the reliability of the results in policy and management contexts. The first approach consists of assessing data quality based on a systematic characterization of MFA data and the associated meta-information, quantifying the "information content" of MFAs. The second is a quantification of data inconsistencies indicated by the "degree of data reconciliation" between the data and the model. A high information content and a low degree of reconciliation indicate reliable or certain MFA results. This article contributes to reliability and uncertainty discourses in MFA, exemplifying the usefulness of the approaches in policy and management, and to raw material supply discussions by providing country-level information on three important minor metals often considered critical.

  13. Compression of freestanding gold nanostructures: from stochastic yield to predictable flow

    NASA Astrophysics Data System (ADS)

    Mook, W. M.; Niederberger, C.; Bechelany, M.; Philippe, L.; Michler, J.

    2010-02-01

    Characterizing the mechanical response of isolated nanostructures is vitally important to fields such as microelectromechanical systems (MEMS), where the behaviour of nanoscale contacts can in large part determine system reliability and lifetime. To address this challenge directly, single crystal gold nanodots are compressed inside a high resolution scanning electron microscope (SEM) using a nanoindenter equipped with a flat punch tip. These structures load elastically and then yield in a stochastic manner, at loads ranging from 16 to 110 µN, which is up to five times higher than the load necessary for flow after yield. Yielding is immediately followed by displacement bursts equivalent to 1-50% of the initial height, depending on the yield point. During the largest displacement bursts, strain energy within the structure is released while new surface area is created in the form of localized slip bands, which are evident in both the SEM movies and still images. A first order estimate of the apparent energy release rate, in terms of fracture mechanics concepts, for bursts representing 5-50% of the structure's initial height is on the order of 10-100 J m^-2, which is approximately two orders of magnitude lower than bulk values. Once this initial strain burst during yielding has occurred, the structures flow in a ductile way. The implications of this behaviour, which is analogous to a brittle-to-ductile transition, are discussed with respect to mechanical reliability at the micro- and nanoscales.

  14. Simulated Impacts of Climate Change on Water Use and Yield of Irrigated Sugarcane in South Africa

    NASA Technical Reports Server (NTRS)

    Jones, M.R; Singels, A.; Ruane, A. C.

    2015-01-01

    Reliable predictions of climate change impacts on water use, irrigation requirements and yields of irrigated sugarcane in South Africa (a water-scarce country) are necessary to plan adaptation strategies. Although previous work has been done in this regard, methodologies and results vary considerably. The objectives were (1) to estimate likely impacts of climate change on sugarcane yields, water use and irrigation demand at three irrigated sugarcane production sites in South Africa (Malelane, Pongola and La Mercy) for current (1980-2010) and future (2070-2100) climate scenarios, using an approach based on the Agricultural Model Inter-comparison and Improvement Project (AgMIP) protocols; and (2) to assess the suitability of this methodology for investigating climate change impacts on sugarcane production. Future climate datasets were generated using the Delta downscaling method and three General Circulation Models (GCMs), assuming an atmospheric CO2 concentration [CO2] of 734 ppm (A2 emissions scenario). Yield and water use were simulated using the DSSAT-Canegro v4.5 model. Irrigated cane yields are expected to increase at all three sites (between 11 and 14%), primarily due to increased interception of radiation as a result of accelerated canopy development. Evapotranspiration and irrigation requirements increased by 11% due to increased canopy cover and evaporative demand. Sucrose yields are expected to decline because of increased consumption of photo-assimilate for structural growth and maintenance respiration. Crop responses in canopy development and yield formation differed markedly between the crop cycles investigated. Possible agronomic implications of these results include reduced weed control costs due to shortened periods of partial canopy, a need for improved efficiency of irrigation to counter increased demands, and adjustments to ripening and harvest practices to counter decreased cane quality and optimize productivity. Although the Delta climate data
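The Delta downscaling step mentioned above is, in its simplest form, a change-factor adjustment: monthly GCM-derived changes are applied to an observed baseline, additively for temperature and multiplicatively for rainfall. A sketch with illustrative numbers (not the study's data):

```python
# Delta (change-factor) downscaling: perturb an observed monthly baseline
# with GCM-derived changes. All values are invented for illustration.
obs_tmax = [30.1, 29.8, 28.9]   # observed baseline Tmax (deg C), 3 months
obs_rain = [120.0, 95.0, 60.0]  # observed baseline rainfall (mm)

delta_t = [2.8, 3.0, 3.1]       # GCM(future) - GCM(baseline): additive delta
ratio_p = [0.95, 1.05, 0.90]    # GCM(future) / GCM(baseline): multiplicative

fut_tmax = [round(t + d, 1) for t, d in zip(obs_tmax, delta_t)]
fut_rain = [round(p * r, 2) for p, r in zip(obs_rain, ratio_p)]
print(fut_tmax)  # [32.9, 32.8, 32.0]
print(fut_rain)  # [114.0, 99.75, 54.0]
```

The perturbed series would then drive a crop model such as DSSAT-Canegro; the Delta method preserves the baseline's variability and changes only the monthly means.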

  15. Reliability Sampling Plans: A Review and Some New Results

    ERIC Educational Resources Information Center

    Isaic-Maniu, Alexandru; Voda, Viorel Gh.

    2009-01-01

    In this work we present a broad range of aspects related to the problem of sampling inspection in the case of reliability. First we discuss the current status of this domain, mentioning the newest approaches (from a technical viewpoint) such as HALT and HASS, and the statistical perspective. After a brief description of the general procedure in…

  16. Accuracy and reliability of the Pfeffer Questionnaire for the Brazilian elderly population

    PubMed Central

    Dutra, Marina Carneiro; Ribeiro, Raynan dos Santos; Pinheiro, Sarah Brandão; de Melo, Gislane Ferreira; Carvalho, Gustavo de Azevedo

    2015-01-01

    The aging population calls for instruments to assess functional and cognitive impairment in the elderly, aiming to prevent conditions that affect functional abilities. Objective: To verify the accuracy and reliability of the Pfeffer (FAQ) scale for the Brazilian elderly population and to evaluate the reliability and reproducibility of the translated version of the Pfeffer Questionnaire. Methods: The Brazilian version of the FAQ was applied to 110 elderly divided into two groups. Both groups were assessed by two blinded investigators at baseline and again after 15 days. In order to verify the accuracy and reliability of the instrument, sensitivity and specificity measurements for the presence or absence of functional and cognitive decline were calculated for various cut-off points and the ROC curve. Intra- and inter-examiner reliability were assessed using the Intraclass Correlation Coefficient (ICC) and Bland-Altman plots. Results: For the occurrence of cognitive decline, the ROC curve yielded an area under the curve of 0.909 (95%CI of 0.845 to 0.972), sensitivity of 75.68% (95%CI of 93.52% to 100%) and specificity of 97.26%. For the occurrence of functional decline, the ROC curve yielded an area under the curve of 0.851 (95%CI of 64.52% to 87.33%) and specificity of 80.36% (95%CI of 69.95% to 90.76%). The ICC was excellent, with all values exceeding 0.75. On the Bland-Altman plot, intra-examiner agreement was good, with p>0.05 and differences consistently close to 0. A systematic difference was found for inter-examiner agreement. Conclusion: The Pfeffer Questionnaire is applicable in the Brazilian elderly population and showed reliability and reproducibility compared to the original test. PMID:29213959

  17. Movement-related beta oscillations show high intra-individual reliability.

    PubMed

    Espenhahn, Svenja; de Berker, Archy O; van Wijk, Bernadette C M; Rossiter, Holly E; Ward, Nick S

    2017-02-15

    Oscillatory activity in the beta frequency range (15-30Hz) recorded from human sensorimotor cortex is of increasing interest as a putative biomarker of motor system function and dysfunction. Despite its increasing use in basic and clinical research, surprisingly little is known about the test-retest reliability of spectral power and peak frequency measures of beta oscillatory signals from sensorimotor cortex. Establishing that these beta measures are stable over time in healthy populations is a necessary precursor to their use in the clinic. Here, we used scalp electroencephalography (EEG) to evaluate intra-individual reliability of beta-band oscillations over six sessions, focusing on changes in beta activity during movement (Movement-Related Beta Desynchronization, MRBD) and after movement termination (Post-Movement Beta Rebound, PMBR). Subjects performed visually-cued unimanual wrist flexion and extension. We assessed Intraclass Correlation Coefficients (ICC) and between-session correlations for spectral power and peak frequency measures of movement-related and resting beta activity. Movement-related and resting beta power from both sensorimotor cortices was highly reliable across sessions. Resting beta power yielded the highest reliability (average ICC=0.903), followed by MRBD (average ICC=0.886) and PMBR (average ICC=0.663). Notably, peak frequency measures yielded lower ICC values compared to the assessment of spectral power, particularly for movement-related beta activity (ICC=0.386-0.402). Our data highlight that power measures of movement-related beta oscillations are highly reliable, while corresponding peak frequency measures show greater intra-individual variability across sessions. Importantly, our finding that beta power estimates show high intra-individual reliability over time serves to validate the notion that these measures reflect meaningful individual differences that can be utilised in basic research and clinical studies.

  18. Satellite techniques yield insight into devastating rainfall from Hurricane Mitch

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Vicente, G.; Ba, M.; Gruber, A.; Scofield, R.; Li, Q.; Weldon, R.

    Hurricane Mitch may prove to be one of the most devastating tropical cyclones to affect the western hemisphere. Heavy rains over Central America from October 28, 1998, to November 1, 1998, caused widespread flooding and mud slides in Nicaragua and Honduras, resulting in thousands of deaths and missing persons. News reports indicated entire towns being swept away, destruction of national economies and infrastructure, and widespread disease in the aftermath of the storm, which, by some estimates, dropped as much as 1300 mm of rain. However, in view of the widespread damage it is difficult to determine the actual amounts and distribution of rainfall. More accurate means of determining the rainfall associated with Mitch are vital for diagnosing and understanding the evolution of this disaster and for developing new mitigation strategies for future tropical cyclones. Satellite data may prove to be a reliable resource for accurate rainfall analysis and have yielded apparently reliable figures for Hurricane Mitch.

  19. Results of instrument reliability study for high-level nuclear-waste repositories. [Geotechnical parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogue, F.; Binnall, E.P.

    1982-10-01

    Reliable instrumentation will be needed to monitor the performance of future high-level waste repository sites. A study has been made to assess instrument reliability at Department of Energy (DOE) waste repository related experiments. Though the study covers a wide variety of instrumentation, this paper concentrates on experiences with geotechnical instrumentation in hostile repository-type environments. Manufacturers have made some changes to improve the reliability of instruments for repositories. This paper reviews the failure modes, rates, and mechanisms, along with manufacturer modifications and recommendations for additional improvements to enhance instrument performance. 4 tables.

  20. An Acoustic Charge Transport Imager for High Definition Television Applications: Reliability Modeling and Parametric Yield Prediction of GaAs Multiple Quantum Well Avalanche Photodiodes. Degree awarded Oct. 1997

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.; Yun, Ilgu

    1994-01-01

    Reliability modeling and parametric yield prediction of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiodes (APDs), which are of interest as an ultra-low noise image capture mechanism for high definition systems, have been investigated. First, the effect of various doping methods on the reliability of GaAs/AlGaAs multiple quantum well (MQW) avalanche photodiode (APD) structures fabricated by molecular beam epitaxy is investigated. Reliability is examined by accelerated life tests by monitoring dark current and breakdown voltage. Median device lifetime and the activation energy of the degradation mechanism are computed for undoped, doped-barrier, and doped-well APD structures. Lifetimes for each device structure are examined via a statistically designed experiment. Analysis of variance shows that dark-current is affected primarily by device diameter, temperature and stressing time, and breakdown voltage depends on the diameter, stressing time and APD type. It is concluded that the undoped APD has the highest reliability, followed by the doped well and doped barrier devices, respectively. To determine the source of the degradation mechanism for each device structure, failure analysis using the electron-beam induced current method is performed. This analysis reveals some degree of device degradation caused by ionic impurities in the passivation layer, and energy-dispersive spectrometry subsequently verified the presence of ionic sodium as the primary contaminant. However, since all device structures are similarly passivated, sodium contamination alone does not account for the observed variation between the differently doped APDs. This effect is explained by the dopant migration during stressing, which is verified by free carrier concentration measurements using the capacitance-voltage technique.
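The accelerated life tests described above yield an activation energy for the degradation mechanism via the Arrhenius model, in which lifetime scales as exp(Ea / kT). A minimal sketch using two hypothetical stress temperatures and median lifetimes (not the study's measurements):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical accelerated-life-test results: median lifetimes (hours)
# at two stress temperatures (kelvin). Not the paper's data.
t1, T1 = 5000.0, 423.15   # at 150 deg C
t2, T2 = 800.0, 473.15    # at 200 deg C

# Arrhenius model t = A * exp(Ea / (K_B * T)) gives, from two points:
ea = K_B * math.log(t1 / t2) / (1.0 / T1 - 1.0 / T2)
print(round(ea, 2))  # 0.63 (eV)
```

With Ea in hand, the acceleration factor between any two temperatures follows from the same exponential, which is how stress-test lifetimes are projected to use conditions.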

  1. Atomic Oxygen Erosion Yield Prediction for Spacecraft Polymers in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Backus, Jane A.; Manno, Michael V.; Waters, Deborah L.; Cameron, Kevin C.; deGroh, Kim K.

    2009-01-01

    The ability to predict the atomic oxygen erosion yield of polymers based on their chemistry and physical properties has been only partially successful because of a lack of reliable low Earth orbit (LEO) erosion yield data. Unfortunately, many of the early experiments did not utilize dehydrated mass loss measurements for erosion yield determination, and the resulting mass loss due to atomic oxygen exposure may have been compromised because samples were often not in consistent states of dehydration during the pre-flight and post-flight mass measurements. This is a particular problem for short duration mission exposures or low erosion yield materials. However, as a result of the retrieval of the Polymer Erosion and Contamination Experiment (PEACE) flown as part of the Materials International Space Station Experiment 2 (MISSE 2), the erosion yields of 38 polymers and pyrolytic graphite were accurately measured. The experiment was exposed to the LEO environment for 3.95 years from August 16, 2001 to July 30, 2005 and was successfully retrieved during a space walk on July 30, 2005 during Discovery's STS-114 Return to Flight mission. The 40 different materials tested (including Kapton H fluence witness samples) were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The MISSE 2 PEACE Polymers experiment used carefully dehydrated mass measurements, as well as accurate density measurements, to obtain accurate erosion yield data for high fluence (8.43 × 10^21 atoms/cm^2). The resulting data were used to develop an erosion yield predictive tool with a correlation coefficient of 0.895 and uncertainty of ±6.3 × 10^-25 cm^3/atom. The predictive tool utilizes the chemical structures and physical properties of polymers to predict in-space atomic oxygen erosion yields. A predictive tool concept (September 2009 version) is presented which represents an improvement over an earlier (December 2008) version.
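The dehydrated-mass approach above reduces to the standard relation Ey = dm / (A * rho * F). A sketch with illustrative numbers of roughly the right magnitude (a Kapton-like film at the PEACE fluence; the mass loss and area are hypothetical):

```python
# Atomic oxygen erosion yield from dehydrated mass loss:
#   Ey = dm / (A * rho * F)   [cm^3/atom]
# Illustrative values, not the experiment's measurements.
dm = 0.0361    # dehydrated mass loss, g (hypothetical)
A = 1.0        # exposed area, cm^2 (hypothetical)
rho = 1.43     # polymer density, g/cm^3 (Kapton-like)
F = 8.43e21    # atomic oxygen fluence, atoms/cm^2 (MISSE 2 PEACE value)

ey = dm / (A * rho * F)
print(f"{ey:.2e} cm^3/atom")  # 2.99e-24 cm^3/atom
```

This is why consistent dehydration matters: an error of a few milligrams in dm propagates linearly into Ey, which is fatal for low erosion yield materials.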

  2. The Z → cc̄ → γγ*, Z → bb̄ → γγ* triangle diagrams and the Z → γψ, Z → γΥ decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achasov, N. N., E-mail: achasov@math.nsc.ru

    2011-03-15

    The approach to the Z → γψ and Z → γΥ decay study is presented in detail, based on the sum rules for the Z → cc̄ → γγ* and Z → bb̄ → γγ* amplitudes and their derivatives. The branching ratios of the Z → γψ and Z → γΥ decays are calculated for different hypotheses on saturation of the sum rules. The lower bounds of Σ_ψ BR(Z → γψ) = 1.95 × 10^-7 and Σ_Υ BR(Z → γΥ) = 7.23 × 10^-7 are found. Deviations from the lower bounds are discussed, including the possibility of BR(Z → γJ/ψ(1S)) ≈ BR(Z → γΥ(1S)) ≈ 10^-6, which could probably be measured at the LHC. The angular distributions in the Z → γψ and Z → γΥ decays are also calculated.

  3. Yield of illicit indoor cannabis cultivation in the Netherlands.

    PubMed

    Toonen, Marcel; Ribot, Simon; Thissen, Jac

    2006-09-01

    To obtain a reliable estimate of the yield of illicit indoor cannabis cultivation in The Netherlands, cannabis plants confiscated by the police were used to determine the yield of dried female flower buds. The developmental stage of flower buds of the seized plants was described on a scale from 1 to 10, where the value of 10 indicates a fully developed flower bud ready for harvesting. Using eight additional characteristics describing the grow room and cultivation parameters, regression analysis with subset selection was carried out to develop two models for the yield of indoor cannabis cultivation. The median Dutch illicit grow room consists of 259 cannabis plants, has a plant density of 15 plants/m^2, and 510 W of growth lamps per m^2. For the median Dutch grow room, the predicted yield of female flower buds at the harvestable developmental stage (stage 10) was 33.7 g/plant or 505 g/m^2.
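As a quick arithmetic check, the reported per-plant and per-area figures are mutually consistent, and they imply a total for the median grow room (a derived figure, not stated in the abstract):

```python
plants_per_m2 = 15
g_per_plant = 33.7   # predicted stage-10 yield per plant, g
room_plants = 259    # median Dutch grow room

yield_m2 = plants_per_m2 * g_per_plant       # per-area yield, g/m^2
room_kg = room_plants * g_per_plant / 1000   # implied total per grow room, kg
print(round(yield_m2, 1), round(room_kg, 1))  # 505.5 8.7
```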

  4. Building Relationships, Yielding Results: How Superintendents Can Work with School Boards to Create Productive Teams

    ERIC Educational Resources Information Center

    Hackett, Julie L.

    2015-01-01

    In "Building Relationships, Yielding Results," the seasoned superintendent of an urban school district provides a clear road map for effective collaboration with school boards and the type of relationship-building required to achieve long-term, sustainable reforms. Instead of keeping school board members at arm's length or inundating…

  5. Specific yield: compilation of specific yields for various materials

    USGS Publications Warehouse

    Johnson, A.I.

    1967-01-01

    Specific yield is defined as the ratio of (1) the volume of water that a saturated rock or soil will yield by gravity to (2) the total volume of the rock or soil. Specific yield is usually expressed as a percentage. The value is not definitive, because the quantity of water that will drain by gravity depends on variables such as duration of drainage, temperature, mineral composition of the water, and various physical characteristics of the rock or soil under consideration. Values of specific yield nevertheless offer a convenient means by which hydrologists can estimate the water-yielding capacities of earth materials and, as such, are very useful in hydrologic studies. The present report consists mostly of direct or modified quotations from many selected reports that present and evaluate methods for determining specific yield, limitations of those methods, and results of the determinations made on a wide variety of rock and soil materials. Although no particular values are recommended in this report, a table summarizes values of specific yield, and their averages, determined for 10 rock textures. The following is an abstract of the table.
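The definition above reduces to a simple volume ratio. A one-line illustration with a typical value for a coarse-grained material (illustrative, not taken from the report's table):

```python
# Specific yield: gravity-drainable water volume as a fraction of total volume.
v_total = 1.0     # m^3 of saturated material (illustrative)
v_drained = 0.25  # m^3 of water yielded by gravity drainage (illustrative)

specific_yield_pct = 100.0 * v_drained / v_total
print(specific_yield_pct)  # 25.0
```

The remainder of the saturated water content (held against gravity by capillary forces) is the specific retention, which is why specific yield is always smaller than porosity.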

  6. Acid soil infertility effects on peanut yields and yield components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blamey, F.P.C.

    1983-01-01

    The interpretation of soil amelioration experiments with peanuts is made difficult by the unpredictability of the crop and by the many factors altered when ameliorating acid soils. The present study was conducted to investigate the effects of lime and gypsum applications on peanut kernel yield via the three first-order yield components: pods per ha, kernels per pod, and kernel mass. On an acid medium sandy loam soil (Typic Plinthustult), liming resulted in a highly significant kernel yield increase of 117%, whereas gypsum applications were of no significant benefit. As indicated by path coefficient analysis, an increase in the number of pods per ha was markedly more important in increasing yield than an increase in either the number of kernels per pod or kernel mass. Furthermore, exch. Al was found to be particularly detrimental to pod number. It was postulated that poor peanut yields resulting from acid soil infertility were mainly due to the depressive effect of exch. Al on pod number. Exch. Ca appeared to play a secondary role by ameliorating the adverse effects of exch. Al.

  7. Angular distribution measurements of photo-neutron yields produced by 2.0 GeV electrons incident on thick targets.

    PubMed

    Lee, Hee-Seock; Ban, Syuichi; Sanami, Toshiya; Takahashi, Kazutoshi; Sato, Tatsuhiko; Shin, Kazuo; Chung, Chinwha

    2005-01-01

    A study of differential photo-neutron yields by irradiation with 2 GeV electrons has been carried out. In this extension of a previous study, in which measurements were made at an angle of 90 degrees relative to the incident electrons, the differential photo-neutron yield was obtained at two other angles, 48 degrees and 140 degrees, to study its angular characteristics. Photo-neutron spectra were measured using a pulsed-beam time-of-flight method and a BC418 plastic scintillator. The reliable range of neutron energy measurement was 8-250 MeV. The neutron spectra were measured for 10 X0-thick Cu, Sn, W and Pb targets. The angular distribution characteristics, together with the previous results for 90 degrees, are presented in the study. The experimental results are compared with Monte Carlo calculation results. The yields predicted by MCNPX 2.5 tend to underestimate the measured ones. The same trend holds for the comparison results using the EGS4 and PICA3 codes.

  8. Reliability of fully automated versus visually controlled pre- and post-processing of resting-state EEG.

    PubMed

    Hatz, F; Hardmeier, M; Bousleiman, H; Rüegg, S; Schindler, C; Fuhr, P

    2015-02-01

    To compare the reliability of a newly developed Matlab® toolbox for the fully automated pre- and post-processing of resting-state EEG (automated analysis, AA) with the reliability of analysis involving visually controlled pre- and post-processing (VA). 34 healthy volunteers (age: median 38.2 (20-49), 82% female) had three consecutive 256-channel resting-state EEGs at one-year intervals. Results of frequency analysis of AA and VA were compared with Pearson correlation coefficients, and reliability over time was assessed with intraclass correlation coefficients (ICC). The mean correlation coefficient between AA and VA was 0.94±0.07; the mean ICC was 0.83±0.05 for AA and 0.84±0.07 for VA. AA and VA yield very similar results for spectral EEG analysis and are equally reliable. AA is less time-consuming, completely standardized, and independent of raters and their training. Automated processing of EEG facilitates workflow in quantitative EEG analysis.

  9. Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota

    NASA Technical Reports Server (NTRS)

    Barnett, T. L. (Principal Investigator)

    1982-01-01

    The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also used soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.
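The ten-year bootstrap test mentioned above refits a model using only data prior to each target year and then predicts that year. A schematic version with synthetic data and a generic trend-plus-weather regression (not the CEAS or Williams-type variable sets):

```python
import numpy as np

# Synthetic yield series: linear technology trend + one weather regressor
rng = np.random.default_rng(1)
years = np.arange(1960, 1980)
precip = rng.normal(300.0, 40.0, years.size)   # synthetic seasonal precipitation
yield_t = (10 + 0.3 * (years - 1960)
           + 0.02 * (precip - 300)
           + rng.normal(0, 0.5, years.size))

errors = []
for test_year in range(1970, 1980):            # ten-year bootstrap-style test
    train = years < test_year                  # fit on prior years only
    X = np.column_stack([np.ones(train.sum()), years[train], precip[train]])
    beta, *_ = np.linalg.lstsq(X, yield_t[train], rcond=None)
    i = np.where(years == test_year)[0][0]
    pred = beta @ np.array([1.0, float(test_year), precip[i]])
    errors.append(yield_t[i] - pred)

rmse = float(np.sqrt(np.mean(np.square(errors))))
print(round(rmse, 2))
```

Summaries of these out-of-sample errors (bias, RMSE, worst year) are the kind of "indicators of yield reliability" the comparison reports.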

  10. Increasing Crop Yields in Water Stressed Countries by Combining Operations of Freshwater Reservoir and Wastewater Reclamation Plant

    NASA Astrophysics Data System (ADS)

    Bhushan, R.; Ng, T. L.

    2015-12-01

    Freshwater resources around the world are increasing in scarcity due to population growth, industrialization and climate change. This is a serious concern for water stressed countries, including those in Asia and North Africa where future food production is expected to be negatively affected by this. To address this problem, we investigate the potential of combining freshwater reservoir and wastewater reclamation operations. Reservoir water is the cheaper source of irrigation, but is often limited and climate sensitive. Treated wastewater is a more reliable alternative for irrigation, but often requires extensive further treatment which can be expensive. We propose combining the operations of a reservoir and a wastewater reclamation plant (WWRP) to augment the supply from the reservoir with reclaimed water for increasing crop yields in water stressed regions. The joint system of reservoir and WWRP is modeled as a multi-objective optimization problem with the double objective of maximizing the crop yield and minimizing total cost, subject to constraints on reservoir storage, spill and release, and capacity of the WWRP. We use the crop growth model Aquacrop, supported by The Food and Agriculture Organization of the United Nations (FAO), to model crop growth in response to water use. Aquacrop considers the effects of water deficit on crop growth stages, and from there estimates crop yield. We generate results comparing total crop yield under irrigation with water from just the reservoir (which is limited and often interrupted), and yield with water from the joint system (which has the potential of higher supply and greater reliability). We will present results for locations in India and Africa to evaluate the potential of the joint operations for improving food security in those areas for different budgets.
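The joint operation evaluated above can be illustrated with a toy monthly mass balance in which reclaimed water tops up reservoir shortfalls; all quantities are invented for illustration:

```python
# Toy monthly water balance: reservoir alone vs. reservoir plus a reclaimed
# water top-up from a wastewater reclamation plant. Illustrative numbers only.
inflows = [50, 10, 5, 0, 0, 30, 80, 60, 20, 5, 0, 40]   # Mm^3/month
demand = 25.0    # irrigation demand, Mm^3/month
cap = 100.0      # reservoir capacity, Mm^3
wwrp = 10.0      # reclaimed water available each month, Mm^3

def reliability(reuse):
    storage, met = 60.0, 0
    for q in inflows:
        storage = min(cap, storage + q)     # inflow; spill at capacity
        supply = min(storage, demand)       # draw from the reservoir
        storage -= supply
        if reuse:                           # top up shortfalls with reclaimed water
            supply = min(demand, supply + wwrp)
        met += supply >= demand
    return met / len(inflows)

print(round(reliability(False), 2), reliability(True))  # 0.92 1.0
```

In this toy year the reservoir alone meets demand in 11 of 12 months, while the top-up lifts reliability to 1.0; the actual study couples a supply model of this kind to the Aquacrop yield response and a cost objective.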

  11. [Prediction of the side-cut product yield of atmospheric/vacuum distillation unit by NIR crude oil rapid assay].

    PubMed

    Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi

    2014-10-01

    In the present paper, based on the near-infrared (NIR) fast evaluation technique, a method to predict the side-cut product yields of an atmospheric/vacuum distillation unit was developed, combined with H/CAMS software. Firstly, the NIR spectroscopy method for rapidly determining the true boiling point of crude oil was developed. With a commercially available crude oil spectroscopy database and test experiments from Guangxi Petrochemical Company, a calibration model was established, with a topological method used for the calibration. The model can be employed to predict the true boiling point of crude oil. Secondly, the true boiling point from the NIR rapid assay was converted to the side-cut product yield of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted yield and the actual yield of distillation products for naphtha, diesel, wax and residual oil were compared over a 7-month period. The result showed that the NIR rapid crude assay can predict the side-cut product yield accurately. The near-infrared analytic method for predicting yield has the advantages of fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.

  12. Calculations of reliability predictions for the Apollo spacecraft

    NASA Technical Reports Server (NTRS)

    Amstadter, B. L.

    1966-01-01

    A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple-failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.
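As a generic illustration of the bounding idea (not Amstadter's actual equations), a series system of independent subsystems has a classical lower bound given by the product rule and an upper bound set by its weakest subsystem:

```python
# Classical reliability bounds for a series system of independent subsystems.
# Subsystem reliabilities are hypothetical.
r = [0.999, 0.995, 0.990, 0.998]

lower = 1.0
for ri in r:
    lower *= ri          # independence: system reliability = product
upper = min(r)           # system cannot beat its weakest subsystem

print(round(lower, 4), upper)  # 0.9821 0.99
```

Any combined point estimate must fall between these two values, which is what makes the bound-then-combine strategy attractive when subsystem dependence is unknown.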

  13. Reliability and Validity of Wisconsin Upper Respiratory Symptom Survey, Korean Version

    PubMed Central

    Yang, Su-Young; Kang, Weechang; Yeo, Yoon; Park, Yang-Chun

    2011-01-01

    Background: The Wisconsin Upper Respiratory Symptom Survey (WURSS) is a self-administered questionnaire developed in the United States to evaluate the severity of the common cold, and its reliability has been validated. We developed a Korean language version of this questionnaire by using a sequential forward and backward translation approach. The purpose of this study was to validate the Korean version of the Wisconsin Upper Respiratory Symptom Survey (WURSS-K) in Korean patients with common cold. Methods: This multicenter prospective study enrolled 107 participants who were diagnosed with common cold and consented to participate in the study. The WURSS-K includes 1 global illness severity item, 32 symptom-based items, 10 functional quality-of-life (QOL) items, and 1 item assessing global change. The SF-8 was used as an external comparator. Results: The participants were 54 women and 53 men aged 18 to 42 years. The WURSS-K showed good reliability in 10 domains, with Cronbach’s alphas ranging from 0.67 to 0.96 (mean: 0.84). Comparison of the reliability coefficients of the WURSS-K and WURSS yielded a Pearson correlation coefficient of 0.71 (P = 0.02). Validity of the WURSS-K was evaluated by comparing it with the SF-8, which yielded a Pearson correlation coefficient of −0.267 (P < 0.001). The Guyatt’s responsiveness index of the WURSS-K ranged from 0.13 to 0.46, and the correlation coefficient with the WURSS was 0.534 (P < 0.001), indicating that there was close correlation between the WURSS-K and WURSS. Conclusions: The WURSS-K is a reliable, valid, and responsive disease-specific questionnaire for assessing symptoms and QOL in Korean patients with common cold. PMID:21691034
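The internal-consistency figures reported above are Cronbach's alpha values, computed from item variances and the variance of the summed scale: alpha = k/(k-1) * (1 - sum(item variances)/var(total)). A minimal sketch with invented Likert-style responses (not the WURSS-K data):

```python
import numpy as np

# Hypothetical responses: 5 respondents (rows) x 3 items (columns)
scores = np.array([[2, 3, 3],
                   [4, 4, 5],
                   [3, 3, 4],
                   [5, 4, 5],
                   [1, 2, 2]], dtype=float)

k = scores.shape[1]
item_vars = scores.var(axis=0)         # per-item variances
total_var = scores.sum(axis=1).var()   # variance of summed scale scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 3))  # 0.956
```

High alpha here reflects the strongly correlated items in the toy matrix; the same computation per domain yields the 0.67-0.96 range the study reports.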

  14. The optimal duration of frequency-volume charts related to compliance and reliability.

    PubMed

    van Haarst, Ernst P; Bosch, J L H Ruud

    2014-03-01

    To assess frequency-volume charts (FVCs) for the yield of additional recorded days and the ideal duration of recording related to compliance and reliability. Of 500 consecutive urologic outpatients willing to complete a 7-day FVC, 378 FVCs were evaluable. During seven consecutive days every voiding time and volume were recorded. Missed entries were indicated with a coded letter, thereby assessing the true frequency and compliance. Reliability is the agreement of the day-to-day FVC parameters with the 7-day FVC pattern. Single-day reliability was assessed and used in the Spearman-Brown formula. FVCs of 228 males and 150 females were evaluated. Mean age was 55.2 years (standard deviation [SD]: 16.2 years), and mean 24-hr urine production was 1,856 ml (SD: 828 ml). The percentage of patients with complete FVCs decreased from 78% on day 2 to 58% on day 7, and dropped below 70% after 4 days. Single-day reliability was r = 0.63 for nocturnal urine production, r = 0.72 for 24-hr urine production, and r = 0.80 for mean voided volume. At 5 days, reliability of 90% was achieved for all parameters. With each additional day, FVCs showed a decrease in compliance and an increase in reliability. At day 3, reliability of 80% was achieved for all FVC parameters, but compliance dropped to 73%. Beyond 5 days, the yield of additional recorded days was limited. We advocate an FVC duration of 3 days, but the duration may be shortened or extended depending on the goal of the FVC. © 2013 Wiley Periodicals, Inc.
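The Spearman-Brown prophecy formula named in the abstract projects the single-day reliabilities onto multi-day charts. A short sketch reproducing the reported pattern (single-day values taken from the abstract; the formula itself is standard):

```python
def spearman_brown(r1: float, k: int) -> float:
    """Reliability of a k-day average chart given single-day reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

# Single-day reliabilities reported in the abstract.
single_day = {
    "nocturnal urine production": 0.63,
    "24-hr urine production": 0.72,
    "mean voided volume": 0.80,
}

for parameter, r1 in single_day.items():
    projected = [round(spearman_brown(r1, k), 2) for k in (1, 3, 5, 7)]
    print(parameter, projected)
```

Consistent with the abstract, every parameter crosses 0.80 by day 3 and sits near 0.90 by day 5, which is why extra days beyond 5 add little.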

  15. Reliability and validity of the faculty evaluation instrument used at King Saud bin Abdulaziz University for Health Sciences: Results from the Haematology Course.

    PubMed

    Al-Eidan, Fahad; Baig, Lubna Ansari; Magzoub, Mohi-Eldin; Omair, Aamir

    2016-04-01

    To assess the reliability and validity of an evaluation tool, using the Haematology course as an example. The cross-sectional study was conducted at King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia, in 2012, while data analysis was completed in 2013. The 27-item block evaluation instrument was developed by a multidisciplinary faculty after a comprehensive literature review. Validity of the questionnaire was confirmed using principal component analysis with varimax rotation and Kaiser normalisation. Identified factors were combined to get the internal consistency reliability of each factor. Student's t-test was used to compare mean ratings between male and female students for the faculty and block evaluation. Of the 116 subjects in the study, 80 (69%) were males and 36 (31%) were females. Reliability of the questionnaire was Cronbach's alpha 0.91. Factor analysis yielded a logically coherent 7-factor solution that explained 75% of the variation in the data. The factors were group dynamics in problem-based learning (alpha 0.92), block administration (alpha 0.89), quality of objective structured clinical examination (alpha 0.86), block coordination (alpha 0.81), structure of problem-based learning (alpha 0.84), quality of written exam (alpha 0.91), and difficulty of exams (alpha 0.41). Female students' opinion on depth of analysis and critical thinking was significantly higher than that of the males (p=0.03). The faculty evaluation tool used was found to be reliable, but its validity, as assessed through factor analysis, has to be interpreted with caution as the number of respondents was below the minimum required for factor analysis.

  16. WaferOptics® mass volume production and reliability

    NASA Astrophysics Data System (ADS)

    Wolterink, E.; Demeyer, K.

    2010-05-01

    The Anteryon WaferOptics® technology platform combines imaging optics designs, materials, and metrologies with wafer-level Semicon & MEMS production methods. WaferOptics® first required completely new system engineering. This system closes the loop between application requirement specifications, Anteryon product specifications, Monte Carlo analysis, process windows, process controls, and supply reject criteria. For the Anteryon product Integrated Lens Stack (ILS), new design rules, test methods, and control systems were assessed, implemented, validated, and released to customers for mass production. This includes novel reflowable materials, the mastering process, replication, bonding, dicing, assembly, metrology, reliability programs, and quality assurance systems. Numerous Design of Experiments studies were performed to assess correlations between optical performance parameters and machine settings at every process step. Lens metrologies such as FFL, BFL, and MTF were adapted for wafer-level production, and wafer mapping was introduced for yield management. Test methods for screening and validating suitable optical materials were designed. Critical failure modes such as delamination and popcorning were assessed and modeled with FEM. Anteryon successfully integrated the different technologies, progressing from single prototypes to high-yield mass volume production. These parallel efforts resulted in a steep yield increase from 30% to over 90% in an 8-month period.

  17. Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results

    ERIC Educational Resources Information Center

    Lundervold, Duane A.; Dunlap, Angel L.

    2006-01-01

    Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen,1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on long duration observation (5-minute), has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

  18. The reliability of a quality appraisal tool for studies of diagnostic reliability (QAREL).

    PubMed

    Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Rickards, Luke; Turner, Robin; Bogduk, Nikolai

    2013-09-09

    The aim of this project was to investigate the reliability of a new 11-item quality appraisal tool for studies of diagnostic reliability (QAREL). The tool was tested on studies reporting the reliability of any physical examination procedure. The reliability of physical examination is a challenging area to study given the complex testing procedures, the range of tests, and lack of procedural standardisation. Three reviewers used QAREL to independently rate 29 articles, comprising 30 studies, published during 2007. The articles were identified from a search of relevant databases using the following string: "Reproducibility of results (MeSH) OR reliability (t.w.) AND Physical examination (MeSH) OR physical examination (t.w.)." A total of 415 articles were retrieved and screened for inclusion. The reviewers undertook an independent trial assessment prior to data collection, followed by a general discussion about how to score each item. At no time did the reviewers discuss individual papers. Reliability was assessed for each item using multi-rater kappa (κ). Multi-rater reliability estimates ranged from κ = 0.27 to 0.92 across all items. Six items were recorded with good reliability (κ > 0.60), three with moderate reliability (κ = 0.41 - 0.60), and two with fair reliability (κ = 0.21 - 0.40). Raters found it difficult to agree about the spectrum of patients included in a study (Item 1) and the correct application and interpretation of the test (Item 10). In this study, we found that QAREL was a reliable assessment tool for studies of diagnostic reliability when raters agreed upon criteria for the interpretation of each item. Nine out of 11 items had good or moderate reliability, and two items achieved fair reliability. The heterogeneity in the tests included in this study may have resulted in an underestimation of the reliability of these two items. We discuss these and other factors that could affect our results and make recommendations for the use of QAREL.

  19. Reliability of reflectance measures in passive filters

    NASA Astrophysics Data System (ADS)

    Saldiva de André, Carmen Diva; Afonso de André, Paulo; Rocha, Francisco Marcelo; Saldiva, Paulo Hilário Nascimento; Carvalho de Oliveira, Regiani; Singer, Julio M.

    2014-08-01

    Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is km = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding km = 0.56 for these occasions and km = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion.
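The abstract does not give its formulas, but the reported numbers are reproduced if the reliability of the mean of m filters follows the standard Spearman-Brown form of the intra-class correlation; the sketch below makes that assumption explicit:

```python
import math

def mean_reliability(rho: float, m: int) -> float:
    """Spearman-Brown: reliability of the mean of m filters, single-filter ICC rho."""
    return m * rho / (1 + (m - 1) * rho)

def single_filter_icc(km: float, m: int) -> float:
    """Invert the formula: single-filter ICC implied by reliability km at m filters."""
    return km / (m - km * (m - 1))

def filters_needed(rho: float, target: float) -> int:
    """Smallest m whose averaged reliability reaches the target."""
    return math.ceil(target * (1 - rho) / (rho * (1 - target)))

# Worst-case occasions in the refined model: k_m = 0.56 with m = 3 filters.
rho = single_filter_icc(0.56, 3)      # implied single-filter ICC, about 0.30
m = filters_needed(rho, 0.80)         # m = 10, matching the abstract
print(rho, m)
```

Under this assumption, the high-variance occasions (k_m = 0.56 at m = 3) are what drive the requirement up to 10 filters per occasion; the well-behaved occasions alone would need far fewer.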

  20. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Wilson, Larry W.

    1989-01-01

    The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validations Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they have correctly simulated and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.

  1. Safety Climate Survey: reliability of results from a multicenter ICU survey.

    PubMed

    Kho, M E; Carbone, J M; Lucas, J; Cook, D J

    2005-08-01

    It is important to understand the clinical properties of instruments used to measure patient safety before they are used in the setting of an intensive care unit (ICU). The Safety Climate Survey (SCSu), an instrument endorsed by the Institute for Healthcare Improvement, the Safety Culture Scale (SCSc), and the Safety Climate Mean (SCM), a subset of seven items from the SCSu, were administered in four Canadian university-affiliated ICUs. All staff including nurses, allied healthcare professionals, non-clinical staff, intensivists, and managers were invited to participate in the cross sectional survey. The response rate was 74% (313/426). The internal consistency of the SCSu and SCSc was 0.86 and 0.80, respectively, while the SCM performed poorly at 0.51. Because of poor internal consistency, no further analysis of the SCM was performed. Test-retest reliability of the SCSu and SCSc was 0.92. Out of a maximum score of 5, the mean (SD) scores of the SCSu and SCSc were 3.4 (0.6) and 3.4 (0.7), respectively. No differences were noted between the three medical-surgical and one cardiovascular ICU. Managers perceived a significantly more positive safety climate than other staff, as measured by the SCSu and SCSc. These results need to be interpreted cautiously because of the small number of management participants. Of the three instruments, the SCSu and SCSc appear to be measuring one construct and are sufficiently reliable. Future research should examine the properties of patient safety instruments in other ICUs, including responsiveness to change, to ensure that they are valid outcome measures for patient safety initiatives.

  2. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
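The Monte Carlo branch of the analysis can be illustrated with a toy limit-state check. Every distribution and value below is invented for illustration and is not taken from the report:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical random variables (illustrative values only): lognormal axial
# buckling strength and normally distributed applied load, in consistent units.
strength = rng.lognormal(mean=np.log(100.0), sigma=0.10, size=n)
load = rng.normal(80.0, 8.0, size=n)

# Limit state g = strength - load; failure when g <= 0.
pf = np.mean(strength - load <= 0.0)      # Monte Carlo failure probability
beta = -NormalDist().inv_cdf(pf)          # corresponding reliability index
print(f"pf = {pf:.4f}, beta = {beta:.2f}")
```

Sampling sensitivity is what the report does more cheaply with a First Order Reliability Method and a response-surface surrogate for the buckling strength; the brute-force estimate above is the reference the surrogate is judged against.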

  3. Donor management parameters and organ yield: single center results.

    PubMed

    Marshall, George Ryne; Mangus, Richard S; Powelson, John A; Fridell, Jonathan A; Kubal, Chandrashekhar A; Tector, A Joseph

    2014-09-01

    Management of organ donors in the intensive care unit is an emerging subject in critical care and transplantation. This study evaluates organ yield outcomes for a large number of patients managed by the Indiana Organ Procurement Organization. This is a retrospective review of intensive care unit records from 2008-2012. Donor demographic information and seven donor management parameters (DMP) were recorded at admission, consent, 12 h after consent, and before procurement. Three study groups were created: donors meeting 0-3, 4, or 5-7 DMP. Active donor Organ Procurement Organization management began at consent, so data analysis focuses on the 12-h postconsent time point. Outcomes included organs transplanted per donor (OTPD) and transplantation of individual solid organs. Complete records for 499 patients were reviewed. Organ yield was 1415 organs of 3992 possible (35%). At 12 h, donors meeting more DMP had more OTPD: 2.2 (0-3) versus 3.0 (4) versus 3.5 (5-7) (P < 0.01). Aggregate DMP met was significantly associated with transplantation of every organ except intestine. Oxygen tension, vasopressor use, and central venous pressure were the most frequent independent predictors of organ usage. There were significantly more organs transplanted for donors meeting all three of these parameters (4.5 versus 2.7, P < 0.01). Initial DMP met does not appear to be a significant prognostic factor for OTPD. Aggregate DMP is associated with transplantation rates for most organs, with analysis of individual parameters suggesting that appropriate management of oxygenation, volume status, and vasopressor use could lead to more organs procured per donor. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. A tonic heat test stimulus yields a larger and more reliable conditioned pain modulation effect compared to a phasic heat test stimulus

    PubMed Central

    Lie, Marie Udnesseter; Matre, Dagfinn; Hansson, Per; Stubhaug, Audun; Zwart, John-Anker; Nilsen, Kristian Bernhard

    2017-01-01

    Abstract Introduction: The interest in conditioned pain modulation (CPM) as a clinical tool for measuring endogenously induced analgesia is increasing. There is, however, large variation in the CPM methodology, hindering comparison of results across studies. Research comparing different CPM protocols is needed in order to obtain a standardized test paradigm. Objectives: The aim of the study was to assess whether a protocol with phasic heat stimuli as test-stimulus is preferable to a protocol with tonic heat stimulus as test-stimulus. Methods: In this experimental crossover study, we compared 2 CPM protocols with different test-stimulus; one with tonic test-stimulus (constant heat stimulus of 120-second duration) and one with phasic test-stimuli (3 heat stimulations of 5 seconds duration separated by 10 seconds). Conditioning stimulus was a 7°C water bath in parallel with the test-stimulus. Twenty-four healthy volunteers were assessed on 2 occasions at least 1 week apart. Differences in the magnitude and test–retest reliability of the CPM effect in the 2 protocols were investigated with repeated-measures analysis of variance and by relative and absolute reliability indices. Results: The protocol with tonic test-stimulus induced a significantly larger CPM effect compared to the protocol with phasic test-stimuli (P < 0.001). Fair and good relative reliability was found with the phasic and tonic test-stimuli, respectively. Absolute reliability indices showed large intraindividual variability from session to session in both protocols. Conclusion: The present study shows that a CPM protocol with a tonic test-stimulus is preferable to a protocol with phasic test-stimuli. However, we emphasize that one should be cautious about using the CPM effect as a biomarker or in clinical decision making at the individual level due to large intraindividual variability. PMID:29392240

  5. High-yield exfoliation of tungsten disulphide nanosheets by rational mixing of low-boiling-point solvents

    NASA Astrophysics Data System (ADS)

    Sajedi-Moghaddam, Ali; Saievar-Iranizad, Esmaiel

    2018-01-01

    Developing high-throughput, reliable, and facile approaches for producing atomically thin sheets of transition metal dichalcogenides is of great importance to pave the way for their use in real applications. Here, we report a highly promising route for exfoliating two-dimensional tungsten disulphide sheets by using a binary combination of low-boiling-point solvents. Experimental results show significant dependence of exfoliation yield on the type of solvents as well as the relative volume fraction of each solvent. The highest yield was found for an appropriate combination of isopropanol/water (20 vol% isopropanol and 80 vol% water), which is approximately 7 times higher than that in pure isopropanol and 4 times higher than that in pure water. The dramatic increase in exfoliation yield can be attributed to a perfect match between the surface tension of tungsten disulphide and the binary solvent system. Furthermore, solvent molecular size also has a profound impact on the exfoliation efficiency, due to steric repulsion.

  6. Why is it so difficult to determine the yield of indoor cannabis plantations? A case study from the Netherlands.

    PubMed

    Vanhove, Wouter; Maalsté, Nicole; Van Damme, Patrick

    2017-07-01

    Together, the Netherlands and Belgium are the largest indoor cannabis producing countries in Europe. In both countries, the legal prosecution procedure for convicted illicit cannabis growers usually includes recovery of the profits gained. However, it is not easy to make a reliable estimate of those profits, due to the wide range of factors that determine indoor cannabis yields and eventual selling prices. In the Netherlands, since 2005, a reference model has been used that assumes a constant yield (g) per plant for a given indoor cannabis plant density. Later, in 2011, a new model was developed in Belgium for yield estimation of Belgian indoor cannabis plantations that assumes a constant yield per m² of growth surface, provided that a number of growth conditions are met. Indoor cannabis plantations in the Netherlands and Belgium share similar technical characteristics, so the two models should produce similar estimates for plantations in either country. By means of a real-case study from the Netherlands, we show that the reliability of both models is hampered by a number of flaws and unmet preconditions. The Dutch model is based on a regression equation that makes use of ill-defined plant development stages, assumes linear plant growth, does not discriminate between different plantation size categories, and does not include other important yield-determining factors (such as fertilization). The Belgian model addresses some of the latter shortcomings, but its applicability is constrained by a number of preconditions, including plantation size between 50 and 1000 plants; cultivation in individual pots with peat soil; 600 W (electrical power) assimilation lamps; constant temperature between 20°C and 30°C; adequate fertilizer application; and plants unaffected by pests and diseases. Judiciary in both the Netherlands and Belgium require robust indoor cannabis yield models for adequate legal prosecution of

  7. Reliability of Measurement of Glenohumeral Internal Rotation, External Rotation, and Total Arc of Motion in 3 Test Positions

    PubMed Central

    Kevern, Mark A.; Beecher, Michael; Rao, Smita

    2014-01-01

    Context: Athletes who participate in throwing and racket sports consistently demonstrate adaptive changes in glenohumeral-joint internal and external rotation in the dominant arm. Measurements of these motions have demonstrated excellent intrarater and poor interrater reliability. Objective: To determine intrarater reliability, interrater reliability, and standard error of measurement for shoulder internal rotation, external rotation, and total arc of motion using an inclinometer in 3 testing procedures in National Collegiate Athletic Association Division I baseball and softball athletes. Design: Cross-sectional study. Setting: Athletic department. Patients or Other Participants: Thirty-eight players participated in the study. Shoulder internal rotation, external rotation, and total arc of motion were measured by 2 investigators in 3 test positions. The standard supine position was compared with a side-lying test position, as well as a supine test position without examiner overpressure. Results: Excellent intrarater reliability was noted for all 3 test positions and ranges of motion, with intraclass correlation coefficient values ranging from 0.93 to 0.99. Results for interrater reliability were less favorable. Reliability for internal rotation was highest in the side-lying position (0.68) and reliability for external rotation and total arc was highest in the supine-without-overpressure position (0.774 and 0.713, respectively). The supine-with-overpressure position yielded the lowest interrater reliability results in all positions. The side-lying position had the most consistent results, with very little variation among intraclass correlation coefficient values for the various test positions. Conclusions: The results of our study clearly indicate that the side-lying test procedure is of equal or greater value than the traditional supine-with-overpressure method. PMID:25188316

  8. Toward an Economic Definition of Sustainable Yield for Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Jenson, J. W.; Habana, N. C.; Lander, M.

    2016-12-01

    The concept of aquifer sustainable yield has long been criticized, debated, and even disparaged among groundwater hydrologists, but policy-makers and professional water resource managers inevitably ask them for unequivocal answers to such questions as "What is the absolute maximum volume of water that could be sustainably withdrawn from this aquifer?" We submit that it is therefore incumbent upon hydrologists to develop and offer valid practical definitions of sustainable yield that can be usefully applied to given conditions and types of aquifers. In coastal aquifers, water quality—in terms of salinity—is affected by changes in the natural water budget and the volume rate of artificial extraction. In principle, one can identify a family of assay curves for a given aquifer, showing the specific relationships between the quantity and quality of the water extracted under given conditions of recharge. The concept of the assay curve, borrowed from the literature of natural-resource extraction economics, has to our knowledge not yet found its way into the literature of applied hydrology. The relationships between recharge, extraction, and water quality that define the assay curve can be determined empirically from sufficient observations of groundwater response to recharge and extraction and can be estimated from models that have been reliably history-matched ("calibrated") to such data. We thus propose a working definition of sustainable yield for coastal aquifers in terms of the capacity that ultimately could be achieved by an ideal production system, given what is known or can be assumed about the natural limiting conditions. Accordingly, we also offer an approach for defining an ideal production system for a given aquifer, and demonstrate how observational data and/or modeling results can be used to develop assay curves of quality vs. quantity extracted, which can serve as reliable predictive tools for engineers, managers, regulators, and policy-makers.

  9. Formal design and verification of a reliable computing platform for real-time control. Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.

    1990-01-01

    A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.

  10. Slope Controls Grain Yield and Climatic Yield in Mountainous Yunnan province, China

    NASA Astrophysics Data System (ADS)

    Duan, X.; Rong, L.; Gu, Z.; Feng, D.

    2017-12-01

    Mountainous regions are increasingly vulnerable to food insecurity because of limited arable land, growing population pressure, and climate change. Development of sustainable mountain agriculture will require an increased understanding of the effects of environmental factors on grain and climatic yields. The objective of this study was to explore the relationships between actual grain yield, climatic yield, and environmental factors in a mountainous region in China. We collected data on the average grain yield per unit area in 119 counties in Yunnan province from 1985 to 2012, and chose 17 environmental factors for the same period. Our results showed that actual grain yield ranged from 1.43 to 6.92 t·ha-1, and the climatic yield ranged from -0.15 to -0.01 t·ha-1. Lower climatic yield but higher grain yield was generally found in central areas and at lower slopes and elevations in the western and southwestern counties of Yunnan province. Higher climatic yield but lower grain yield was found in northwestern parts of Yunnan province on steep slopes. Annual precipitation and temperature had a weak influence on the climatic yield. Slope explained 44.62% and 26.29% of the variation in grain yield and climatic yield, respectively. The effects of topography on grain and climatic yields were greater than those of climatic factors. Slope was the most important environmental variable for the variability in climatic and grain yields in the mountainous Yunnan province due to the highly heterogeneous topographic conditions. Conversion of slopes to terraces in areas with higher climatic yields is an effective way to maintain grain production in response to climate variability. Additionally, soil amendments and soil and water conservation measures should be considered to maintain soil fertility and aid in sustainable development in central areas, and in counties at lower slopes and elevations in western and southwestern Yunnan province.

  11. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. This paper describes the numerous factors that potentially degrade system reliability and the ways in which those factors peculiar to highly reliable fault-tolerant systems are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  12. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  13. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system.

    PubMed

    Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao

    2017-03-15

    As oil transport increases in the Texas bays, greater risk of ship collisions will become a challenge, with oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. The state-of-the-art operational oil spill forecast modeling system advances oil spill response into a new stage. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast result, leading to misdirection in contingency planning. Thus, understanding forecast uncertainty and reliability becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map with the associated hindcast simulation. A HyosPy-based simple statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, enabling emergency managers to improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
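The abstract's Monte Carlo step, turning an ensemble of perturbed runs into a forecast probability map, can be sketched with a toy particle model. The grid size, drift, and perturbation values below are all hypothetical and stand in for the paper's hydrodynamic forecasts:

```python
import numpy as np

rng = np.random.default_rng(7)
n_runs, n_particles, n_cells = 200, 500, 50

# Each ensemble member perturbs the mean transport (standing in for uncertain
# wind/current forcing), then scatters oil particles around that drift.
oiled_runs = np.zeros((n_cells, n_cells))
for _ in range(n_runs):
    drift = rng.normal([25.0, 25.0], 5.0)                  # perturbed drift (cells)
    x, y = rng.normal(drift[:, None], 3.0, (2, n_particles))
    hit = np.zeros((n_cells, n_cells), dtype=bool)
    ix = np.clip(x.astype(int), 0, n_cells - 1)
    iy = np.clip(y.astype(int), 0, n_cells - 1)
    hit[iy, ix] = True                 # cells touched by this ensemble member
    oiled_runs += hit

prob_map = oiled_runs / n_runs   # forecast probability that each cell is oiled
```

Comparing such a probability map against the subsequent hindcast trajectory is the reliability check the paper summarizes as a belief degree.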

  14. Preparation of DNA from cytological material: effects of fixation, staining, and mounting medium on DNA yield and quality.

    PubMed

    Dejmek, Annika; Zendehrokh, Nooreldin; Tomaszewska, Malgorzata; Edsjö, Anders

    2013-07-01

    Personalized oncology requires molecular analysis of tumor cells. Several studies have demonstrated that cytological material is suitable for DNA analysis, but to the authors' knowledge there are no systematic studies comparing how the yield and quality of extracted DNA are affected by the various techniques used for the preparation of cytological material. DNA yield and quality were compared using cultured human lung cancer cells subjected to different preparation techniques used in routine cytology, including fixation, mounting medium, and staining. The results were compared with the outcome of epidermal growth factor receptor (EGFR) genotyping of 66 clinical cytological samples using the same DNA preparation protocol. All tested protocol combinations resulted in fragment lengths of at least 388 base pairs. The mounting agent EcoMount resulted in higher yields than traditional xylene-based medium. Spray and ethanol fixation resulted in both a higher yield and better DNA quality than air drying. In liquid-based cytology (LBC) methods, CytoLyt solution resulted in a 5-fold higher yield than CytoRich Red. Papanicolaou staining provided twice the yield of hematoxylin and eosin staining in both liquid-based preparations. Genotyping outcome and quality control values from the clinical EGFR genotyping demonstrated a sufficient amount and amplifiability of DNA in both spray-fixed and air-dried cytological samples. Reliable clinical genotyping can be performed using all tested methods. However, in the cell line experiments, spray- or ethanol-fixed, Papanicolaou-stained slides provided the best results in terms of yield and fragment length. In LBC, the DNA recovery efficiency of the preserving medium may differ considerably, which should be taken into consideration when introducing LBC. Cancer (Cancer Cytopathol) 2013;121:344-353. © 2013 American Cancer Society.

  15. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    PubMed

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
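
    The model-averaging idea invoked in this abstract can be illustrated generically: weight each candidate model by its information-criterion support and average the quantity of interest over models. This sketch uses Akaike weights as one common choice (the paper works with hierarchical linear models; the AIC values and per-model reliabilities below are hypothetical).

```python
import numpy as np

def akaike_weights(aics):
    """Model-averaging weights w_i proportional to exp(-delta_i / 2),
    where delta_i = AIC_i - min(AIC). Weights sum to 1."""
    delta = np.asarray(aics, dtype=float)
    delta = delta - delta.min()
    w = np.exp(-delta / 2.0)
    return w / w.sum()

# Hypothetical: three candidate covariance models that fit almost
# equally well, each implying its own reliability coefficient.
weights = akaike_weights([100.0, 102.0, 110.0])
r_models = np.array([0.80, 0.78, 0.60])
r_avg = float(weights @ r_models)   # model-averaged reliability
```

    When model-selection uncertainty is high, the averaged coefficient reflects all well-supported models rather than committing to a single, possibly misspecified one.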

  16. [Santa Claus is perceived as reliable and friendly: results of the Danish Christmas 2013 survey].

    PubMed

    Amin, Faisal Mohammad; West, Anders Sode; Jørgensen, Carina Sleiborg; Simonsen, Sofie Amalie; Lindberg, Ulrich; Tranum-Jensen, Jørgen; Hougaard, Anders

    2013-12-02

    Several studies have indicated that the population in general perceives doctors as reliable. In the present study perceptions of reliability and kindness attributed to another socially significant archetype, Santa Claus, have been comparatively examined in relation to the doctor. In all, 52 randomly chosen participants were shown a film, where a narrator dressed either as Santa Claus or as a doctor tells an identical story. Structured interviews were then used to assess the subjects' perceptions of reliability and kindness in relation to the narrator's appearance. We found a strong tendency for Santa Claus to be perceived as friendlier than the doctor (p = 0.053). However, there was no significant difference in the perception of reliability between Santa Claus and the doctor (p = 0.524). The positive associations attributed to Santa Claus probably explain why he is perceived as friendlier than the doctor, who may be associated with more serious and unpleasant memories of illness and suffering. Surprisingly, and despite him being an imaginary person, Santa Claus was assessed as being as reliable as the doctor.

  17. Predicting paddlefish roe yields using an extension of the Beverton–Holt equilibrium yield-per-recruit model

    USGS Publications Warehouse

    Colvin, M.E.; Bettoli, Phillip William; Scholten, G.D.

    2013-01-01

    Equilibrium yield models predict the total biomass removed from an exploited stock; however, traditional yield models must be modified to simulate roe yields because a linear relationship between age (or length) and mature ovary weight does not typically exist. We extended the traditional Beverton-Holt equilibrium yield model to predict roe yields of Paddlefish Polyodon spathula in Kentucky Lake, Tennessee-Kentucky, as a function of varying conditional fishing mortality rates (10-70%), conditional natural mortality rates (cm; 9% and 18%), and four minimum size limits ranging from 864 to 1,016 mm eye-to-fork length. These results were then compared to a biomass-based yield assessment. Analysis of roe yields indicated the potential for growth overfishing at lower exploitation rates and smaller minimum length limits than were suggested by the biomass-based assessment. Patterns of biomass and roe yields in relation to exploitation rates were similar regardless of the simulated value of cm, thus indicating that the results were insensitive to changes in cm. Our results also suggested that higher minimum length limits would increase roe yield and reduce the potential for growth overfishing and recruitment overfishing at the simulated cm values. Biomass-based equilibrium yield assessments are commonly used to assess the effects of harvest on other caviar-based fisheries; however, our analysis demonstrates that such assessments likely underestimate the probability and severity of growth overfishing when roe is targeted. Therefore, equilibrium roe yield-per-recruit models should also be considered to guide the management process for caviar-producing fish species.
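
    The extension this abstract describes can be caricatured in a discrete-time per-recruit loop: survival declines each year with conditional natural mortality (cm), harvest begins at a minimum age (standing in for a minimum length limit), and roe yield accumulates harvested ovary weight rather than body weight. This is a hypothetical simplification, not the authors' model; the ovary-weight curve and all parameter values are illustrative.

```python
import numpy as np

def roe_yield_per_recruit(max_age, ovary_weight, cf, cm, min_age):
    """Equilibrium roe yield per recruit (discrete-time sketch).

    cf, cm      : conditional fishing / natural mortality per year
    ovary_weight: callable giving mature ovary weight at age a
    min_age     : first age vulnerable to harvest (length-limit proxy)
    """
    n = 1.0                          # survivors per recruit
    total = 0.0
    for a in range(max_age + 1):
        if a >= min_age:
            harvested = n * cf       # fish removed by the fishery
            total += harvested * ovary_weight(a)
            n -= harvested
        n *= (1.0 - cm)              # natural mortality
    return total

# Illustrative logistic ovary-weight curve (kg) versus age.
ovary = lambda a: 4.0 / (1.0 + np.exp(-(a - 9)))
y = roe_yield_per_recruit(30, ovary, cf=0.3, cm=0.09, min_age=10)
```

    Because ovary weight rises steeply only at older ages, roe yield can fall with heavy harvest of young fish even while biomass yield still looks acceptable, which is the growth-overfishing pattern the study reports.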

  18. Feasibility model of a high reliability five-year tape transport, Volume 1. [development, performance, and test results

    NASA Technical Reports Server (NTRS)

    Eshleman, R. L.; Meyers, A. P.; Davidson, W. A.; Gortowski, R. C.; Anderson, M. E.

    1973-01-01

    The development, performance, and test results for the spaceborne magnetic tape transport are discussed. An analytical model of the tape transport was used to optimize its conceptual design. Each of the subsystems was subjected to reliability analyses which included structural integrity, maintenance of system performance within acceptable bounds, and avoidance of fatigue failure. These subsystems were also compared with each other in order to evaluate reliability characteristics. The transport uses no mechanical couplings. Four drive motors, one for each reel and one for each of two capstans, are used in a differential mode. There are two hybrid, spherical, cone tapered-crown rollers for tape guidance. Storage of the magnetic tape is provided by a reel assembly which includes the reel, a reel support structure and bearings, dust seals, and a dc drive motor. A summary of transport test results on tape guidance, flutter, and skew is provided.

  19. Evaluating the capabilities of watershed-scale models in estimating sediment yield at field-scale.

    PubMed

    Sommerlot, Andrew R; Nejadhashemi, A Pouyan; Woznicki, Sean A; Giri, Subhasis; Prohaska, Michael D

    2013-09-30

    Many watershed model interfaces have been developed in recent years for predicting field-scale sediment loads. They share the goal of providing data for decisions aimed at improving watershed health and the effectiveness of water quality conservation efforts. The objectives of this study were to: 1) compare three watershed-scale models (Soil and Water Assessment Tool (SWAT), Field_SWAT, and the High Impact Targeting (HIT) model) against a calibrated field-scale model (RUSLE2) in estimating sediment yield from 41 randomly selected agricultural fields within the River Raisin watershed; 2) evaluate the statistical significance among models; 3) assess the watershed models' capabilities in identifying areas of concern at the field level; 4) evaluate the reliability of the watershed-scale models for field-scale analysis. The SWAT model produced the most similar estimates to RUSLE2 by providing the closest median and the lowest absolute error in sediment yield predictions, while the HIT model estimates were the worst. Concerning statistically significant differences between models, SWAT was the only model found to be not significantly different from the calibrated RUSLE2 at α = 0.05. Meanwhile, all models were incapable of identifying priority areas similar to the RUSLE2 model. Overall, SWAT provided the most correct estimates (51%) within the uncertainty bounds of RUSLE2 and is the most reliable among the studied models, while HIT is the least reliable. The results of this study suggest caution should be exercised when using watershed-scale models for field level decision-making, and field-specific data is of paramount importance. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Limits on reliable information flows through stochastic populations.

    PubMed

    Boczkowski, Lucas; Natale, Emanuele; Feinerman, Ofer; Korman, Amos

    2018-06-06

    Biological systems can share and collectively process information to yield emergent effects, despite inherent noise in communication. While man-made systems often employ intricate structural solutions to overcome noise, the structure of many biological systems is more amorphous. It is not well understood how communication noise may affect the computational repertoire of such groups. To approach this question we consider the basic collective task of rumor spreading, in which information from a few knowledgeable sources must reliably flow into the rest of the population. We study the effect of communication noise on the ability of groups that lack stable structures to efficiently solve this task. We present an impossibility result which strongly restricts reliable rumor spreading in such groups. Namely, we prove that, in the presence of even moderate levels of noise that affect all facets of the communication, no scheme can significantly outperform the trivial one in which agents have to wait until directly interacting with the sources, a process which requires time linear in the population size. Our results imply that in order to achieve efficient rumor spreading a system must exhibit either some degree of structural stability or, alternatively, some facet of the communication which is immune to noise. We then corroborate this claim by providing new analyses of experimental data regarding recruitment in Cataglyphis niger desert ants. Finally, in light of our theoretical results, we discuss strategies to overcome noise in other biological systems.

  1. Reliable bonding using indium-based solders

    NASA Astrophysics Data System (ADS)

    Cheong, Jongpil; Goyal, Abhijat; Tadigadapa, Srinivas; Rahn, Christopher

    2004-01-01

    Low temperature bonding techniques with high bond strengths and reliability are required for the fabrication and packaging of MEMS devices. Indium and indium-tin based bonding processes are explored for the fabrication of a flextensional MEMS actuator, which requires the integration of lead zirconate titanate (PZT) substrate with a silicon micromachined structure at low temperatures. The developed technique can be used either for wafer or chip level bonding. The lithographic steps used for the patterning and delineation of the seed layer limit the resolution of this technique. Using this technique, reliable bonds were achieved at a temperature of 200°C. The bonds yielded an average tensile strength of 5.41 MPa and 7.38 MPa for samples using indium and indium-tin alloy solders as the intermediate bonding layers, respectively. The bonds (with line width of 100 microns) showed hermetic sealing capability of better than 10⁻¹¹ mbar·l/s when tested using a commercial helium leak tester.

  2. Reliable bonding using indium-based solders

    NASA Astrophysics Data System (ADS)

    Cheong, Jongpil; Goyal, Abhijat; Tadigadapa, Srinivas; Rahn, Christopher

    2003-12-01

    Low temperature bonding techniques with high bond strengths and reliability are required for the fabrication and packaging of MEMS devices. Indium and indium-tin based bonding processes are explored for the fabrication of a flextensional MEMS actuator, which requires the integration of lead zirconate titanate (PZT) substrate with a silicon micromachined structure at low temperatures. The developed technique can be used either for wafer or chip level bonding. The lithographic steps used for the patterning and delineation of the seed layer limit the resolution of this technique. Using this technique, reliable bonds were achieved at a temperature of 200°C. The bonds yielded an average tensile strength of 5.41 MPa and 7.38 MPa for samples using indium and indium-tin alloy solders as the intermediate bonding layers, respectively. The bonds (with line width of 100 microns) showed hermetic sealing capability of better than 10⁻¹¹ mbar·l/s when tested using a commercial helium leak tester.

  3. Anatomical landmark position--can we trust what we see? Results from an online reliability and validity study of osteopaths.

    PubMed

    Pattyn, Elise; Rajendran, Dévan

    2014-04-01

    Practitioners traditionally use observation to classify the position of patients' anatomical landmarks. This information may contribute to diagnosis and patient management. To calculate a) Inter-rater reliability of categorising the sagittal plane position of four anatomical landmarks (lateral femoral epicondyle, greater trochanter, mastoid process and acromion) on side-view photographs (with landmarks highlighted and not-highlighted) of anonymised subjects; b) Intra-rater reliability; c) Individual landmark inter-rater reliability; d) Validity against a 'gold standard' photograph. Online inter- and intra-rater reliability study. Photographed subjects: convenience sample of asymptomatic students; raters: randomly selected UK registered osteopaths. 40 photographs of 30 subjects were used, a priori clinically acceptable reliability was ≥0.4. Inter-rater arm: 20 photographs without landmark highlights plus 10 with highlights; Intra-rater arm: 10 duplicate photographs (non-highlighted landmarks). Validity arm: highlighted landmark scores versus 'gold standard' photographs with vertical line. Research ethics approval obtained. Osteopaths (n = 48) categorised landmark position relative to imagined vertical-line; Gwet's Agreement Coefficient 1 (AC1) calculated and chance-corrected coefficient benchmarked against Landis and Koch's scale; Validity calculation used Kendall's tau-B. Inter-rater reliability was 'fair' (AC1 = 0.342; 95% confidence interval (CI) = 0.279-0.404) for non-highlighted landmarks and 'moderate' (AC1 = 0.700; 95% CI = 0.596-0.805) for highlighted landmarks. Intra-rater reliability was 'fair' (AC1 = 0.522); range was 'poor' (AC1 = 0.160) to 'substantial' (AC1 = 0.896). No differences were found between individual landmarks. Validity was 'low' (TB = 0.327; p = 0.104). Both inter- and intra-rater reliability was 'fair' but below clinically acceptable levels, validity was 'low'. Together these results challenge the clinical practice of
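
    Gwet's agreement coefficient used in this study corrects observed agreement with a chance term built from category prevalences. A two-rater sketch of AC1 is below (the study itself involved many raters; the multi-rater version generalizes the same pa and pe terms, and this code is only an illustration of the statistic):

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b, categories):
    """Gwet's first-order agreement coefficient for two raters.

    AC1 = (pa - pe) / (1 - pe), with chance agreement
    pe = 1/(Q-1) * sum_q pi_q * (1 - pi_q), where pi_q is the mean
    proportion of ratings in category q across both raters.
    """
    n = len(ratings_a)
    q = len(categories)
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum((counts[c] / (2 * n)) * (1 - counts[c] / (2 * n))
             for c in categories) / (q - 1)
    return (pa - pe) / (1 - pe)
```

    Unlike Cohen's kappa, AC1 stays well behaved when one category dominates (as with landmark positions mostly judged "neutral"), which is a common reason it is chosen for observer-reliability studies.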

  4. Nanowire growth process modeling and reliability models for nanodevices

    NASA Astrophysics Data System (ADS)

    Fathi Aghdam, Faranak

    Nowadays, nanotechnology is becoming an inescapable part of everyday life. The biggest barrier to its rapid growth is our inability to produce nanoscale materials in a reliable and cost-effective way. In fact, the current yield of nano-devices is very low (around 10%), which makes fabrication of nano-devices very expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control nano-structure synthesis variations. The main directions of reliability research in nanotechnology can be classified either from a material perspective or from a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nanomaterials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems at nano-level architectures by taking into account the reliability of future products. In this dissertation, we have investigated two topics on both nano-materials and nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model considering the shadowing effect and shared substrate diffusion area to determine the optimal pitch that would ensure the minimum competition between nanowires. A sigmoid function is used in the model, and the least squares estimation method is used to estimate the model parameters. 
The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays

  5. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

    Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. Design of fasteners involves variables related to the fabrication, manufacturing, behavior of fasteners and joining parts material, structural geometry of the joining components, size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.

  6. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
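
    The core series-system step that a tool like SRFYDO performs can be sketched simply: give each component a Beta posterior from its test data and multiply posterior draws, since a series system works only if every component works. This is a generic Bayesian sketch, not SRFYDO's actual model (which also handles age and usage covariates); the component test counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def series_reliability_draws(component_data, n_draws=10_000):
    """Posterior draws of series-system reliability.

    component_data: (successes, trials) per component. Each component
    gets a Beta(1 + s, 1 + f) posterior (uniform prior); system
    reliability is the product of the component draws.
    """
    draws = np.ones(n_draws)
    for s, t in component_data:
        draws *= rng.beta(1 + s, 1 + (t - s), size=n_draws)
    return draws

# Hypothetical test data for three components in series.
draws = series_reliability_draws([(98, 100), (47, 50), (29, 30)])
estimate = draws.mean()
lo, hi = np.percentile(draws, [2.5, 97.5])
```

    The spread of the draws gives the uncertainty estimate directly, which is the kind of component-to-system roll-up with credible intervals the abstract describes.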

  7. Test-retest reliability and cross validation of the functioning everyday with a wheelchair instrument.

    PubMed

    Mills, Tamara L; Holm, Margo B; Schmeler, Mark

    2007-01-01

    The purpose of this study was to establish the test-retest reliability and content validity of an outcomes tool designed to measure the effectiveness of seating-mobility interventions on the functional performance of individuals who use wheelchairs or scooters as their primary seating-mobility device. The instrument, Functioning Everyday With a Wheelchair (FEW), is a questionnaire designed to measure perceived user function related to wheelchair/scooter use. Using consumer-generated items, FEW Beta Version 1.0 was developed and test-retest reliability was established. Cross-validation of FEW Beta Version 1.0 was then carried out with five samples of seating-mobility users to establish content validity. Based on the content validity study, FEW Version 2.0 was developed and administered to seating-mobility consumers to examine its test-retest reliability. FEW Beta Version 1.0 yielded an intraclass correlation coefficient (ICC) Model (3,k) of .92, p < .001, and the content validity results revealed that FEW Beta Version 1.0 captured 55% of seating-mobility goals reported by consumers across five samples. FEW Version 2.0 yielded ICC(3,k) = .86, p < .001, and captured 98.5% of consumers' seating-mobility goals. The cross-validation study identified new categories of seating-mobility goals for inclusion in FEW Version 2.0, and the content validity of FEW Version 2.0 was confirmed. FEW Beta Version 1.0 and FEW Version 2.0 were highly stable in their measurement of participants' seating-mobility goals over a 1-week interval.
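
    The test-retest statistic reported above, ICC(3,k), comes from a two-way ANOVA decomposition of a subjects-by-occasions score matrix. A minimal sketch of the computation (a generic formula, not the FEW study's software):

```python
import numpy as np

def icc_3k(x):
    """ICC(3,k): two-way mixed effects, consistency, average of k raters.

    x: (n_subjects, k_raters) score matrix.
    ICC(3,k) = (MSR - MSE) / MSR from the two-way ANOVA decomposition.
    """
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    resid = (x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0, keepdims=True) + grand)
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual
    return (ms_r - ms_e) / ms_r
```

    Because ICC(3,k) measures consistency, a constant shift between the two administrations (e.g. every score one point higher at retest) does not lower the coefficient; only subject-by-occasion interaction does.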

  8. The Use of Invariance and Bootstrap Procedures as a Method to Establish the Reliability of Research Results.

    ERIC Educational Resources Information Center

    Sandler, Andrew B.

    Statistical significance is misused in educational and psychological research when it is applied as a method to establish the reliability of research results. Other techniques have been developed which can be correctly utilized to establish the generalizability of findings. Methods that do provide such estimates are known as invariance or…

  9. Facial disability index (FDI): Adaptation to Spanish, reliability and validity

    PubMed Central

    Gonzalez-Cardero, Eduardo; Cayuela, Aurelio; Acosta-Feria, Manuel; Gutierrez-Perez, Jose-Luis

    2012-01-01

    Objectives: To adapt to Spanish the facial disability index (FDI) described by VanSwearingen and Brach in 1995 and to assess its reliability and validity in patients with facial nerve paresis after parotidectomy. Study Design: The present study was conducted in two different stages: a) cross-cultural adaptation of the questionnaire and b) cross-sectional study of a control group of 79 Spanish-speaking patients who suffered facial paresis after superficial parotidectomy with facial nerve preservation. The cross-cultural adaptation process comprised the following stages: (I) initial translation, (II) synthesis of the translated document, (III) retro-translation, (IV) review by a board of experts, (V) pilot study of the pre-final draft and (VI) analysis of the pilot study and final draft. Results: The reliability and internal consistency of every one of the rating scales included in the FDI (Cronbach’s alpha coefficient) was 0.83 for the complete scale and 0.77 and 0.82 for the physical and the social well-being subscales. The analysis of the factorial validity of the main components of the adapted FDI yielded similar results to the original questionnaire. Bivariate correlations between FDI and House-Brackmann scale were positive. The variance percentage was calculated for all FDI components. Conclusions: The FDI questionnaire is a specific instrument for assessing facial neuromuscular dysfunction which becomes a useful tool in order to determine quality of life in patients with facial nerve paralysis. Spanish adapted FDI is equivalent to the original questionnaire and shows similar reliability and validity. The proven reproducibi-lity, reliability and validity of this questionnaire make it a useful additional tool for evaluating the impact of facial nerve paralysis in Spanish-speaking patients. Key words:Parotidectomy, facial nerve paralysis, facial disability. PMID:22926474

  10. Prediction of kharif rice yield at Kharagpur using disaggregated extended range rainfall forecasts

    NASA Astrophysics Data System (ADS)

    Dhekale, B. S.; Nageswararao, M. M.; Nair, Archana; Mohanty, U. C.; Swain, D. K.; Singh, K. K.; Arunbabu, T.

    2017-08-01

    The Extended Range Forecasts System (ERFS) has been generating monthly and seasonal forecasts on real-time basis throughout the year over India since 2009. India is one of the major rice producer and consumer in South Asia; more than 50% of the Indian population depends on rice as staple food. Rice is mainly grown in kharif season, which contributed 84% of the total annual rice production of the country. Rice cultivation in India is rainfed, which depends largely on rains, so reliability of the rainfall forecast plays a crucial role for planning the kharif rice crop. In the present study, an attempt has been made to test the reliability of seasonal and sub-seasonal ERFS summer monsoon rainfall forecasts for kharif rice yield predictions at Kharagpur, West Bengal by using CERES-Rice (DSSATv4.5) model. These ERFS forecasts are produced as monthly and seasonal mean values and are converted into daily sequences with stochastic weather generators for use with crop growth models. The daily sequences are generated from ERFS seasonal (June-September) and sub-seasonal (July-September, August-September, and September) summer monsoon (June to September) rainfall forecasts which are considered as input in CERES-rice crop simulation model for the crop yield prediction for hindcast (1985-2008) and real-time mode (2009-2015). The yield simulated using India Meteorological Department (IMD) observed daily rainfall data is considered as baseline yield for evaluating the performance of predicted yields using the ERFS forecasts. The findings revealed that the stochastic disaggregation can be used to disaggregate the monthly/seasonal ERFS forecasts into daily sequences. 
The year to year variability in rice yield at Kharagpur is efficiently predicted by using the ERFS forecast products in hindcast as well as real time, and significant enhancement in the prediction skill is noticed with advancement in the season due to incorporation of observed weather data which reduces uncertainty of

  11. Spatial cue reliability drives frequency tuning in the barn Owl's midbrain

    PubMed Central

    Cazettes, Fanny; Fischer, Brian J; Pena, Jose L

    2014-01-01

    The robust representation of the environment from unreliable sensory cues is vital for the efficient function of the brain. However, how the neural processing captures the most reliable cues is unknown. The interaural time difference (ITD) is the primary cue to localize sound in horizontal space. ITD is encoded in the firing rate of neurons that detect interaural phase difference (IPD). Due to the filtering effect of the head, IPD for a given location varies depending on the environmental context. We found that, in barn owls, at each location there is a frequency range where the head filtering yields the most reliable IPDs across contexts. Remarkably, the frequency tuning of space-specific neurons in the owl's midbrain varies with their preferred sound location, matching the range that carries the most reliable IPD. Thus, frequency tuning in the owl's space-specific neurons reflects a higher-order feature of the code that captures cue reliability. DOI: http://dx.doi.org/10.7554/eLife.04854.001 PMID:25531067

  12. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper is to address the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), Multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre and post flight check outs and inspection, extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
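
    One family of drivers listed above (number of engines, engine-out design, catastrophic fraction) admits a simple closed form: treat each engine as succeeding, shutting down benignly, or failing catastrophically, and sum the multinomial outcomes the stage survives. This is a textbook-style sketch of that sensitivity calculation, not the paper's model; all numbers are hypothetical.

```python
from math import comb

def stage_reliability(n_engines, r_engine, n_out_tolerated,
                      p_catastrophic=0.0):
    """Probability a multi-engine stage completes its burn.

    Succeeds if at most `n_out_tolerated` engines shut down benignly
    and none fails catastrophically. Engine failures are independent;
    `p_catastrophic` is the fraction of engine failures that destroy
    the stage outright (the 'catastrophic fraction').
    """
    q = 1.0 - r_engine                    # per-engine failure prob
    q_benign = q * (1.0 - p_catastrophic)
    total = 0.0
    for k in range(n_out_tolerated + 1):  # k benign shutdowns, 0 catastrophic
        total += (comb(n_engines, k) * q_benign ** k
                  * r_engine ** (n_engines - k))
    return total
```

    Sweeping `n_engines`, `p_catastrophic`, or `n_out_tolerated` through a function like this reproduces the qualitative trade-off the paper analyzes: engine-out capability helps only to the extent that failures are benign and detectable.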

  13. Reliability and validity of two isometric squat tests.

    PubMed

    Blazevich, Anthony J; Gill, Nicholas; Newton, Robert U

    2002-05-01

    The purpose of the present study was first to examine the reliability of isometric squat (IS) and isometric forward hack squat (IFHS) tests to determine if repeated measures on the same subjects yielded reliable results. The second purpose was to examine the relation between isometric and dynamic measures of strength to assess validity. Fourteen male subjects performed maximal IS and IFHS tests on 2 occasions and 1 repetition maximum (1-RM) free-weight squat and forward hack squat (FHS) tests on 1 occasion. The 2 tests were found to be highly reliable (intraclass correlation coefficient [ICC](IS) = 0.97 and ICC(IFHS) = 1.00). There was a strong relation between average IS and 1-RM squat performance, and between IFHS and 1-RM FHS performance (r(squat) = 0.77, r(FHS) = 0.76; p < 0.01), but a weak relation between squat and FHS test performances (r < 0.55). There was also no difference between observed 1-RM values and those predicted by our regression equations. Errors in predicting 1-RM performance were in the order of 8.5% (standard error of the estimate [SEE] = 13.8 kg) and 7.3% (SEE = 19.4 kg) for IS and IFHS respectively. Correlations between isometric and 1-RM tests were not of sufficient size to indicate high validity of the isometric tests. Together the results suggest that IS and IFHS tests could detect small differences in multijoint isometric strength between subjects, or performance changes over time, and that the scores in the isometric tests are well related to 1-RM performance. However, there was a small error when predicting 1-RM performance from isometric performance, and these tests have not been shown to discriminate between small changes in dynamic strength. The weak relation between squat and FHS test performance can be attributed to differences in the movement patterns of the tests.
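
    The prediction-error figures quoted above (SEE = 13.8 and 19.4 kg) come from simple linear regression of 1-RM load on isometric force. A generic sketch of fitting such a predictor and computing its SEE (the data points below are made up, not the study's):

```python
import numpy as np

def fit_1rm_predictor(isometric, one_rm):
    """Least-squares line predicting 1-RM load from isometric force,
    plus the standard error of the estimate (SEE)."""
    slope, intercept = np.polyfit(isometric, one_rm, 1)
    pred = slope * np.asarray(isometric) + intercept
    resid = np.asarray(one_rm) - pred
    # SEE uses n - 2 degrees of freedom (slope and intercept estimated).
    see = float(np.sqrt((resid ** 2).sum() / (len(one_rm) - 2)))
    return slope, intercept, see
```

    Expressing the SEE as a percentage of the mean 1-RM, as the abstract does, makes prediction errors comparable across the two test protocols.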

  14. Brazilian Soybean Yields and Yield Gaps Vary with Farm Size

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.; Griffin, T. S.; Bragança, A.

    2017-12-01

    Understanding the farm size-specific characteristics of crop yields and yield gaps may help to improve yields by enabling better targeting of technical assistance and agricultural development programs. Linking remote sensing-based yield estimates with property boundaries provides a novel view of the relationship between farm size and yield structure (yield magnitude, gaps, and stability over time). A growing literature documents variations in yield gaps, but largely ignores the role of farm size as a factor shaping yield structure. Research on the inverse farm size-productivity relationship (IR) theory - that small farms are more productive than large ones all else equal - has documented that yield magnitude may vary by farm size, but has not considered other yield structure characteristics. We examined farm size - yield structure relationships for soybeans in Brazil for years 2001-2015. Using out-of-sample soybean yield predictions from a statistical model, we documented 1) gaps between the 95th percentile of attained yields and mean yields within counties and individual fields, and 2) yield stability defined as the standard deviation of time-detrended yields at given locations. We found a direct relationship between soy yields and farm size at the national level, while the strength and the sign of the relationship varied by region. Soybean yield gaps were found to be inversely related to farm size metrics, even when yields were only compared to farms of similar size. The relationship between farm size and yield stability was nonlinear, with mid-sized farms having the most stable yields. The work suggests that farm size is an important factor in understanding yield structure and that opportunities for improving soy yields in Brazil are greatest among smaller farms.

  15. [A systematic social observation tool: methods and results of inter-rater reliability].

    PubMed

    Freitas, Eulilian Dias de; Camargos, Vitor Passos; Xavier, César Coelho; Caiaffa, Waleska Teixeira; Proietti, Fernando Augusto

    2013-10-01

    Systematic social observation has been used as a health research methodology for collecting information on the neighborhood physical and social environment. The objectives of this article were to describe the operationalization of direct observation of the physical and social environment in urban areas and to evaluate the instrument's reliability. The systematic social observation instrument was designed to collect information in several domains. A total of 1,306 street segments belonging to 149 different neighborhoods in Belo Horizonte, Minas Gerais, Brazil, were observed. For the reliability study, 149 segments (1 per neighborhood) were re-audited, and Fleiss kappa was used to assess inter-rater agreement. Mean agreement was 0.57 (SD = 0.24); 53% had substantial or almost perfect agreement, and 20.4%, moderate agreement. The instrument appears to be appropriate for observing neighborhood characteristics that are not time-dependent, especially urban services, property characterization, pedestrian environment, and security.
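    Fleiss' kappa, the agreement statistic used above, can be computed directly from an items-by-categories count matrix via the standard formula. The sketch below uses a toy matrix (hypothetical counts, not the study's data):

    ```python
    import numpy as np

    def fleiss_kappa(ratings: np.ndarray) -> float:
        """Fleiss' kappa for an (items x categories) count matrix.

        Each row holds, for one observed unit (e.g. a street segment), how many
        of the n raters assigned it to each category. Rows must all sum to n.
        """
        n = ratings.sum(axis=1)[0]           # raters per item
        N = ratings.shape[0]                 # number of items
        p_j = ratings.sum(axis=0) / (N * n)  # overall category proportions
        # per-item observed agreement
        P_i = (np.sum(ratings**2, axis=1) - n) / (n * (n - 1))
        P_bar = P_i.mean()                   # mean observed agreement
        P_e = np.sum(p_j**2)                 # chance agreement
        return (P_bar - P_e) / (1 - P_e)

    # Toy example: 4 segments rated by 2 raters into 3 categories
    counts = np.array([
        [2, 0, 0],
        [0, 2, 0],
        [1, 1, 0],
        [0, 0, 2],
    ])
    print(round(fleiss_kappa(counts), 3))  # → 0.619
    ```

    A value of 0.57, as reported above, falls in the "moderate" band of the usual kappa interpretation scale.
    
    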

  16. Results of the Abbott RealTime HIV-1 assay for specimens yielding "target not detected" results by the Cobas AmpliPrep/Cobas TaqMan HIV-1 Test.

    PubMed

    Babady, N Esther; Germer, Jeffrey J; Yao, Joseph D C

    2010-03-01

    No significantly discordant results were observed between the Abbott RealTime HIV-1 assay and the COBAS AmpliPrep/COBAS TaqMan HIV-1 Test (CTM) among 1,190 unique clinical plasma specimens obtained from laboratories located in 40 states representing all nine U.S. geographic regions and previously yielding "target not detected" results by CTM.

  17. Identifying seedling root architectural traits associated with yield and yield components in wheat.

    PubMed

    Xie, Quan; Fernando, Kurukulasuriya M C; Mayes, Sean; Sparkes, Debbie L

    2017-05-01

    Plant roots growing underground are critical for soil resource acquisition, anchorage and plant-environment interactions. In wheat ( Triticum aestivum ), however, the target root traits to improve yield potential still remain largely unknown. This study aimed to identify traits of seedling root system architecture (RSA) associated with yield and yield components in 226 recombinant inbred lines (RILs) derived from a cross between the bread wheat Triticum aestivum 'Forno' (small, wide root system) and spelt Triticum spelta 'Oberkulmer' (large, narrow root system). A 'pouch and wick' high-throughput phenotyping pipeline was used to determine the RSA traits of 13-day-old RIL seedlings. Two field experiments and one glasshouse experiment were carried out to investigate the yield, yield components and phenology, followed by identification of quantitative trait loci (QTLs). There was substantial variation in RSA traits between genotypes. Seminal root number and total root length were both positively associated with grains m -2 , grains per spike, above-ground biomass m -2 and grain yield. More seminal roots and longer total root length were also associated with delayed maturity and extended grain filling, likely to be a consequence of more grains being defined before anthesis. Additionally, the maximum width of the root system displayed positive relationships with spikes m -2 , grains m -2 and grain yield. Ten RILs selected for the longest total roots exhibited the same effects on yield and phenology as described above, compared with the ten lines with the shortest total roots. Genetic analysis revealed 38 QTLs for the RSA, and QTL coincidence between the root and yield traits was frequently observed, indicating tightly linked genes or pleiotropy, which concurs with the results of phenotypic correlation analysis. Based on the results from the Forno × Oberkulmer population, it is proposed that vigorous early root growth, particularly more seminal roots and longer total

  18. Yield and economic performance of organic and conventional cotton-based farming systems--results from a field trial in India.

    PubMed

    Forster, Dionys; Andres, Christian; Verma, Rajeev; Zundel, Christine; Messmer, Monika M; Mäder, Paul

    2013-01-01

    The debate on the relative benefits of conventional and organic farming systems has recently gained significant interest. So far, global agricultural development has focused on increased productivity rather than on holistic natural resource management for food security. Thus, developing more sustainable farming practices on a large scale is of utmost importance. However, information concerning the performance of farming systems under organic and conventional management in tropical and subtropical regions is scarce. This study presents agronomic and economic data from the conversion phase (2007-2010) of a farming systems comparison trial on a Vertisol soil in Madhya Pradesh, central India. A cotton-soybean-wheat crop rotation under biodynamic, organic and conventional (with and without Bt cotton) management was investigated. We observed a significant yield gap between organic and conventional farming systems in the first crop cycle (cycle 1: 2007-2008) for cotton (-29%) and wheat (-27%), whereas in the second crop cycle (cycle 2: 2009-2010) cotton and wheat yields were similar in all farming systems due to lower yields in the conventional systems. In contrast, organic soybean (a nitrogen-fixing leguminous plant) yields were marginally lower than conventional yields (-1% in cycle 1, -11% in cycle 2). Averaged across all crops, conventional farming systems achieved significantly higher gross margins in cycle 1 (+29%), whereas in cycle 2 gross margins in organic farming systems were significantly higher (+25%) due to lower variable production costs but similar yields. Soybean gross margin was significantly higher in the organic system (+11%) across the four harvest years compared with the conventional systems. Our results suggest that organic soybean production is a viable option for smallholder farmers under the prevailing semi-arid conditions in India. Future research needs to elucidate the long-term productivity and profitability, particularly of cotton and

  19. Reliability and validity of the Brief Pain Inventory in individuals with chronic obstructive pulmonary disease.

    PubMed

    Chen, Y-W; HajGhanbari, B; Road, J D; Coxson, H O; Camp, P G; Reid, W D

    2018-06-08

    Pain is prevalent in chronic obstructive pulmonary disease (COPD), and the Brief Pain Inventory (BPI) appears to be a feasible questionnaire for assessing this symptom. However, the reliability and validity of the BPI have not been determined in individuals with COPD. This study aimed to determine the internal consistency, test-retest reliability and validity (construct, convergent, divergent and discriminant) of the BPI in individuals with COPD. In order to examine test-retest reliability, individuals with COPD were recruited from pulmonary rehabilitation programmes to complete the BPI twice, 1 week apart. In order to investigate validity, de-identified data were retrieved from two previous studies, including forced expiratory volume in 1 s, age, sex and data from four questionnaires: the BPI, short-form McGill Pain Questionnaire (SF-MPQ), 36-Item Short Form Survey (SF-36) and Community Health Activities Model Program for Seniors (CHAMPS) questionnaire. In total, 123 participants were included in the analyses (eligible data were retrieved for 86 participants and an additional 37 participants were recruited). The BPI demonstrated excellent internal consistency and test-retest reliability. It also showed convergent validity with the SF-MPQ and divergent validity with the SF-36. The factor analysis yielded two factors of the BPI, which demonstrated that the two domains of the BPI measure the intended constructs. The BPI can also discriminate pain levels among COPD patients with varied levels of quality of life (SF-36) and physical activity (CHAMPS). The BPI is a reliable and valid pain questionnaire that can be used to evaluate pain in COPD. This study formally established the reliability and validity of the BPI in individuals with COPD, which had not previously been determined in this patient group. The results of this study provide strong evidence that assessment results from this pain questionnaire are reliable and valid. © 2018 European Pain Federation - EFIC®.

  20. Genetic correlations between the cumulative pseudo-survival rate, milk yield, and somatic cell score during lactation in Holstein cattle in Japan using a random regression model.

    PubMed

    Sasaki, O; Aihara, M; Nishiura, A; Takeda, H

    2017-09-01

    Trends in genetic correlations between longevity, milk yield, and somatic cell score (SCS) during lactation in cows are difficult to trace. In this study, changes in the genetic correlations between milk yield, SCS, and cumulative pseudo-survival rate (PSR) during lactation were examined, and the effect of milk yield and SCS information on the reliability of estimated breeding value (EBV) of PSR were determined. Test day milk yield, SCS, and PSR records were obtained for Holstein cows in Japan from 2004 to 2013. A random subset of the data was used for the analysis (825 herds, 205,383 cows). This data set was randomly divided into 5 subsets (162-168 herds, 83,389-95,854 cows), and genetic parameters were estimated in each subset independently. Data were analyzed using multiple-trait random regression animal models including either the residual effect for the whole lactation period (H0), the residual effects for 5 lactation stages (H5), or both of these residual effects (HD). Milk yield heritability increased until 310 to 351 d in milk (DIM) and SCS heritability increased until 330 to 344 DIM. Heritability estimates for PSR increased with DIM from 0.00 to 0.05. The genetic correlation between milk yield and SCS increased negatively to under -0.60 at 455 DIM. The genetic correlation between milk yield and PSR increased until 342 to 355 DIM (0.53-0.57). The genetic correlation between the SCS and PSR was -0.82 to -0.83 at around 180 DIM, and decreased to -0.65 to -0.71 at 455 DIM. The reliability of EBV of PSR for sires with 30 or more recorded daughters was 0.17 to 0.45 when the effects of correlated traits were ignored. The maximum reliability of EBV was observed at 257 (H0) or 322 (HD) DIM. When the correlations of PSR with milk yield and SCS were considered, the reliabilities of PSR estimates increased to 0.31-0.76. The genetic parameter estimates of H5 were the same as those for HD. 
The rank correlation coefficients of the EBV of PSR between H0 and H5 or HD were

  1. Extracting More Information from Passive Optical Tracking Observations for Reliable Orbit Element Generation

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Gehly, S.

    2016-09-01

    This paper presents results from a preliminary method for extracting more orbital information from low-rate passive optical tracking data. An improvement in the accuracy of the observation data yields more accurate and reliable orbital elements. For several objects, orbit propagations from the orbital element generated using the new data-processing method are compared with propagations from the element generated from the raw observation data. Optical tracking data collected by EOS Space Systems, located on Mount Stromlo, Australia, are fitted to provide a new orbital element. The element accuracy is determined from a comparison between the predicted orbit and subsequent tracking data, or a reference orbit if available. The new method is shown to result in better orbit predictions, which has important implications for conjunction assessments and the Space Environment Research Centre space object catalogue. The focus is on obtaining reliable orbital solutions from sparse data. This work forms part of the collaborative effort of the Space Environment Management Cooperative Research Centre, which is developing new technologies and strategies to preserve the space environment (www.serc.org.au).

  2. Health search engine with e-document analysis for reliable search results.

    PubMed

    Gaudinat, Arnaud; Ruch, Patrick; Joubert, Michel; Uziel, Philippe; Strauss, Anne; Thonnet, Michèle; Baud, Robert; Spahni, Stéphane; Weber, Patrick; Bonal, Juan; Boyer, Celia; Fieschi, Marius; Geissbuhler, Antoine

    2006-01-01

    After a review of the existing practical solutions available to citizens for retrieving eHealth documents, the paper describes an original specialized search engine, WRAPIN. WRAPIN uses advanced cross-lingual information retrieval technologies to check information quality by synthesizing the medical concepts, conclusions and references contained in the health literature, in order to identify accurate, relevant sources. Thanks to the MeSH terminology [1] (Medical Subject Headings from the U.S. National Library of Medicine) and advanced approaches such as conclusion extraction from structured documents and query reformulation, WRAPIN offers the user privileged access to multilingual documents without language or medical prerequisites. The results of an evaluation conducted on the WRAPIN prototype show that results of the WRAPIN search engine are perceived as informative by 65% of users (59% for a general-purpose search engine) and as reliable and trustworthy by 72% (41% for the other engine). But it leaves room for improvement, such as increased database coverage, explanation of the original functionalities, and audience adaptability. Following the evaluation, WRAPIN is now in operation on the HON web site (http://www.healthonnet.org), free of charge. Intended for the citizen, it is a good alternative to general-purpose search engines when the user seeks trustworthy health and medical information or wants to automatically check the doubtful content of a Web page.

  3. Linkage design effect on the reliability of surface-micromachined microengines driving a load

    NASA Astrophysics Data System (ADS)

    Tanner, Danelle M.; Peterson, Kenneth A.; Irwin, Lloyd W.; Tangyunyong, Paiboon; Miller, William M.; Eaton, William P.; Smith, Norman F.; Rodgers, M. Steven

    1998-09-01

    The reliability of microengines is a function of the design of the mechanical linkage used to connect the electrostatic actuator to the drive. We have completed a series of reliability stress tests on surface-micromachined microengines driving an inertial load. In these experiments, we used microengines that had pin mechanisms with guides connecting the drive arms to the electrostatic actuators. Comparing these data to previous results using flexure linkages revealed that the pin linkage design was less reliable. The devices were stressed to failure at eight frequencies, both above and below the measured resonance frequency of the microengine. Significant amounts of wear debris were observed both around the hub and pin joint of the drive gear. Additionally, wear tracks were observed in the area where the moving shuttle rubbed against the guides of the pin linkage. At each frequency, we analyzed the statistical data, yielding a lifetime (t50) for median cycles to failure and σ, the shape parameter of the distribution. A model was developed to describe the failure data based on fundamental wear mechanisms and forces exhibited in mechanical resonant systems. The comparison to the model will be discussed.

  4. The Reliability of a Novel Mobile 3-dimensional Wound Measurement Device.

    PubMed

    Anghel, Ersilia L; Kumar, Anagha; Bigham, Thomas E; Maselli, Kathryn M; Steinberg, John S; Evans, Karen K; Kim, Paul J; Attinger, Christopher E

    2016-11-01

    Objective assessment of wound dimensions is essential for tracking progression and determining treatment effectiveness. A reliability study was designed to establish intrarater and interrater reliability of a novel mobile 3-dimensional wound measurement (3DWM) device. Forty-five wounds were assessed by 2 raters using a 3DWM device to obtain length, width, area, depth, and volume measurements. Wounds were also measured manually, using a disposable ruler and digital planimetry. The intraclass correlation coefficient (ICC) was used to establish intrarater and interrater reliability. High levels of intrarater and interrater agreement were observed for area, length, and width; ICC = 0.998, 0.977, 0.955 and 0.999, 0.997, 0.995, respectively. Moderate levels of intrarater (ICC = 0.888) and interrater (ICC = 0.696) agreement were observed for volume. Lastly, depth yielded an intrarater ICC of 0.360 and an interrater ICC of 0.649. Measures from the 3DWM device were highly correlated with those obtained from scaled photography for length, width, and area (ρ = 0.997, 0.988, 0.997, P < 0.001). The 3DWM device yielded correlations of ρ = 0.990, 0.987, 0.996 with P < 0.001 for length, width, and area when compared to manual measurements. The 3DWM device was found to be highly reliable for measuring wound areas for a range of wound sizes and types as compared to manual measurement and digital planimetry. The depth and therefore volume measurement using the 3DWM device was found to have a lower ICC, but volume ICC alone was moderate. Overall, this device offers a mobile option for objective wound measurement in the clinical setting.

  5. Reliability Generalization: Exploring Variation of Reliability Coefficients of MMPI Clinical Scales Scores.

    ERIC Educational Resources Information Center

    Vacha-Haase, Tammi; Kogan, Lori R.; Tani, Crystal R.; Woodall, Renee A.

    2001-01-01

    Used reliability generalization to explore the variance of scores on 10 Minnesota Multiphasic Personality Inventory (MMPI) clinical scales drawing on 1,972 articles in the literature on the MMPI. Results highlight the premise that scores, not tests, are reliable or unreliable, and they show that study characteristics do influence scores on the…

  6. Covariate-free and Covariate-dependent Reliability.

    PubMed

    Bentler, Peter M

    2016-12-01

    Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.

  7. Assimilation of Remotely Sensed Soil Moisture Profiles into a Crop Modeling Framework for Reliable Yield Estimations

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J.; Mecikalski, J. R.

    2017-12-01

    Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSM). These efforts have normally been focused on the use of data derived from the microwave bands and results have often shown that improvements to model simulations have been limited due to the fact that microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave deduced surface estimates. In this study, root zone soil moisture estimates from the TIR based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, only requires a surface boundary condition as well as the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean and the entropy integral was used to tie the two together and produce the vertical profile. However, prior to the merging, the coarse resolution (9 km) SMAP data were downscaled to the finer resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach and again, all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (Decision Support System for Agrotechnology, DSSAT) via the ensemble Kalman Filter. 
The study was conducted over the Southeastern United States for the

  8. Declining water yield from forested mountain watersheds in response to climate change and forest mesophication.

    PubMed

    Caldwell, Peter V; Miniat, Chelcy F; Elliott, Katherine J; Swank, Wayne T; Brantley, Steven T; Laseter, Stephanie H

    2016-09-01

    Climate change and forest disturbances are threatening the ability of forested mountain watersheds to provide the clean, reliable, and abundant fresh water necessary to support aquatic ecosystems and a growing human population. Here, we used 76 years of water yield, climate, and field plot vegetation measurements in six unmanaged, reference watersheds in the southern Appalachian Mountains of North Carolina, USA to determine whether water yield has changed over time, and to examine and attribute the causal mechanisms of change. We found that annual water yield increased in some watersheds from 1938 to the mid-1970s by as much as 55%, but this was followed by decreases up to 22% by 2013. Changes in forest evapotranspiration were consistent with, but opposite in direction to the changes in water yield, with decreases in evapotranspiration up to 31% by the mid-1970s followed by increases up to 29% until 2013. Vegetation survey data showed commensurate reductions in forest basal area until the mid-1970s and increases since that time accompanied by a shift in dominance from xerophytic oak and hickory species to several mesophytic species (i.e., mesophication) that use relatively more water. These changes in forest structure and species composition may have decreased water yield by as much as 18% in a given year since the mid-1970s after accounting for climate. Our results suggest that changes in climate and forest structure and species composition in unmanaged forests brought about by disturbance and natural community dynamics over time can result in large changes in water supply. © 2016 John Wiley & Sons Ltd.

  9. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
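    The inverted-S relationship between weight and reliability p can be illustrated with a textbook stress-strength interference model (independent normal load and strength, weight taken as proportional to mean strength). This is a simplified sketch with invented numbers, not the SDO methodology itself:

    ```python
    import math

    def reliability(mu_s: float, mu_l: float, sd_s: float, sd_l: float) -> float:
        """P(strength > load) for independent normal strength and load."""
        z = (mu_s - mu_l) / math.sqrt(sd_s**2 + sd_l**2)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Illustrative numbers (not from the paper): load ~ N(100, 10); strength
    # scatter is 8% of its mean; structural weight is taken to scale with
    # mean strength.
    mu_l, sd_l = 100.0, 10.0
    for mu_s in (100.0, 120.0, 140.0, 180.0):
        p = reliability(mu_s, mu_l, 0.08 * mu_s, sd_l)
        print(f"mean strength (weight proxy) {mu_s:6.1f} -> reliability p = {p:.4f}")
    ```

    With load fixed, p = 0.5 when mean strength equals mean load, matching the center of the inverted-S curve, and reliability climbs steeply toward 1 only as mean strength (and hence weight) grows large.
    
    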

  10. Validity and reliability of Persian version of Listening Styles Profile-Revised (LSP- R) in Iranian students.

    PubMed

    Fatehi, Zahra; Baradaran, Hamid Reza; Asadpour, Mohamad; Rezaeian, Mohsen

    2017-01-01

    Background: Individuals' listening styles differ based on their characters, professions and situations. This study aimed to assess the validity and reliability of the Listening Styles Profile-Revised (LSP-R) in Iranian students. Methods: After translation into Persian, the LSP-R was administered to a sample of 240 medical and nursing Persian-speaking students in Iran. Statistical analysis was performed to test the reliability and validity of the LSP-R. Results: The study revealed high internal consistency and good test-retest reliability for the Persian version of the questionnaire. The Cronbach's alpha coefficient was 0.72 and the intra-class correlation coefficient was 0.87. The means for the content validity index and the content validity ratio (CVR) were 0.90 and 0.83, respectively. Exploratory factor analysis (EFA) yielded a four-factor solution that accounted for 60.8% of the observed variance. The majority of medical students (73%) as well as the majority of nursing students (70%) stated that their listening styles were task-oriented. Conclusion: In general, the study findings suggest that the Persian version of the LSP-R is a valid and reliable instrument for assessing listening styles in the studied sample.

  11. Estimation of rice yield affected by drought and relation between rice yield and TVDI

    NASA Astrophysics Data System (ADS)

    Hongo, C.; Tamura, E.; Sigit, G.

    2016-12-01

    The impact of climate change is seen not only in food production but also in food security and the sustainable development of society. Adaptation to climate change is a pressing issue throughout the world, requiring plans and strategies that reduce the risks to food security and sustainable development. As a key adaptation to climate change, agricultural insurance is expected to play an important role in stabilizing agricultural production by compensating for the losses caused by climate change. As such an adaptation, the Government of Indonesia has launched an agricultural insurance program covering damage to rice from drought, flood, and pests and diseases. The Government started a pilot project in 2013, and this year the pilot project has been extended to 22 provinces. Against this background, we conducted research on the development of a new damage assessment method for rice using remote sensing data, which could be used to evaluate the damage ratio caused by drought in West Java, Indonesia. For assessment of the damage ratio, estimation of rice yield is key. As a result of our study, rice yield affected by drought in the dry season could be estimated at the 1% significance level using SPOT 7 data taken in 2015, and the validation result was 0.8 t/ha. The decrease ratio in rice yield for each individual paddy field was then calculated using the estimated yield and the average yield of the past 10 years. In addition, the TVDI (Temperature Vegetation Dryness Index) calculated from Landsat 8 data in the heading season indicated dryness in the low-yield area. The result suggests that rice yield was affected by a shortage of irrigation water around the heading season, a consequence of decreased precipitation during El Niño. Our study makes clear that remote sensing data can be used to assess the damage ratio of rice production precisely, quickly and quantitatively, and that the results can be incorporated into insurance procedures.
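    The TVDI mentioned above is computed from the NDVI-LST feature space: dry and wet edges are fitted through the per-bin temperature extremes, and each pixel is placed between them. The sketch below is a hedged illustration on synthetic data with simple linear edge fits; it is not the authors' processing chain.

    ```python
    import numpy as np

    def tvdi(ndvi: np.ndarray, lst: np.ndarray, n_bins: int = 20) -> np.ndarray:
        """Temperature Vegetation Dryness Index from an NDVI/LST scatter.

        TVDI = (Ts - Ts_wet) / (Ts_dry - Ts_wet): 1 at the dry edge, 0 at the
        wet edge. Edges are linear fits through per-bin max/min temperatures.
        """
        bins = np.linspace(ndvi.min(), ndvi.max(), n_bins + 1)
        centers, t_max, t_min = [], [], []
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (ndvi >= lo) & (ndvi <= hi)
            if mask.any():
                centers.append(0.5 * (lo + hi))
                t_max.append(lst[mask].max())
                t_min.append(lst[mask].min())
        a_d, b_d = np.polyfit(centers, t_max, 1)   # dry edge
        a_w, b_w = np.polyfit(centers, t_min, 1)   # wet edge
        ts_dry = a_d * ndvi + b_d
        ts_wet = a_w * ndvi + b_w
        return (lst - ts_wet) / (ts_dry - ts_wet)

    # Synthetic scene: surface temperature falls with NDVI, plus noise
    rng = np.random.default_rng(0)
    ndvi = rng.uniform(0.1, 0.8, 500)
    lst = 320.0 - 25.0 * ndvi + rng.uniform(-5.0, 5.0, 500)
    d = tvdi(ndvi, lst)
    print(d.min(), d.max())
    ```

    Pixels near 1 lie along the dry edge (water-stressed), which is how a low-yield, drought-affected area would register.
    
    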

  12. The reliability evaluation of reclaimed water reused in power plant project

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Jia, Ru-sheng; Gao, Yu-lan; Wang, Wan-fen; Cao, Peng-qiang

    2017-12-01

    The reuse of reclaimed water has become one of the important measures to solve the shortage of water resources in many cities, but there is no unified way to evaluate such engineering projects. Concerning this issue, this study took the Wanneng power plant project in Huai city as an example and analyzed the reliability of wastewater reuse from the aspects of reclaimed water quality, the water quality of the sewage plant, the present sewage quantity in the city, and the forecast of reclaimed water yield; in particular, it was necessary to correct the actual operating flow rate of the sewage plant. The results showed that, in the context of fluctuating inlet water quality, the outlet water quality of the sewage treatment plant is basically stable and can meet the requirements for circulating cooling water, but suspended solids (SS) and total hardness in boiler water exceed the limits, so advanced treatment should be carried out. In addition, the total sewage discharge will reach 13.91×10⁴ m³/d and 14.21×10⁴ m³/d, respectively, in the two planning years of the project. These are greater than the normal collection capacity of the sewage system, which is 12.0×10⁴ m³/d, while the reclaimed water yield can reach 10.74×10⁴ m³/d, greater than the 8.25×10⁴ m³/d actually needed by the power plant. Thus the wastewater reuse of this sewage plant is feasible and reliable for the power plant from an engineering point of view.

  13. Measurement of impulsive choice in rats: Same and alternate form test-retest reliability and temporal tracking

    PubMed Central

    Peterson, Jennifer R.; Hill, Catherine C.; Kirkpatrick, Kimberly

    2016-01-01

    Impulsive choice is typically measured by presenting smaller-sooner (SS) versus larger-later (LL) rewards, with biases towards the SS indicating impulsivity. The current study tested rats on different impulsive choice procedures with LL delay manipulations to assess same-form and alternate-form test-retest reliability. In the systematic-GE procedure (Green & Estle, 2003), the LL delay increased after several sessions of training; in the systematic-ER procedure (Evenden & Ryan, 1996), the delay increased within each session; and in the adjusting-M procedure (Mazur, 1987), the delay changed after each block of trials within a session based on each rat's choices in the previous block. In addition to measuring choice behavior, we also assessed temporal tracking of the LL delays using the median times of responding during LL trials. The two systematic procedures yielded similar results in both choice and temporal tracking measures following extensive training, whereas the adjusting procedure resulted in relatively more impulsive choices and poorer temporal tracking. Overall, the three procedures produced acceptable same-form test-retest reliability over time, but the adjusting procedure did not show significant alternate-form test-retest reliability with the other two procedures. The results suggest that systematic procedures may supply better measurements of impulsive choice in rats. PMID:25490901

  14. Spectrally-Based Assessment of Crop Seasonal Performance and Yield

    NASA Astrophysics Data System (ADS)

    Kancheva, Rumiana; Borisova, Denitsa; Georgiev, Georgy

    The rapid advances of space technologies concern almost all scientific areas, from aeronautics to medicine, and a wide range of application fields, from communications to crop yield prediction. Agricultural monitoring is among the priorities of remote sensing observations for obtaining timely information on crop development. Monitoring agricultural fields during the growing season plays an important role in crop health assessment and stress detection, provided that reliable data are obtained. The application of hyperspectral data to precision farming, associated with plant growth and phenology monitoring, physiological state assessment, and yield prediction, is spreading successfully. In this paper, we investigated various spectral-biophysical relationships derived from in-situ reflectance measurements. The performance of spectral data for the assessment of agricultural crop condition and yield prediction was examined. The approach comprised the development of regression models between plant spectral and state-indicative variables such as biomass, vegetation cover fraction, leaf area index, etc., and the development of yield forecasting models from single-date (growth stage) and multitemporal (seasonal) reflectance data. Verification of the spectral predictions was performed through comparison with estimations from biophysical relationships between crop growth variables. The study was carried out for spring barley and winter wheat. Visible and near-infrared reflectance data were acquired throughout the growing season, accompanied by detailed datasets on plant phenology and canopy structural and biochemical attributes. Empirical relationships were derived relating crop agronomic variables and yield to various spectral predictors. The study findings were tested using airborne remote sensing inputs. A good correspondence was found between predicted and actual (ground-truth) estimates

  15. Sediment yield estimation in mountain catchments of the Camastra reservoir, southern Italy: a comparison among different empirical methods

    NASA Astrophysics Data System (ADS)

    Lazzari, Maurizio; Danese, Maria; Gioia, Dario; Piccarreta, Marco

    2013-04-01

    Sedimentary budget estimation is an important topic for both the scientific community and society, because it is crucial for understanding the dynamics of orogenic belts as well as for many practical problems, such as soil conservation and sediment accumulation in reservoirs. Estimations of sediment yield or denudation rates in southern-central Italy are generally obtained by simple empirical relationships based on statistical regression between geomorphic parameters of the drainage network and the suspended sediment yield measured at the outlet of several drainage basins, or through the use of models based on the sediment delivery ratio or on soil loss equations. In this work, we perform a study of catchment dynamics and an estimation of sediment yield for several mountain catchments of the central-western sector of the Basilicata region, southern Italy. Sediment yield estimation has been obtained both through an indirect estimation of suspended sediment yield based on the Tu index (mean annual suspended sediment yield; Ciccacci et al., 1980) and through the application of the RUSLE (Renard et al., 1997) and USPED (Mitasova et al., 1996) empirical methods. The preliminary results indicate a notable difference between the RUSLE and USPED methods and the estimation based on the Tu index; a critical analysis of the results has been carried out, considering also the present-day spatial distribution of erosion, transport, and depositional processes in relation to the maps obtained from the application of these different empirical methods. The studied catchments drain an artificial reservoir (the Camastra dam), for which a detailed evaluation of the amount of historical sediment storage has been collected. Sediment yield estimates obtained by means of the empirical methods have been compared and checked against historical data of sediment accumulation measured in the artificial reservoir of the Camastra dam. The validation of such estimations of sediment yield at the scale of large catchments
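    For reference, the RUSLE estimate referred to above is the product of five factors, A = R·K·LS·C·P (Renard et al., 1997). A minimal sketch of that calculation; the input values and units below are illustrative only, not data from the study:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE mean annual soil loss, A = R*K*LS*C*P (Renard et al., 1997).

    R  : rainfall-runoff erosivity factor (MJ mm ha^-1 h^-1 yr^-1)
    K  : soil erodibility factor (t ha h ha^-1 MJ^-1 mm^-1)
    LS : slope length-steepness factor (dimensionless)
    C  : cover-management factor (dimensionless)
    P  : support-practice factor (dimensionless)
    Returns A in t ha^-1 yr^-1.
    """
    return R * K * LS * C * P

# Hypothetical values for a single grid cell
print(rusle_soil_loss(R=1200.0, K=0.035, LS=2.5, C=0.12, P=1.0))
```

In a GIS workflow the same product is evaluated cell by cell over raster layers and summed over the catchment.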

  16. Climate Effects on Corn Yield in Missouri.

    NASA Astrophysics Data System (ADS)

    Hu, Qi; Buyanovsky, Gregory

    2003-11-01

    Understanding climate effects on crop yield has been a continuous endeavor aiming at improving farming technology and management strategy, minimizing negative climate effects, and maximizing positive climate effects on yield. Many studies have examined climate effects on corn yield in different regions of the United States. However, most of those studies used yield and climate records that were shorter than 10 years and were for different years and localities. Although results of those studies showed various influences of climate on corn yield, they could be time specific and have been difficult to use for deriving a comprehensive understanding of climate effects on corn yield. In this study, climate effects on corn yield in central Missouri are examined using unique long-term (1895-1998) datasets of both corn yield and climate. Major results show that the climate effects on corn yield can only be explained by within-season variations in rainfall and temperature and cannot be distinguished by average growing-season conditions. Moreover, the growing-season distributions of rainfall and temperature for high-yield years are characterized by less rainfall and warmer temperature in the planting period, a rapid increase in rainfall, and more rainfall and warmer temperatures during germination and emergence. More rainfall and cooler-than-average temperatures are key features in the anthesis and kernel-filling periods from June through August, followed by less rainfall and warmer temperatures during the September and early October ripening time. Opposite variations in rainfall and temperature in the growing season correspond to low yield. Potential applications of these results in understanding how climate change may affect corn yield in the region also are discussed.

  17. Degradation spectra and ionization yields of electrons in gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inokuti, M.; Douthat, D.A.; Rau, A.R.P.

    1975-01-01

    Progress in the microscopic theory of electron degradation in gases by Platzman, Fano, and co-workers is outlined. The theory consists of (1) the cataloging of all major inelastic-collision cross sections for electrons (including the secondary-electron energy distribution in a single ionizing collision) and (2) the evaluation of the cumulative consequences of individual electron collisions for the electrons themselves as well as for the target molecules. For assessing data consistency and reliability and for extrapolating the data to unexplored ranges of variables (such as electron energy), a series of plots devised by Platzman is very powerful. Electron degradation spectra were obtained through numerical solution of the Spencer-Fano equation for all electron energies down to the first ionization thresholds for a few examples such as He and Ne. The systematics of the solutions resulted in the recognition of approximate scaling properties of the degradation spectra for different initial electron energies and pointed to new methods of more efficient treatment. Systematics of the ionization yields and their dependence on the initial electron energy were also recognized. Finally, the Spencer-Fano equation for the degradation spectra and the Fowler equation for the ionization and other yields are tightly linked with each other by a set of variational principles. (52 references, 7 figures) (DLC)

  18. Comparative Reliability of Structured Versus Unstructured Interviews in the Admission Process of a Residency Program

    PubMed Central

    Blouin, Danielle; Day, Andrew G.; Pavlov, Andrey

    2011-01-01

    Background Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. Methods In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Results Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. Conclusions A structured interview did not yield a higher overall reliability than both unstructured interviews. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. Both unstructured panels consistently rated a single dimension, even when
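    A common estimate of the interitem consistency discussed above is Cronbach's alpha, computed from a candidates-by-items score matrix. A minimal sketch under that assumption; the toy ratings are hypothetical, not data from the study:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_candidates x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of candidates' totals
    return k / (k - 1) * (1 - item_var / total_var)

# Toy data: 4 candidates rated on 3 interview items
ratings = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]]
print(round(cronbach_alpha(ratings), 3))
```

Low alpha with high interrater agreement, as reported for the structured interview, signals that the items measure more than one dimension rather than that the raters disagree.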

  19. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    In a company producing refined sugar, a problem appears on the production floor: the target availability of the critical machines has not been reached because they frequently suffer damage (breakdowns). This results in sudden losses of production time and production opportunities. The problem can be addressed with the Reliability Engineering method, in which a statistical approach to historical failure data is used to identify the pattern of the distribution. The method can provide the reliability, failure rate, and availability of a machine over a scheduled maintenance interval. The distribution test on the time-between-failures (MTTF) data gave a lognormal distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component, while the distribution test on the mean time to repair (MTTR) data gave an exponential distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component. On a replacement schedule of every 720 hours, the flexible hose component has an actual reliability of 0.2451 and an availability of 0.9960, while on a replacement schedule of every 1944 hours, the critical teflon cone lifting component has an actual reliability of 0.4083 and an availability of 0.9927.
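    The reliability and availability figures above can be reproduced in form (not in value) from the fitted distributions. A sketch assuming a two-parameter Weibull time-to-failure model, R(t) = exp(-(t/η)^β), and the steady-state availability A = MTTF/(MTTF + MTTR); the shape, scale, and MTTR values below are hypothetical, not the study's fitted parameters:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta): probability of surviving to time t (hours)."""
    return math.exp(-((t / eta) ** beta))

def availability(mttf, mttr):
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

# Hypothetical parameters for a component replaced every 1944 hours
print(weibull_reliability(t=1944, beta=1.8, eta=2500))
print(availability(mttf=1944, mttr=14))
```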

  20. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1987-01-01

    Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
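    The voting step described above can be sketched as a simple majority voter over the N version outputs. This is a generic illustration of the principle, not the experiment's actual voter:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the majority output among N program versions, or None if no
    strict majority exists (which the system must treat as a failure).

    outputs: list of results, one per independently developed version.
    """
    winner, count = Counter(outputs).most_common(1)[0]
    return winner if count > len(outputs) / 2 else None

# Three of four versions agree; the majority result is used by the system
print(majority_vote([42, 42, 41, 42]))   # -> 42
# An even split yields no majority
print(majority_vote([1, 2]))             # -> None
```

Note that a voter of this kind assumes exact output comparison; real N-version systems often need tolerance-based comparison for floating-point outputs.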

  1. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    ERIC Educational Resources Information Center

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…

  2. Yield gaps and yield relationships in US soybean production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield to meet the food demands of future populations. Quantile regression analysis was applied to county soybean [Glycine max (L.) Merrill] yields (1971 – 2011) from Kentuc...

  3. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying existing software reliability models and proposes a state-of-the-art software reliability model relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  4. Software reliability models for critical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, H.; Pham, M.

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying existing software reliability models and proposes a state-of-the-art software reliability model relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  5. Reliability and validity of a nutrition and physical activity environmental self-assessment for child care

    PubMed Central

    Benjamin, Sara E; Neelon, Brian; Ball, Sarah C; Bangdiwala, Shrikant I; Ammerman, Alice S; Ward, Dianne S

    2007-01-01

    Background Few assessment instruments have examined the nutrition and physical activity environments in child care, and none are self-administered. Given the emerging focus on child care settings as a target for intervention, a valid and reliable measure of the nutrition and physical activity environment is needed. Methods To measure inter-rater reliability, 59 child care center directors and 109 staff completed the self-assessment concurrently, but independently. Three weeks later, a repeat self-assessment was completed by a sub-sample of 38 directors to assess test-retest reliability. To assess criterion validity, a researcher-administered environmental assessment was conducted at 69 centers and was compared to a self-assessment completed by the director. A weighted kappa test statistic and percent agreement were calculated to assess agreement for each question on the self-assessment. Results For inter-rater reliability, kappa statistics ranged from 0.20 to 1.00 across all questions. Test-retest reliability of the self-assessment yielded kappa statistics that ranged from 0.07 to 1.00. The inter-quartile kappa statistic ranges for inter-rater and test-retest reliability were 0.45 to 0.63 and 0.27 to 0.45, respectively. When percent agreement was calculated, questions ranged from 52.6% to 100% for inter-rater reliability and 34.3% to 100% for test-retest reliability. Kappa statistics for validity ranged from -0.01 to 0.79, with an inter-quartile range of 0.08 to 0.34. Percent agreement for validity ranged from 12.9% to 93.7%. Conclusion This study provides estimates of criterion validity, inter-rater reliability and test-retest reliability for an environmental nutrition and physical activity self-assessment instrument for child care. Results indicate that the self-assessment is a stable and reasonably accurate instrument for use with child care interventions. We therefore recommend the Nutrition and Physical Activity Self-Assessment for Child Care (NAP SACC

  6. Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration

    PubMed Central

    Luan, Lan; Wei, Xiaoling; Zhao, Zhengtuo; Siegel, Jennifer J.; Potnis, Ojas; Tuppen, Catherine A; Lin, Shengqing; Kazmi, Shams; Fowler, Robert A.; Holloway, Stewart; Dunn, Andrew K.; Chitwood, Raymond A.; Xie, Chong

    2017-01-01

    Implanted brain electrodes constitute the only means to electrically interface with individual neurons in vivo, but their recording efficacy and biocompatibility pose limitations on scientific and clinical applications. We showed that nanoelectronic thread (NET) electrodes with subcellular dimensions, ultraflexibility, and cellular surgical footprints form reliable, glial scar-free neural integration. We demonstrated that NET electrodes reliably detected and tracked individual units for months; their impedance, noise level, single-unit recording yield, and signal amplitude remained stable during long-term implantation. In vivo two-photon imaging and postmortem histological analysis revealed seamless, subcellular integration of NET probes with the local cellular and vasculature networks, featuring fully recovered capillaries with an intact blood-brain barrier and a complete absence of chronic neuronal degradation and glial scarring. PMID:28246640

  7. Interaction Between Phosphorus and Zinc on the Biomass Yield and Yield Attributes of the Medicinal Plant Stevia (Stevia rebaudiana)

    PubMed Central

    Das, Kuntal; Dang, Raman; Shivananda, T. N.; Sur, Pintu

    2005-01-01

    A greenhouse experiment was conducted at the Indian Institute of Horticultural Research (IIHR), Bangalore, to study the interaction effect between phosphorus (P) and zinc (Zn) on the yield and yield attributes of the medicinal plant stevia. The results show that the yield and yield attributes were significantly affected by the different treatments. The total yield in terms of biomass production increased significantly with the application of Zn and P in different combinations and methods, being highest (23.34 g fresh biomass) in the treatment where Zn was applied both to the soil (10 kg ZnSO4/ha) and as a foliar spray (0.2% ZnSO4). The results also showed that the different yield attributes, viz. height, total number of branches, and number of leaves per plant, varied with the treatments, being highest in the treatment where Zn was applied as both soil and foliar spray without the application of P. The results further indicated that the yield and yield attributes of stevia decreased in the treatment where Zn was applied as both soil and foliar spray along with P, suggesting an antagonistic effect between Zn and P. PMID:15915292

  8. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    PubMed

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports a negative binomial ICC estimate that includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.

  9. Transient-evoked and distortion product otoacoustic emissions: A short-term test-retest reliability study.

    PubMed

    Keppler, Hannah; Dhooge, Ingeborg; Maes, Leen; D'haenens, Wendy; Bockstael, Annelies; Philips, Birgit; Swinnen, Freya; Vinck, Bart

    2010-02-01

    Knowledge regarding the variability of transient-evoked otoacoustic emissions (TEOAEs) and distortion product otoacoustic emissions (DPOAEs) is essential in clinical settings and improves their utility in monitoring hearing status over time. In the current study, TEOAEs and DPOAEs were measured with commercially available OAE equipment in 56 normal-hearing ears during three sessions. Reliability was analysed for the retest measurement without probe refitting, the immediate retest measurement with probe refitting, and retest measurements after one hour and one week. The highest reliability was obtained in the retest measurement without probe refitting, and reliability decreased with increasing time interval between measurements. For TEOAEs, the lowest reliability was seen at the half-octave frequency bands of 1.0 and 1.4 kHz, whereas for DPOAEs the 8.0 kHz half-octave frequency band also had poor reliability. A higher primary-tone level combination for DPOAEs yielded better reliability of DPOAE amplitudes. External environmental noise seemed to be the dominant noise source in normal-hearing subjects, decreasing the reliability of emission amplitudes, especially in the low-frequency region.

  10. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    NASA Astrophysics Data System (ADS)

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata

    2016-09-01

    Gaining an understanding of degradation mechanisms and their characterization is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin-film flexible PV modules, the framework and methodology can be adapted to other PV products.
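    A typical acceleration factor model of the kind described, for temperature-driven mechanisms such as those active in Damp Heat testing, is the Arrhenius form. A sketch under that assumption; the activation energy and temperatures below are common illustrative values, not the paper's fitted models:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_test_c, ea_ev):
    """Arrhenius acceleration factor between field and test temperatures.

    AF = exp((Ea/k) * (1/T_use - 1/T_test)), with temperatures in kelvin.
    One hour at the test temperature then ages the part like AF hours in the field.
    """
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# e.g. 85 C chamber testing vs a 25 C field environment, Ea assumed 0.7 eV
print(arrhenius_af(t_use_c=25.0, t_test_c=85.0, ea_ev=0.7))
```

Humidity-dependent mechanisms usually add a humidity term (e.g. a Peck model); a purely thermal form like this is only part of the picture for Damp Heat.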

  11. Examination of Anomalous World Experience: A Report on Reliability.

    PubMed

    Conerty, Joseph; Skodlar, Borut; Pienkos, Elizabeth; Zadravek, Tina; Byrom, Greg; Sass, Louis

    2017-01-01

    The EAWE (Examination of Anomalous World Experience) is a newly developed, semi-structured interview that aims to capture anomalies of subjectivity, common in schizophrenia spectrum disorders, that pertain to experiences of the lived world, including space, time, people, language, atmosphere, and certain existential attitudes. By contrast, previous empirical studies of subjective experience in schizophrenia have focused largely on disturbances in self-experience. The aim of this report is to assess the reliability of the EAWE, including internal consistency and interrater reliability. In the course of developing the EAWE, two distinct studies were conducted, one in the United States and the other in Slovenia. Thirteen patients diagnosed with schizophrenia spectrum or mood disorders were recruited for the US study; fifteen such patients were recruited for the Slovenian study. Two live interviewers conducted the EAWE in the US. The Slovenian interviews were completed by one live interviewer, with a second rater reviewing audio recordings of the interviews. Internal consistency and interrater reliability were calculated independently for each study, utilizing Cronbach's α, Spearman's ρ, and Cohen's κ. Each study yielded high internal consistency (Cronbach's α > 0.82) and high interrater reliability for total EAWE scores (ρ > 0.83; average κ values were at least 0.78 for each study, with EAWE domain-specific κ no lower than 0.73). The EAWE, containing world-oriented inquiries into anomalies of subjective experience, has adequate reliability for use in clinical or research settings. © 2017 S. Karger AG, Basel.

  12. Nut crop yield records show that budbreak-based chilling requirements may not reflect yield decline chill thresholds

    NASA Astrophysics Data System (ADS)

    Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.

    2015-06-01

    Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond ( Prunus dulcis), pistachio ( Pistacia vera), and walnut ( Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32 % higher than the chilling accumulations associated with average or above average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.

  13. Changing forest water yields in response to climate warming: results from long-term experimental watershed sites across North America.

    PubMed

    Creed, Irena F; Spargo, Adam T; Jones, Julia A; Buttle, Jim M; Adams, Mary B; Beall, Fred D; Booth, Eric G; Campbell, John L; Clow, Dave; Elder, Kelly; Green, Mark B; Grimm, Nancy B; Miniat, Chelcy; Ramlal, Patricia; Saha, Amartya; Sebestyen, Stephen; Spittlehouse, Dave; Sterling, Shannon; Williams, Mark W; Winkler, Rita; Yao, Huaxia

    2014-10-01

    Climate warming is projected to affect forest water yields but the effects are expected to vary. We investigated how forest type and age affect water yield resilience to climate warming. To answer this question, we examined the variability in historical water yields at long-term experimental catchments across Canada and the United States over 5-year cool and warm periods. Using the theoretical framework of the Budyko curve, we calculated the effects of climate warming on the annual partitioning of precipitation (P) into evapotranspiration (ET) and water yield. Deviation (d) was defined as a catchment's change in actual ET divided by P [AET/P; evaporative index (EI)] coincident with a shift from a cool to a warm period - a positive d indicates an upward shift in EI and smaller than expected water yields, and a negative d indicates a downward shift in EI and larger than expected water yields. Elasticity was defined as the ratio of interannual variation in potential ET divided by P (PET/P; dryness index) to interannual variation in the EI - high elasticity indicates low d despite large range in drying index (i.e., resilient water yields), low elasticity indicates high d despite small range in drying index (i.e., nonresilient water yields). Although the data needed to fully evaluate ecosystems based on these metrics are limited, we were able to identify some characteristics of response among forest types. Alpine sites showed the greatest sensitivity to climate warming with any warming leading to increased water yields. Conifer forests included catchments with lowest elasticity and stable to larger water yields. Deciduous forests included catchments with intermediate elasticity and stable to smaller water yields. Mixed coniferous/deciduous forests included catchments with highest elasticity and stable water yields. Forest type appeared to influence the resilience of catchment water yields to climate warming, with conifer and deciduous catchments more susceptible to
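    The deviation and elasticity metrics defined above can be sketched directly from annual P, AET, and PET series. The formulas below paraphrase the abstract's verbal definitions, so treat this as an assumed form; the sample data are invented:

```python
import numpy as np

def budyko_metrics(p, aet, pet, warm_mask):
    """Deviation d and elasticity in the (assumed) form described in the abstract.

    p, aet, pet : annual precipitation, actual ET, and potential ET series
    warm_mask   : boolean array marking warm-period years (False = cool period)
    """
    p, aet, pet = map(np.asarray, (p, aet, pet))
    ei = aet / p                        # evaporative index, AET/P
    di = pet / p                        # dryness index, PET/P
    # d: shift in mean EI from the cool period to the warm period
    d = ei[warm_mask].mean() - ei[~warm_mask].mean()
    # elasticity: interannual range of the dryness index relative to EI's range
    elasticity = np.ptp(di) / np.ptp(ei)
    return d, elasticity

# Invented annual series: three cool years followed by three warm years
p    = [1000, 950, 1050, 900, 980, 1020]
aet  = [500, 490, 510, 520, 530, 540]
pet  = [700, 720, 690, 800, 820, 790]
warm = np.array([False, False, False, True, True, True])
d, elasticity = budyko_metrics(p, aet, pet, warm)
print(d, elasticity)
```

With these invented data, d is positive (an upward EI shift, i.e. smaller than expected water yields) and elasticity exceeds 1, which the abstract would classify as relatively resilient.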

  14. Changing forest water yields in response to climate warming: results from long-term experimental watershed sites across North America

    PubMed Central

    Creed, Irena F; Spargo, Adam T; Jones, Julia A; Buttle, Jim M; Adams, Mary B; Beall, Fred D; Booth, Eric G; Campbell, John L; Clow, Dave; Elder, Kelly; Green, Mark B; Grimm, Nancy B; Miniat, Chelcy; Ramlal, Patricia; Saha, Amartya; Sebestyen, Stephen; Spittlehouse, Dave; Sterling, Shannon; Williams, Mark W; Winkler, Rita; Yao, Huaxia

    2014-01-01

    Climate warming is projected to affect forest water yields but the effects are expected to vary. We investigated how forest type and age affect water yield resilience to climate warming. To answer this question, we examined the variability in historical water yields at long-term experimental catchments across Canada and the United States over 5-year cool and warm periods. Using the theoretical framework of the Budyko curve, we calculated the effects of climate warming on the annual partitioning of precipitation (P) into evapotranspiration (ET) and water yield. Deviation (d) was defined as a catchment's change in actual ET divided by P [AET/P; evaporative index (EI)] coincident with a shift from a cool to a warm period – a positive d indicates an upward shift in EI and smaller than expected water yields, and a negative d indicates a downward shift in EI and larger than expected water yields. Elasticity was defined as the ratio of interannual variation in potential ET divided by P (PET/P; dryness index) to interannual variation in the EI – high elasticity indicates low d despite large range in drying index (i.e., resilient water yields), low elasticity indicates high d despite small range in drying index (i.e., nonresilient water yields). Although the data needed to fully evaluate ecosystems based on these metrics are limited, we were able to identify some characteristics of response among forest types. Alpine sites showed the greatest sensitivity to climate warming with any warming leading to increased water yields. Conifer forests included catchments with lowest elasticity and stable to larger water yields. Deciduous forests included catchments with intermediate elasticity and stable to smaller water yields. Mixed coniferous/deciduous forests included catchments with highest elasticity and stable water yields. Forest type appeared to influence the resilience of catchment water yields to climate warming, with conifer and deciduous catchments more susceptible to
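
    The deviation (d) and elasticity metrics defined in this abstract can be expressed compactly. Below is a minimal sketch (not from the paper; function names and the illustrative inputs are hypothetical), computing d as the shift in the mean evaporative index between cool and warm periods, and elasticity as the ratio of interannual ranges of the dryness and evaporative indices:

```python
import numpy as np

def evaporative_index(aet, p):
    """EI = AET / P for each year."""
    return np.asarray(aet, float) / np.asarray(p, float)

def deviation(aet_cool, p_cool, aet_warm, p_warm):
    """d = mean EI (warm period) minus mean EI (cool period).
    Positive d -> upward EI shift, smaller-than-expected water yields."""
    return (evaporative_index(aet_warm, p_warm).mean()
            - evaporative_index(aet_cool, p_cool).mean())

def elasticity(pet, aet, p):
    """Ratio of the interannual range in the dryness index (PET/P)
    to the interannual range in the evaporative index (AET/P)."""
    di = np.asarray(pet, float) / np.asarray(p, float)
    ei = evaporative_index(aet, p)
    return (di.max() - di.min()) / (ei.max() - ei.min())
```

With these conventions, a catchment whose EI barely moves despite a large swing in dryness index scores a high elasticity (resilient water yields).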

  15. Statistical emulators of maize, rice, soybean and wheat yields from global gridded crop models

    DOE PAGES

    Blanc, Élodie

    2017-01-26

    This study provides statistical emulators of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track project. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly summer weather variables for over a century at the grid cell level globally. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields, temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather, especially for extreme temperature and precipitation events, and now accounts for the effect of soil type. In- and out-of-sample validations show that the statistical emulators are able to replicate spatial patterns of crop yield levels and changes over time projected by crop models reasonably well, although the accuracy of the emulators varies by model and by region. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. By emulating crop yields for several models using parsimonious equations, the tools provide a computationally efficient method to account for uncertainty in climate change impact assessments.
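
    As a rough illustration of the emulation idea (a sketch under an assumed functional form, not the paper's actual specification), one can fit a concave polynomial response of yield to temperature and precipitation, linear in CO2, by ordinary least squares against synthetic "crop model" output:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
T = rng.uniform(15, 35, n)      # growing-season temperature (degC), synthetic
P = rng.uniform(200, 900, n)    # growing-season precipitation (mm), synthetic
CO2 = rng.uniform(350, 550, n)  # CO2 concentration (ppm), synthetic

# Stand-in for gridded crop model output: concave in T and P, linear in CO2
y = (8 - 0.02 * (T - 25)**2 - 1e-5 * (P - 600)**2
     + 0.004 * CO2 + rng.normal(0, 0.1, n))

# Parsimonious emulator: quadratic in T and P, linear in CO2
X = np.column_stack([np.ones(n), T, T**2, P, P**2, CO2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - ((y - pred)**2).sum() / ((y - y.mean())**2).sum()  # in-sample fit
```

The quadratic terms let the emulator capture yield declines at both temperature extremes, which a purely linear fit would miss.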

  16. Statistical emulators of maize, rice, soybean and wheat yields from global gridded crop models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanc, Élodie

    This study provides statistical emulators of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track project. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly summer weather variables for over a century at the grid cell level globally. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields, temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather,more » especially for extreme temperature and precipitation events, and now accounts for the effect of soil type. In- and out-of-sample validations show that the statistical emulators are able to replicate spatial patterns of yields crop levels and changes overtime projected by crop models reasonably well, although the accuracy of the emulators varies by model and by region. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. By emulating crop yields for several models using parsimonious equations, the tools provide a computationally efficient method to account for uncertainty in climate change impact assessments.« less

  17. Sample-averaged biexciton quantum yield measured by solution-phase photon correlation.

    PubMed

    Beyler, Andrew P; Bischof, Thomas S; Cui, Jian; Coropceanu, Igor; Harris, Daniel K; Bawendi, Moungi G

    2014-12-10

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.
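
    In the low-excitation-flux limit, this solution-phase measurement relates the sample-averaged biexciton-to-exciton quantum-yield ratio to the ratio of the zero-delay (center) peak to the repetition-rate (side) peaks of the photon correlation histogram. A schematic helper (a deliberate simplification; real analyses also correct for background and excitation flux):

```python
import numpy as np

def bx_x_ratio(center_counts, side_counts):
    """Estimate the sample-averaged biexciton-to-exciton quantum-yield
    ratio as center-peak coincidences divided by the mean side-peak
    coincidences (low-flux approximation)."""
    return center_counts / np.mean(side_counts)
```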

  18. Sample-Averaged Biexciton Quantum Yield Measured by Solution-Phase Photon Correlation

    PubMed Central

    Beyler, Andrew P.; Bischof, Thomas S.; Cui, Jian; Coropceanu, Igor; Harris, Daniel K.; Bawendi, Moungi G.

    2015-01-01

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. Here, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals. PMID:25409496

  19. Test-Retest Reliability of Memory Task fMRI in Alzheimer’s Disease Clinical Trials

    PubMed Central

    Atri, Alireza; O’Brien, Jacqueline L.; Sreenivasan, Aishwarya; Rastegar, Sarah; Salisbury, Sibyl; DeLuca, Amy N.; O’Keefe, Kelly M.; LaViolette, Peter S.; Rentz, Dorene M.; Locascio, Joseph J.; Sperling, Reisa A.

    2012-01-01

    Objective To examine feasibility and test-retest reliability of encoding-task functional MRI (fMRI) in mild Alzheimer’s disease (AD). Design Randomized, double-blind, placebo-controlled (RCT) study. Setting Memory clinical trials unit. Participants Twelve subjects with mild AD (MMSE 24.0±0.7, CDR 1), on >6 months stable donepezil, from the placebo-arm of a larger 24-week (n=24, four scans on weeks 0,6,12,24) study. Interventions Placebo and three face-name paired-associate encoding, block-design BOLD-fMRI scans in 12 weeks. Main Outcomes Whole-brain t-maps (p<0.001, 5-contiguous voxels) and hippocampal regions-of-interest (ROI) analyses of extent (EXT, %voxels active) and magnitude (MAG, %signal change) for Novel-greater-than-Repeated (N>R) face-name contrasts. Calculation of Intraclass Correlations (ICC) and power estimates for hippocampal ROIs. Results Task-tolerability and data yield were high (95 of 96 scans yield good quality data). Whole-brain maps were stable. Right and left hippocampal ROI ICCs were 0.59–0.87 and 0.67–0.74, respectively. To detect 25–50% changes in 0–12 week hippocampal activity using L/R-EXT or R-MAG with 80% power (2-sided-α=0.05) requires 14–51 subjects. Using L-MAG requires >125 subjects due to relatively small signals to variance ratios. Conclusions Encoding-task fMRI was successfully implemented in a single-site, 24-week, AD RCT. Week 0–12 whole-brain t-maps were stable and test-retest reliability of hippocampal fMRI measures ranged from moderate to substantial. Right hippocampal-MAG may be the most promising of these candidate measures in a leveraged context. These initial estimates of test-retest reliability and power justify evaluation of encoding-task fMRI as a potential biomarker for “signal-of-effect” in exploratory and proof-of-concept trials in mild AD. Validation of these results with larger sample sizes and assessment in multi-site studies is warranted. PMID:21555634
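
    The hippocampal test-retest reliabilities reported above are intraclass correlations. A self-contained sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single measures) is given below; the study does not state which ICC form it used, so this is illustrative only:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measures. `data` is an (n subjects x k sessions/raters) matrix."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    ms_r = k * ((data.mean(axis=1) - grand)**2).sum() / (n - 1)  # subjects
    ms_c = n * ((data.mean(axis=0) - grand)**2).sum() / (k - 1)  # sessions
    resid = (data - data.mean(1, keepdims=True)
                  - data.mean(0, keepdims=True) + grand)
    ms_e = (resid**2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfect agreement across sessions gives an ICC of 1; a constant session offset lowers it, since ICC(2,1) penalizes absolute disagreement.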

  20. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  1. The impact of Global Warming on global crop yields due to changes in pest pressure

    NASA Astrophysics Data System (ADS)

    Battisti, D. S.; Tewksbury, J. J.; Deutsch, C. A.

    2011-12-01

    A billion people currently lack reliable access to sufficient food and almost half of the calories feeding these people come from just three crops: rice, maize, wheat. Insect pests are among the largest factors affecting the yield of these three crops, but models assessing the effects of global warming on crops rarely consider changes in insect pest pressure on crop yields. We use well-established relationships between temperature and insect physiology to project climate-driven changes in pest pressure, defined as integrated population metabolism, for the three major crops. By the middle of this century, under most scenarios, insect pest pressure is projected to increase by more than 50% in temperate areas, while increases in tropical regions will be more modest. Yield relationships indicate that the largest increases in insect pest pressure are likely to occur in areas where yield is greatest, suggesting increased strain on global food markets.

  2. Orbiter Autoland reliability analysis

    NASA Technical Reports Server (NTRS)

    Welch, D. Phillip

    1993-01-01

    The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
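
    A reliability block diagram reduces a system to series blocks (all must work) and parallel blocks (redundancy: all must fail). A minimal sketch of the arithmetic (the configuration and numbers are hypothetical, not the Orbiter's):

```python
def series(*r):
    """Series blocks: system works only if every block works."""
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(*r):
    """Redundant blocks: system fails only if every block fails."""
    out = 1.0
    for x in r:
        out *= (1.0 - x)
    return 1.0 - out

# Hypothetical example: two redundant computers feeding one actuator channel
r_system = series(parallel(0.95, 0.95), 0.99)
```

Note how redundancy raises the pair's reliability (0.9975) above either unit alone, while the single series element (0.99) then caps the system.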

  3. Yield surface evolution for columnar ice

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiwei; Ma, Wei; Zhang, Shujuan; Mu, Yanhu; Zhao, Shunpin; Li, Guoyu

    A series of triaxial compression tests, capable of measuring the volumetric strain of the sample, was conducted on columnar ice. A new testing approach of probing the experimental yield surface from a single sample was used in order to investigate the yield and hardening behaviors of columnar ice under complex stress states. Based on the characteristics of the volumetric strain, a new method of defining the multiaxial yield strengths of the columnar ice is proposed. The experimental yield surface remains elliptical in shape in the stress space of effective stress versus mean stress. The effects of temperature, loading rate, and loading path on the initial yield surface and deformation properties of the columnar ice were also studied. Subsequent yield surfaces of the columnar ice were explored using uniaxial and hydrostatic paths. The evolution of the subsequent yield surface exhibits significant path-dependent characteristics. The multiaxial hardening law of the columnar ice was established experimentally. A phenomenological yield criterion is presented for the multiaxial yield and hardening behaviors of the columnar ice. Comparisons between the theoretical and measured results indicate that the current model is capable of giving a reasonable prediction of the multiaxial yield and post-yield properties of columnar ice subjected to different temperatures, loading rates, and path conditions.

  4. Validity and reliability of the Persian version of mobile phone addiction scale

    PubMed Central

    Mazaheri, Maryam Amidi; Karbasi, Mojtaba

    2014-01-01

    Background: With regard to the large number of mobile phone users, especially among college students in Iran, addiction to the mobile phone is attracting increasing concern. There is an urgent need for a reliable and valid instrument to measure this phenomenon. This study examines the validity and reliability of the Persian version of the mobile phone addiction scale (MPAIS) in college students. Materials and Methods: This methodological study was done in Isfahan University of Medical Sciences. One thousand one hundred and eighty students were selected by convenience sampling. The English version of the MPAI questionnaire was translated into Persian with the approach of Jones et al. (Challenges in language, culture, and modality: Translating English measures into American Sign Language. Nurs Res 2006; 55: 75-81). Its reliability was tested by Cronbach's alpha and its dimensionality validity was evaluated using Pearson correlation coefficients with other measures of mobile phone use and IAT. Construct validity was evaluated using exploratory subscale analysis. Results: Cronbach's alpha of 0.86 was obtained for the total PMPAS; for subscale 1 (eight items) it was 0.84, for subscale 2 (five items) it was 0.81, and for subscale 3 (two items) it was 0.77. There were significantly positive correlations between the score of the PMPAS and the IAT (r = 0.453, P < 0.001) and other measures of mobile phone use. Principal component subscale analysis yielded a three-subscale structure including: inability to control craving; feeling anxious and lost; and mood improvement, which accounted for 60.57% of total variance. The results of discriminant validity showed that all the items' correlations with their related subscale were greater than 0.5 and correlations with unrelated subscales were less than 0.5. Conclusion: Considering the lack of a valid and reliable questionnaire for measuring addiction to the mobile phone, the PMPAS could be a suitable instrument for measuring mobile phone addiction in future research. PMID:24778668
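
    The internal-consistency statistic reported above is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch (illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```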

  5. Reliability and smallest real difference of the ankle lunge test post ankle fracture.

    PubMed

    Simondson, David; Brock, Kim; Cotton, Susan

    2012-02-01

    This study aimed to determine the reliability and the smallest real difference of the Ankle Lunge test in an ankle fracture patient population. In the post-immobilisation stage of ankle fracture, ankle dorsiflexion is an important measure of progress and outcome. The Ankle Lunge test measures weight-bearing dorsiflexion, resulting in negative scores (knee to wall distance) and positive scores (toe to wall distance), of which the latter has proven reliability in normal subjects only. A consecutive sample of ankle fracture patients with permission to commence weight bearing was recruited to the study. Three measurements of the Ankle Lunge test were performed by each of two raters, one senior and one junior physiotherapist. These occurred prior to therapy sessions in the second week after plaster removal. A standardised testing station was utilised and allowed for both knee to wall distance and toe to wall distance measurement. Data were collected from 10 individuals with ankle fracture, with an average age of 36 years (SD 14.8). Seventy-seven percent of observations were negative. Intra- and inter-rater reliability yielded intraclass correlations at or above 0.97, p < .001. There was a significant systematic bias towards improved scores during repeated measurement for one rater (p = .01). The smallest real difference was calculated as 13.8 mm. The Ankle Lunge test is a practical and reliable tool for measuring weight-bearing dorsiflexion post ankle fracture. Copyright © 2011 Elsevier Ltd. All rights reserved.
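
    A smallest real difference such as the one reported here is conventionally derived from the standard error of measurement, SEM = SD * sqrt(1 - ICC), as SRD = 1.96 * sqrt(2) * SEM. A one-line sketch (the input values below are illustrative, not the study's data):

```python
import math

def smallest_real_difference(sd, icc):
    """SRD = 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC).
    Changes smaller than the SRD are indistinguishable from
    measurement noise at the 95% level."""
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem
```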

  6. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  7. Sediment yields of streams in the Umpqua River Basin, Oregon

    USGS Publications Warehouse

    Curtiss, D.A.

    1975-01-01

    This report summarizes sediment data collected at 11 sites in the Umpqua River basin from 1956 to 1973 and updates a report by C. A. Onions (1969) of estimated sediment yields in the basin from 1956-67. Onions' report points out that the suspended-sediment data, collected during the 1956-67 period, were insufficient to compute reliable sediment yields. Therefore, the U.S. Geological Survey, in cooperation with Douglas County, collected additional data from 1969 to 1973 to improve the water discharge-sediment discharge relationships at these sites. These data are published in "Water resources data for Oregon, Part 2, Water quality records," 1970 through 1973 water years. In addition to the 10 original sites, data were collected during this period from the Umpqua River near Elkton station, and a summary of the data for that station is included in table 1.

  8. Between-Person and Within-Person Subscore Reliability: Comparison of Unidimensional and Multidimensional IRT Models

    ERIC Educational Resources Information Center

    Bulut, Okan

    2013-01-01

    The importance of subscores in educational and psychological assessments is undeniable. Subscores yield diagnostic information that can be used for determining how each examinee's abilities/skills vary over different content domains. One of the most common criticisms about reporting and using subscores is insufficient reliability of subscores.…

  9. High-Yield Synthesis of Stoichiometric Boron Nitride Nanostructures

    DOE PAGES

    Nocua, José E.; Piazza, Fabrice; Weiner, Brad R.; ...

    2009-01-01

    Boron nitride (BN) nanostructures are structural analogues of carbon nanostructures but have completely different bonding character and structural defects. They are chemically inert, electrically insulating, and potentially important in mechanical applications that include the strengthening of light structural materials. These applications require the reliable production of bulk amounts of pure BN nanostructures in order to be able to reinforce large quantities of structural materials, hence the need for the development of high-yield synthesis methods of pure BN nanostructures. Using borazine (B3N3H6) as chemical precursor and the hot-filament chemical vapor deposition (HFCVD) technique, pure BN nanostructures with cross-sectional sizes ranging between 20 and 50 nm were obtained, including nanoparticles and nanofibers. Their crystalline structure was characterized by X-ray diffraction (XRD), their morphology and nanostructure were examined by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and their chemical composition was studied by energy-dispersive X-ray spectroscopy (EDS), Fourier-transform infrared spectroscopy (FTIR), electron energy-loss spectroscopy (EELS), and X-ray photoelectron spectroscopy (XPS). Taken altogether, the results indicate that all the material obtained is stoichiometric nanostructured BN with hexagonal and rhombohedral crystalline structure.

  10. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
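
    The stress-strength model R = P(strength > stress) is easy to evaluate by Monte Carlo, and doing so also illustrates the paper's point: swapping the assumed strength distribution while keeping the same mean and variance changes the tail, and hence the reliability estimate. The distributions and parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

stress = rng.normal(400, 30, n)      # MPa, assumed normal
strength = rng.normal(550, 40, n)    # MPa, assumed normal

# R = P(strength > stress); analytically Phi(3) ~ 0.99865 for these values
r_normal = np.mean(strength > stress)

# Same mean (550) and std (40), but lognormal strength: the lower tail
# thins out, so the estimated reliability shifts
mu = np.log(550**2 / np.sqrt(550**2 + 40**2))
sigma = np.sqrt(np.log(1 + (40 / 550)**2))
strength_ln = rng.lognormal(mu, sigma, n)
r_lognorm = np.mean(strength_ln > stress)
```

Because high-reliability estimates live entirely in the distribution tails, even this mild change of assumption moves the answer, which is exactly the sensitivity the abstract warns about.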

  11. Assessment of family functioning in Caucasian and Hispanic Americans: reliability, validity, and factor structure of the Family Assessment Device.

    PubMed

    Aarons, Gregory A; McDonald, Elizabeth J; Connelly, Cynthia D; Newton, Rae R

    2007-12-01

    The purpose of this study was to examine the factor structure, reliability, and validity of the Family Assessment Device (FAD) among a national sample of Caucasian and Hispanic American families receiving public sector mental health services. A confirmatory factor analysis conducted to test model fit yielded equivocal findings. With few exceptions, indices of model fit, reliability, and validity were poorer for Hispanic Americans compared with Caucasian Americans. Contrary to our expectation, an exploratory factor analysis did not result in a better fitting model of family functioning. Without stronger evidence supporting a reformulation of the FAD, we recommend against such a course of action. Findings highlight the need for additional research on the role of culture in measurement of family functioning.

  12. Transferring Aviation Practices into Clinical Medicine for the Promotion of High Reliability.

    PubMed

    Powell-Dunford, Nicole; McPherson, Mark K; Pina, Joseph S; Gaydos, Steven J

    2017-05-01

    Aviation is a classic example of a high reliability organization (HRO): an organization in which catastrophic events would be expected to occur in the absence of control measures. As health care systems transition toward high reliability, aviation practices are increasingly transferred for clinical implementation. A PubMed search using the terms aviation, crew resource management, and patient safety was undertaken. Manuscripts authored by physician pilots and accident investigation regulations were analyzed. Subject matter experts involved in adoption of aviation practices into the medical field were interviewed. A PubMed search yielded 621 results with 22 relevant for inclusion. Improved clinical outcomes were noted in five research trials in which aviation practices were adopted, particularly with regard to checklist usage and crew resource-management training. Effectiveness of interventions was influenced by intensity of application, leadership involvement, and provision of staff training. The usefulness of incorporating mishap investigation techniques has not been established. Whereas aviation accident investigation is highly standardized, the investigation of medical error is characterized by variation. The adoption of aviation practices into clinical medicine facilitates an evolution toward high reliability. Evidence for the efficacy of the checklist and crew resource-management training is robust. Transference of aviation accident investigation practices is preliminary. A standardized, independent investigation process could facilitate the development of a safety culture commensurate with that achieved in the aviation industry. Powell-Dunford N, McPherson MK, Pina JS, Gaydos SJ. Transferring aviation practices into clinical medicine for the promotion of high reliability. Aerosp Med Hum Perform. 2017; 88(5):487-491.

  13. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  14. Examining the roles that changing harvested areas, closing yield-gaps, and increasing yield ceilings have had on crop production

    NASA Astrophysics Data System (ADS)

    Johnston, M.; Ray, D. K.; Mueller, N. D.; Foley, J. A.

    2011-12-01

    With an increasing and increasingly affluent population, there has been tremendous effort to examine strategies for sustainably increasing agricultural production to meet this surging global demand. Before developing new solutions from scratch, though, we believe it is important to consult our recent agricultural history to see where and how agricultural production changes have already taken place. By utilizing the newly created temporal M3 cropland datasets, we can for the first time examine gridded agricultural yields and area, both spatially and temporally. This research explores the historical drivers of agricultural production changes, from 1965-2005. The results will be presented spatially at the global-level (5-min resolution), as well as at the individual country-level. The primary research components of this study are presented below, including the general methodology utilized in each phase and preliminary results for soybean where available. The complete assessment will cover maize, wheat, rice, soybean, and sugarcane, and will include country-specific analysis for over 200 countries, states, territories and protectorates. Phase 1: The first component of our research isolates changes in agricultural production due to variation in planting decisions (harvested area) from changes in production due to intensification efforts (yield). We examine area/yield changes at the pixel-level over 5-year time-steps to determine how much each component has contributed to overall changes in production. Our results include both spatial patterns of changes in production, as well as spatial maps illustrating to what degree the production change is attributed to area and/or yield. Together, these maps illustrate where, why, and by how much agricultural production has changed over time. Phase 2: In the second phase of our research we attempt to determine the impact that area and yield changes have had on agricultural production at the country-level. 
We calculate a production
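
    The Phase 1 attribution above rests on the identity production = harvested area x yield, so a production change over a time-step can be split into an area term, a yield (intensification) term, and an interaction term. A minimal sketch of that decomposition (function name and example values are hypothetical):

```python
def decompose_production_change(a0, y0, a1, y1):
    """Split dP = a1*y1 - a0*y0 into area, yield, and interaction terms.
    a0, a1: harvested area at the start/end of the time-step;
    y0, y1: yield at the start/end. The three terms sum exactly to dP."""
    area_term = (a1 - a0) * y0          # change due to planting decisions
    yield_term = a0 * (y1 - y0)         # change due to intensification
    interaction = (a1 - a0) * (y1 - y0) # joint area-and-yield change
    return area_term, yield_term, interaction
```

Applied per pixel per 5-year step, the relative sizes of the first two terms give the maps of where production growth came from expanding area versus rising yields.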

  15. Reliability Analysis of the MSC System

    NASA Astrophysics Data System (ADS)

    Kim, Young-Soo; Lee, Do-Kyoung; Lee, Chang-Ho; Woo, Sun-Hee

    2003-09-01

    MSC (Multi-Spectral Camera) is the payload of KOMPSAT-2, which is being developed for earth imaging in the optical and near-infrared region. The design of the MSC is complete and its reliability has been assessed from the part level to the MSC system level. The reliability was analyzed in the worst case, and the analysis showed that the value complies with the required value of 0.9. In this paper, a calculation method of reliability for the MSC system is described, and the assessment result is presented and discussed.

  16. Quantifying potential yield and water-limited yield of summer maize in the North China Plain

    NASA Astrophysics Data System (ADS)

    Jiang, Mingnuo; Liu, Chaoshun; Chen, Maosi

    2017-09-01

    The North China Plain is a major food-producing region in China, and climate change could pose a threat to food production there. Based on the China Meteorological Forcing Dataset, we simulated the growth of summer maize in the North China Plain from 1979 to 2015 with a regional implementation of the crop growth model WOFOST. Calibration and validation showed that the model can reproduce the potential yield and water-limited yield of summer maize in the North China Plain. After the regional implementation of the model, combined with the reanalysis data, the model could better reproduce the regional history of summer maize yields in the North China Plain. The yield gap in southeastern Beijing, southern Tianjin, southern Hebei province, and northwestern Shandong province is significant, which means that water availability is the main limiting factor for summer maize yield in these regions.

  17. Maximum sustainable yield and species extinction in a prey-predator system: some new results.

    PubMed

    Ghosh, Bapan; Kar, T K

    2013-06-01

    Though the maximum sustainable yield (MSY) approach has been legally adopted for the management of world fisheries, it does not provide any guarantee against species extinction in multispecies communities. In the present article, we describe the appropriateness of the MSY policy in a Holling-Tanner prey-predator system with different types of functional responses. It is observed that for both type I and type II functional responses, harvesting of either prey or predator species at the MSY level is a sustainable fishing policy. In the case of combined harvesting, both species coexist at the maximum sustainable total yield (MSTY) level if the biotic potential of the prey species is greater than a threshold value. Further, increasing the biotic potential beyond the threshold value affects the persistence of the system.
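    For a single logistic stock without predation, MSY has the closed form rK/4, harvested at half the carrying capacity. The sketch below illustrates only this textbook single-species baseline, not the Holling-Tanner prey-predator analysis of the paper; r and K are hypothetical.

```python
def surplus_production(n, r, K):
    """Logistic surplus production: growth available for harvest at stock size n."""
    return r * n * (1.0 - n / K)

def msy(r, K):
    """Maximum sustainable yield of a logistic stock: r*K/4, taken at n = K/2."""
    return r * K / 4.0

r, K = 0.8, 1000.0  # hypothetical biotic potential and carrying capacity
# brute-force check that the surplus-production curve peaks at r*K/4
peak = max(surplus_production(n, r, K) for n in range(0, int(K) + 1))
print(msy(r, K), peak)
```

    The paper's point is that this single-species logic can fail once predators are coupled in: harvesting both species at MSTY is safe only above a threshold biotic potential.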

  18. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    PubMed

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
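    The paper's contribution is a Bayesian max-margin model, but the count-regression baseline it improves on is ordinary log-link Poisson regression. As a non-authoritative sketch of that baseline, the code fits a tiny defect-count dataset by minimizing the Poisson negative log-likelihood; the data and the crude grid search are illustrative stand-ins for a real optimizer.

```python
import math

def poisson_nll(beta, xs, ys):
    """Negative log-likelihood of log-link Poisson regression:
    y ~ Poisson(exp(b0 + b1*x))."""
    b0, b1 = beta
    nll = 0.0
    for x, y in zip(xs, ys):
        eta = b0 + b1 * x
        nll += math.exp(eta) - y * eta + math.lgamma(y + 1)
    return nll

# Toy defect counts against a single complexity metric.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1, 2, 4, 8]

# Crude grid search over (b0, b1) stands in for a proper optimizer.
best = min(((b0 / 20, b1 / 20) for b0 in range(-40, 41) for b1 in range(-40, 41)),
           key=lambda b: poisson_nll(b, xs, ys))
print(best)
```

    Because the toy counts double with each unit of x, the fitted slope lands near ln 2, which is a quick sanity check on the likelihood code.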

  19. Declining water yield from forested mountain watersheds in response to climate change and forest mesophication

    Treesearch

    Peter V. Caldwell; Chelcy F. Miniat; Katherine J. Elliott; Wayne. T. Swank; Steven T. Brantley; Stephanie H. Laseter

    2016-01-01

    Climate change and forest disturbances are threatening the ability of forested mountain watersheds to provide the clean, reliable, and abundant fresh water necessary to support aquatic ecosystems and a growing human population. Here we used 76 years of water yield, climate, and field plot vegetation measurements in six unmanaged, reference watersheds in the southern...

  20. Multi-approach assessment of the spatial distribution of the specific yield: application to the Crau plain aquifer, France

    NASA Astrophysics Data System (ADS)

    Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric

    2018-03-01

    Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer-scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
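    The WTF relation the authors invert is R = Sy · Δh: recharge equals specific yield times the water-table rise. Given an independently known event recharge, Sy follows directly, as sketched below with hypothetical event numbers (not values from the Crau plain study).

```python
def specific_yield_wtf(recharge_m: float, head_rise_m: float) -> float:
    """Invert the water-table fluctuation relation R = Sy * dh for Sy."""
    if head_rise_m <= 0:
        raise ValueError("WTF method needs a positive water-table rise")
    return recharge_m / head_rise_m

# Hypothetical recharge event: 45 mm of infiltration produced a 0.50 m rise
# in a monitored borehole.
sy = specific_yield_wtf(recharge_m=0.045, head_rise_m=0.50)
print(f"Sy = {sy:.3f}")
```

    Repeating this per borehole over several major events is what produces the spatially distributed Sy field used in the mass balance.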

  1. Multi-approach assessment of the spatial distribution of the specific yield: application to the Crau plain aquifer, France

    NASA Astrophysics Data System (ADS)

    Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric

    2018-06-01

    Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer-scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.

  2. Yield and yield gaps in central U.S. corn production systems

    USDA-ARS?s Scientific Manuscript database

    The magnitude of yield gaps (YG) (potential yield – farmer yield) provides some indication of the prospects for increasing crop yield. Quantile regression analysis was applied to county maize (Zea mays L.) yields (1972 – 2011) from Kentucky, Iowa and Nebraska (irrigated) (total of 115 counties) to e...

  3. Sample-Averaged Biexciton Quantum Yield Measured by Solution-Phase Photon Correlation

    DOE PAGES

    Beyler, Andrew P.; Bischof, Thomas S.; Cui, Jian; ...

    2014-11-19

    The brightness of nanoscale optical materials such as semiconductor nanocrystals is currently limited in high excitation flux applications by inefficient multiexciton fluorescence. We have devised a solution-phase photon correlation measurement that can conveniently and reliably measure the average biexciton-to-exciton quantum yield ratio of an entire sample without user selection bias. This technique can be used to investigate the multiexciton recombination dynamics of a broad scope of synthetically underdeveloped materials, including those with low exciton quantum yields and poor fluorescence stability. In this study, we have applied this method to measure weak biexciton fluorescence in samples of visible-emitting InP/ZnS and InAs/ZnS core/shell nanocrystals, and to demonstrate that a rapid CdS shell growth procedure can markedly increase the biexciton fluorescence of CdSe nanocrystals.

  4. The extent of food waste generation across EU-27: different calculation methods and the reliability of their results.

    PubMed

    Bräutigam, Klaus-Rainer; Jörissen, Juliane; Priefer, Carmen

    2014-08-01

    The reduction of food waste is seen as an important societal issue with considerable ethical, ecological and economic implications. The European Commission aims at cutting down food waste to one-half by 2020. However, implementing effective prevention measures requires knowledge of the reasons and the scale of food waste generation along the food supply chain. The available data base for Europe is very heterogeneous and doubts about its reliability are legitimate. This mini-review gives an overview of available data on food waste generation in EU-27 and discusses their reliability against the results of our own model calculations. These calculations are based on a methodology developed on behalf of the Food and Agriculture Organization of the United Nations and provide data on food waste generation for each of the EU-27 member states, broken down to the individual stages of the food chain and differentiated by product groups. The analysis shows that the results differ significantly, depending on the data sources chosen and the assumptions made. Further research is much needed in order to improve the data stock, which builds the basis for the monitoring and management of food waste. © The Author(s) 2014.

  5. Yield and Economic Performance of Organic and Conventional Cotton-Based Farming Systems – Results from a Field Trial in India

    PubMed Central

    Forster, Dionys; Andres, Christian; Verma, Rajeev; Zundel, Christine; Messmer, Monika M.; Mäder, Paul

    2013-01-01

    The debate on the relative benefits of conventional and organic farming systems has in recent time gained significant interest. So far, global agricultural development has focused on increased productivity rather than on a holistic natural resource management for food security. Thus, developing more sustainable farming practices on a large scale is of utmost importance. However, information concerning the performance of farming systems under organic and conventional management in tropical and subtropical regions is scarce. This study presents agronomic and economic data from the conversion phase (2007–2010) of a farming systems comparison trial on a Vertisol soil in Madhya Pradesh, central India. A cotton-soybean-wheat crop rotation under biodynamic, organic and conventional (with and without Bt cotton) management was investigated. We observed a significant yield gap between organic and conventional farming systems in the 1st crop cycle (cycle 1: 2007–2008) for cotton (−29%) and wheat (−27%), whereas in the 2nd crop cycle (cycle 2: 2009–2010) cotton and wheat yields were similar in all farming systems due to lower yields in the conventional systems. In contrast, organic soybean (a nitrogen fixing leguminous plant) yields were marginally lower than conventional yields (−1% in cycle 1, −11% in cycle 2). Averaged across all crops, conventional farming systems achieved significantly higher gross margins in cycle 1 (+29%), whereas in cycle 2 gross margins in organic farming systems were significantly higher (+25%) due to lower variable production costs but similar yields. Soybean gross margin was significantly higher in the organic system (+11%) across the four harvest years compared to the conventional systems. Our results suggest that organic soybean production is a viable option for smallholder farmers under the prevailing semi-arid conditions in India. Future research needs to elucidate the long-term productivity and profitability, particularly of

  6. Reliability of surface electromyography in the assessment of paraspinal muscle fatigue: an updated systematic review.

    PubMed

    Mohseni Bandpei, Mohammad A; Rahmani, Nahid; Majdoleslam, Basir; Abdollahi, Iraj; Ali, Shabnam Shah; Ahmad, Ashfaq

    2014-09-01

    The purpose of this study was to review the literature to determine whether surface electromyography (EMG) is a reliable tool to assess paraspinal muscle fatigue in healthy subjects and in patients with low back pain (LBP). A literature search for the period of 2000 to 2012 was performed, using PubMed, ProQuest, Science Direct, EMBASE, OVID, CINAHL, and MEDLINE databases. Electromyography, reliability, median frequency, paraspinal muscle, endurance, low back pain, and muscle fatigue were used as keywords. The literature search yielded 178 studies using the above keywords. Twelve articles were selected according to the inclusion criteria of the study. In 7 of the 12 studies, the surface EMG was only applied in healthy subjects, and in 5 studies, the reliability of surface EMG was investigated in patients with LBP or in comparison with a control group. In all of these studies, median frequency was shown to be a reliable EMG parameter to assess paraspinal muscle fatigue. There was a wide variation among studies in terms of methodology, surface EMG parameters, electrode location, procedure, and homogeneity of the study population. The results suggest that there is a convincing body of evidence to support the merit of surface EMG in the assessment of paraspinal muscle fatigue in healthy subjects and in patients with LBP. Copyright © 2014 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.

  7. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots

    PubMed Central

    Gilbert, Hunter B.; Webster, Robert J.

    2016-01-01

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473

  8. Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots.

    PubMed

    Gilbert, Hunter B; Webster, Robert J

    Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C.

  9. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
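    Aggregating low-level reliability models according to an architecture description can be sketched as a recursion over a nested series/parallel tree. This is a minimal illustration of the aggregation idea only, not the patented generator's actual representation; the example system and its component reliabilities are hypothetical.

```python
def aggregate(model):
    """Aggregate low-level reliabilities per a nested architecture description.

    A node is either a float (a low-level reliability model),
    ("series", [children]) or ("parallel", [children])."""
    if isinstance(model, float):
        return model
    kind, children = model
    parts = [aggregate(c) for c in children]
    if kind == "series":           # all children must work
        r = 1.0
        for p in parts:
            r *= p
        return r
    if kind == "parallel":         # at least one child must work
        q = 1.0
        for p in parts:
            q *= (1.0 - p)
        return 1.0 - q
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical system: a sensor in series with two redundant controllers.
system = ("series", [0.99, ("parallel", [0.95, 0.95])])
print(round(aggregate(system), 6))
```

    Because the architecture description is data, the same low-level models can be re-aggregated automatically whenever the system topology changes.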

  10. A reliability analysis tool for SpaceWire network

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and a basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft; it is therefore very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. Following the functional division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task populates a system reliability matrix, and the reliability of the network system is deduced by integrating all of the reliability indexes in the matrix. Using this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and for multi-path task reliability are also implemented. With this tool, we analyzed several cases on typical architectures. The analytic results indicate that a redundant architecture has better reliability performance than a basic one; in practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability of the system or task. This reliability analysis tool will thus have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
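    The multi-path task reliability mentioned above is commonly computed by treating each route as a series chain of links and routers, and the redundant routes as parallel alternatives. The sketch below assumes independent path failures (it ignores shared links) and uses hypothetical component reliabilities, not values from the paper.

```python
def path_reliability(component_rs):
    """A path works only if every link/router along it works (series)."""
    r = 1.0
    for c in component_rs:
        r *= c
    return r

def task_reliability(paths):
    """A task with redundant routes fails only if every route fails.
    Assumes independent path failures, which ignores shared links."""
    q = 1.0
    for p in paths:
        q *= (1.0 - path_reliability(p))
    return 1.0 - q

# Hypothetical dual-redundant route: each path crosses two routers and a link.
primary = [0.999, 0.998, 0.999]
backup  = [0.999, 0.998, 0.999]
print(round(task_reliability([primary, backup]), 8))
```

    Filling one such task reliability per matrix cell reproduces, in miniature, the tool's task-to-system roll-up and shows why the dual-redundancy scheme outperforms a single route.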

  11. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    ERIC Educational Resources Information Center

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
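    A reliable change index of the kind computed from stability coefficients scales an observed retest change by the standard error of the difference. The sketch below uses the common Jacobson-Truax form with hypothetical Wechsler-style numbers; the study's actual RCIs were derived from its own reference-sample coefficients.

```python
import math

def reliable_change_index(x1, x2, sd_baseline, stability_r):
    """Jacobson-Truax style RCI: retest change scaled by the SE of the
    difference, SEdiff = SD * sqrt(2 * (1 - r))."""
    se_measure = sd_baseline * math.sqrt(1.0 - stability_r)
    se_diff = se_measure * math.sqrt(2.0)
    return (x2 - x1) / se_diff

# Hypothetical IQ retest: SD = 15, stability coefficient r = .90.
rci = reliable_change_index(x1=95, x2=105, sd_baseline=15, stability_r=0.90)
# |RCI| > 1.96 would mark change beyond measurement error at p < .05
print(round(rci, 2), abs(rci) > 1.96)
```

    Here a 10-point gain does not exceed the 1.96 threshold, illustrating why apparently large score changes can still fall within test-retest noise.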

  12. Reliability of Autism-Tics, AD/HD, and other Comorbidities (A-TAC) inventory in a test-retest design.

    PubMed

    Larson, Tomas; Kerekes, Nóra; Selinus, Eva Norén; Lichtenstein, Paul; Gumpert, Clara Hellner; Anckarsäter, Henrik; Nilsson, Thomas; Lundström, Sebastian

    2014-02-01

    The Autism-Tics, AD/HD, and other Comorbidities (A-TAC) inventory is used in epidemiological research to assess neurodevelopmental problems and coexisting conditions. Although the A-TAC has been applied in various populations, data on retest reliability are limited. The objective of the present study was to present additional reliability data. The A-TAC was administered by lay assessors and was completed on two occasions by parents of 400 individual twins, with an average interval of 70 days between test sessions. Intra- and inter-rater reliability were analysed with intraclass correlations and Cohen's kappa. A-TAC showed excellent test-retest intraclass correlations for both autism spectrum disorder and attention deficit hyperactivity disorder (each at .84). Most modules in the A-TAC had intra- and inter-rater reliability intraclass correlation coefficients of > or = .60. Cohen's kappa indi- cated acceptable reliability. The current study provides statistical evidence that the A-TAC yields good test-retest reliability in a population-based cohort of children.
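    Cohen's kappa, one of the two agreement statistics used above, corrects raw agreement for the agreement expected by chance. The sketch below computes it from scratch on toy two-rater binary ratings; the data are illustrative, not from the A-TAC study.

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters: (po - pe) / (1 - pe), where po is
    observed agreement and pe is chance agreement from marginal rates.

    pairs: list of (rating_a, rating_b) tuples."""
    n = len(pairs)
    cats = sorted({c for p in pairs for c in p})
    po = sum(1 for a, b in pairs if a == b) / n
    pe = 0.0
    for c in cats:
        pa = sum(1 for a, _ in pairs if a == c) / n
        pb = sum(1 for _, b in pairs if b == c) / n
        pe += pa * pb
    return (po - pe) / (1.0 - pe)

# Toy example: two assessors scoring 10 screening items as 0/1.
ratings = [(1, 1), (1, 1), (0, 0), (0, 0), (0, 0),
           (1, 0), (0, 1), (1, 1), (0, 0), (1, 1)]
print(round(cohens_kappa(ratings), 3))
```

    With 80% raw agreement and 50% chance agreement, kappa comes out at 0.6, which is why kappa values read lower than raw agreement percentages.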

  13. Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.

    PubMed

    Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro

    2017-07-01

    The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes with studies showing different performance has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, used in the majority of studies generating the current PG cut-points, with a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of analytical performance of test on clinical classifications of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices. 

  14. Changing forest water yields in response to climate warming: results from long-term experimental watershed sites across North America

    Treesearch

    Irena F. Creed; Adam T. Spargo; Julia A. Jones; Jim M. Buttle; Mary B. Adams; Fred D. Beall; Eric G. Booth; John L. Campbell; Dave Clow; Kelly Elder; Mark B. Green; Nancy B. Grimm; Chelcy Miniat; Patricia Ramlal; Amartya Saha; Stephen Sebestyen; Dave Spittlehouse; Shannon Sterling; Mark W. Williams; Rita Winkler; Huaxia Yao

    2014-01-01

    Climate warming is projected to affect forest water yields but the effects are expected to vary.We investigated how forest type and age affect water yield resilience to climate warming. To answer this question, we examined the variability in historical water yields at long-term experimental catchments across Canada and the United States over 5-year cool and warm...

  15. The shared and unique values of optical, fluorescence, thermal and microwave satellite data for estimating large-scale crop yields

    USDA-ARS?s Scientific Manuscript database

    Large-scale crop monitoring and yield estimation are important for both scientific research and practical applications. Satellite remote sensing provides an effective means for regional and global cropland monitoring, particularly in data-sparse regions that lack reliable ground observations and rep...

  16. The reliability and validity of the Caregiver Work Limitations Questionnaire.

    PubMed

    Lerner, Debra; Parsons, Susan K; Chang, Hong; Visco, Zachary L; Pawlecki, J Brent

    2015-01-01

    To test a new Caregiver Work Limitations Questionnaire (WLQ). On the basis of the original WLQ, this new survey instrument assesses the effect of caregiving for ill and/or disabled persons on the caregiver's work performance. A questionnaire was administered anonymously to employees of a large business services company. Scale reliability and validity were tested with psychometric methods. Of 4128 survey participants, 18.3% currently were caregivers, 10.2% were past caregivers, and 71.5% were not caregivers. Current caregivers were limited in their ability to perform basic job tasks between a mean of 10.3% and 16.8% of the time. Confirmatory factor analysis yielded a scale structure similar to the WLQ's. Scale reliabilities (Cronbach's α) ranged from 0.91 to 0.95. The Caregiver WLQ is a new tool for understanding the workplace effect of caregiving.
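    The reported scale reliabilities are Cronbach's α, which compares the sum of item variances to the variance of the total score. The sketch below computes it from scratch on toy item data (illustrative only, not the Caregiver WLQ items).

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances)/var(total)).

    items: one inner list of respondent scores per item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1.0 - item_var_sum / var(totals))

# Toy data: 3 scale items answered by 4 respondents on a 1-5 scale.
items = [[2, 4, 3, 5], [3, 5, 4, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(items), 3))
```

    When items move together across respondents, the total-score variance dwarfs the summed item variances and α approaches 1, as in the 0.91 to 0.95 range reported.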

  17. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    PubMed

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund

    2015-09-01

    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute SD scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training was assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a masters or doctoral degree. (c) 2015 APA, all rights reserved.

  18. Climatic and technological ceilings for Chinese rice stagnation based on yield gaps and yield trend pattern analysis.

    PubMed

    Zhang, Tianyi; Yang, Xiaoguang; Wang, Hesong; Li, Yong; Ye, Qing

    2014-04-01

    Climatic or technological ceilings can cause yield stagnation. Thus, identifying the principal reasons for yield stagnation within the context of local climate and socio-economic conditions is essential for informing regional agricultural policies. In this study, we identified the climatic and technological ceilings for seven rice-production regions in China based on yield gaps and on a yield trend pattern analysis for the period 1980-2010. The results indicate that 54.9% of the counties sampled have experienced yield stagnation since 1980. The potential yield ceilings in northern and eastern China decreased to a greater extent than in other regions due to the accompanying climate effects of increases in temperature and decreases in radiation. This may be associated with the yield stagnation or halting observed in approximately 49.8-57.0% of the sampled counties in these areas. South-western China exhibited a promising scope for yield improvement, showing the greatest yield gap (30.6%), whereas yields were stagnant in 58.4% of the sampled counties. This finding suggests that efforts to overcome the technological ceiling must be given priority so that the available exploitable yield gap can be achieved. North-eastern China, however, represents a noteworthy exception. In the north-central area of this region, climate change has increased the yield potential ceiling, and this increase has been accompanied by the most rapid increase in actual yield: 1.02 ton ha(-1) per decade. Therefore, north-eastern China shows a great potential for rice production, which is favoured by the current climate conditions and available technology level. Additional environmentally friendly economic incentives might be considered in this region. © 2013 John Wiley & Sons Ltd.

  19. Delayed Carcass Deboning Results in Significantly Reduced Cook Yields of Boneless Skinless Chicken Thighs

    USDA-ARS?s Scientific Manuscript database

    Boneless skinless chicken thighs are a new deboned poultry product in the retail market. Three trials were conducted to investigate the effect of postmortem carcass deboning time on the cook yields of boneless skinless chicken thighs as well as boneless skinless chicken breasts. Broiler carcasses ...

  20. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

    The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system so as to attain the specified system reliability. For large systems, allocation is often performed at several stages of system design, beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. However, reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each.
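    A weighting-factor allocation in the ARINC style splits the allowed system failure rate among series components in proportion to their predicted failure rates. The sketch below is a minimal illustration under an exponential failure model; the component names, rates, and targets are hypothetical, not taken from the report.

```python
import math

def arinc_allocate(predicted_rates, system_reliability_target, hours):
    """ARINC-style weighting-factor allocation for a series system:
    convert the reliability target to an allowed system failure rate,
    then split it in proportion to each component's predicted rate."""
    lam_sys = -math.log(system_reliability_target) / hours
    total = sum(predicted_rates.values())
    return {name: lam_sys * rate / total
            for name, rate in predicted_rates.items()}

# Hypothetical predicted rates (failures/hour) from a prior similar design.
predicted = {"power": 4e-6, "cpu": 3e-6, "io": 1e-6}
allocated = arinc_allocate(predicted, system_reliability_target=0.95,
                           hours=10_000)

# Check: the allocated rates recompose to exactly the system target.
check = math.exp(-sum(allocated.values()) * 10_000)
print({k: f"{v:.2e}" for k, v in allocated.items()}, round(check, 4))
```

    The implied assumption, which the report warns about, is that components are in series, independent, and exponentially distributed; the allocation is only as good as the predicted rates used as weights.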

  1. [Reliability and validity studies of Turkish translation of Eysenck Personality Questionnaire Revised-Abbreviated].

    PubMed

    Karanci, A Nuray; Dirik, Gülay; Yorulmaz, Orçun

    2007-01-01

    The aim of the present study was to examine the reliability and validity of the Turkish translation of the Eysenck Personality Questionnaire Revised-Abbreviated Form (EPQR-A) (Francis et al., 1992), which consists of 24 items that assess neuroticism, extraversion, psychoticism, and lying. The questionnaire was first translated into Turkish and then back-translated. Subsequently, it was administered to 756 students from 4 different universities. The Fear Survey Inventory-III (FSI-III), the Rosenberg Self-Esteem Scale (RSES), and the Egna Minnen Betraffande Uppfostran (EMBU-C) were also administered in order to assess the questionnaire's validity. The internal consistency, test-retest reliability, and validity were subsequently evaluated. Factor analysis, as with the original scale, yielded 4 factors: the neuroticism, extraversion, psychoticism, and lie scales. Kuder-Richardson alpha coefficients for the extraversion, neuroticism, psychoticism, and lie scales were 0.78, 0.65, 0.42, and 0.64, respectively, and the test-retest reliabilities of the scales were 0.84, 0.82, 0.69, and 0.69, respectively. The relationships between the EPQR-A-48, FSI-III, EMBU-C, and RSES were examined in order to evaluate the construct validity of the scale, and our findings support the construct validity of the questionnaire. To investigate gender differences in subscale scores, a MANOVA was conducted; the results indicated a gender difference only in lie scale scores. Our findings largely supported the reliability and validity of the questionnaire in a Turkish student sample. The psychometric characteristics of the Turkish version of the EPQR-A are discussed in light of the relevant literature.

  2. Fission yield measurements at IGISOL

    NASA Astrophysics Data System (ADS)

    Lantz, M.; Al-Adili, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Mattera, A.; Moore, I.; Penttilä, H.; Pomp, S.; Prokofiev, A. V.; Rakopoulos, V.; Rinta-Antila, S.; Simutkin, V.; Solders, A.

    2016-06-01

    Fission product yields are an important characteristic of the fission process. In fundamental physics, knowledge of the yield distributions is needed to better understand the fission process. For nuclear energy applications, good knowledge of neutron-induced fission-product yields is important for the safe and efficient operation of nuclear power plants. With the Ion Guide Isotope Separator On-Line (IGISOL) technique, products of nuclear reactions are stopped in a buffer gas and then extracted and separated by mass. Thanks to the high resolving power of the JYFLTRAP Penning trap at the University of Jyväskylä, fission products can be isobarically separated, making it possible to measure relative independent fission yields. In some cases it is even possible to resolve isomeric states from the ground state, permitting measurements of isomeric yield ratios. So far the reactions U(p,f) and Th(p,f) have been studied using the IGISOL-JYFLTRAP facility. Recently, a neutron converter target has been developed utilizing the Be(p,xn) reaction. Here we present the IGISOL technique for fission yield measurements and some of the results from the measurements on proton-induced fission. We also present the development of the neutron converter target, the characterization of the neutron field, and the first tests with neutron-induced fission.

  3. Monitoring interannual variation in global crop yield using long-term AVHRR and MODIS observations

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyang; Zhang, Qingyuan

    2016-04-01

    Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data have been extensively applied to crop yield prediction because of their daily temporal resolution and global coverage. This study investigated global crop yield using the daily two-band Enhanced Vegetation Index (EVI2) derived from AVHRR (1981-1999) and MODIS (2000-2013) observations at a spatial resolution of 0.05° (∼5 km). Specifically, the EVI2 temporal trajectory of crop growth was simulated using a hybrid piecewise logistic model (HPLM) for individual pixels, which was used to detect crop phenological metrics. The derived crop phenology was then applied to calculate crop greenness, defined as EVI2 amplitude and EVI2 integration during annual crop growing seasons, which was further aggregated over the croplands in each country. The interannual variations in EVI2 amplitude and EVI2 integration were then correlated with the variation in cereal yield from 1982 to 2012 for individual countries using a stepwise regression model. The results show that the confidence level of the established regression models was higher than 90% (P value < 0.1) in most countries in the northern hemisphere, although it was relatively poor in the southern hemisphere (mainly in Africa). The yield prediction error was smaller in the Americas, Europe, and East Asia than in Africa. In the 10 countries with the largest cereal production worldwide, the prediction error was less than 9% over the past three decades. This suggests that crop phenology-controlled greenness from coarse resolution satellite data can predict national crop yield across the world, which could provide timely and reliable crop information for global agricultural trade and policymakers.
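
    The correlation step described above (greenness metrics against a country's yield series) can be sketched as follows; the data are synthetic and a single joint least-squares fit stands in for the paper's stepwise regression:

```python
import numpy as np

# Synthetic stand-in for one country's 31-year series (1982-2012):
# yearly EVI2 amplitude, EVI2 integration, and cereal yield anomalies.
rng = np.random.default_rng(0)
amplitude = rng.normal(0.0, 1.0, 31)
integration = rng.normal(0.0, 1.0, 31)
yield_anom = 0.6 * amplitude + 0.3 * integration + rng.normal(0.0, 0.2, 31)

# Ordinary least squares with an intercept and both greenness predictors
X = np.column_stack([np.ones_like(amplitude), amplitude, integration])
coef, *_ = np.linalg.lstsq(X, yield_anom, rcond=None)
pred = X @ coef

# Explained variance (R^2) of the fitted model
ss_res = np.sum((yield_anom - pred) ** 2)
ss_tot = np.sum((yield_anom - yield_anom.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(float(r2), 2))
```

    With real data the predictors would be the per-country aggregates of the phenology-derived EVI2 metrics, and stepwise selection would decide which of them enter the model.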

  4. Precision and reliability of periodically and quasiperiodically driven integrate-and-fire neurons.

    PubMed

    Tiesinga, P H E

    2002-04-01

    Neurons in the brain communicate via trains of all-or-none electric events known as spikes. How the brain encodes information using spikes-the neural code-remains elusive. Here the robustness against noise of stimulus-induced neural spike trains is studied in terms of attractors and bifurcations. The dynamics of model neurons converges after a transient onto an attractor yielding a reproducible sequence of spike times. At a bifurcation point the spike times on the attractor change discontinuously when a parameter is varied. Reliability, the stability of the attractor against noise, is reduced when the neuron operates close to a bifurcation point. We determined using analytical spike-time maps the attractor and bifurcation structure of an integrate-and-fire model neuron driven by a periodic or a quasiperiodic piecewise constant current and investigated the stability of attractors against noise. The integrate-and-fire model neuron became mode locked to the periodic current with a rational winding number p/q and produced p spikes per q cycles. There were q attractors. p:q mode-locking regions formed Arnold tongues. In the model, reliability was the highest during 1:1 mode locking when there was only one attractor, as was also observed in recent experiments. The quasiperiodically driven neuron mode locked to either one of the two drive periods, or to a linear combination of both of them. Mode-locking regions were organized in Arnold tongues and reliability was again highest when there was only one attractor. These results show that neuronal reliability in response to the rhythmic drive generated by synchronized networks of neurons is profoundly influenced by the location of the Arnold tongues in parameter space.

  5. Assessing Variations in Areal Organization for the Intrinsic Brain: From Fingerprints to Reliability

    PubMed Central

    Xu, Ting; Opitz, Alexander; Craddock, R. Cameron; Wright, Margaret J.; Zuo, Xi-Nian; Milham, Michael P.

    2016-01-01

    Resting state fMRI (R-fMRI) is a powerful in vivo tool for examining the functional architecture of the human brain. Recent studies have demonstrated the ability to characterize transitions between functionally distinct cortical areas through the mapping of gradients in intrinsic functional connectivity (iFC) profiles. To date, this novel approach has primarily been applied to iFC profiles averaged across groups of individuals, or in one case, a single individual scanned multiple times. Here, we used a publicly available R-fMRI dataset, in which 30 healthy participants were scanned 10 times (10 min per session), to investigate differences in full-brain transition profiles (i.e., gradient maps, edge maps) across individuals, and their reliability. 10-min R-fMRI scans were sufficient to achieve high accuracies in efforts to “fingerprint” individuals based upon full-brain transition profiles. Regarding test–retest reliability, the image-wise intraclass correlation coefficient (ICC) was moderate, and vertex-level ICC varied depending on region; longer scan durations universally yielded higher reliability scores. Initial application of gradient-based methodologies to a recently published dataset obtained from twins suggested that inter-individual variation in areal profiles might have genetic and familial origins. Overall, these results illustrate the utility of gradient-based iFC approaches for studying inter-individual variation in brain function. PMID:27600846
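
    For readers unfamiliar with the intraclass correlation coefficient used in records like this one, here is a minimal sketch of one standard variant, ICC(3,1) (two-way mixed effects, consistency); the toy scores are invented and this is not the study's exact pipeline:

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    `data` is an (n subjects) x (k sessions) array; higher values mean
    scores rank subjects more consistently across repeated sessions.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)  # between subjects
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)  # between sessions
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy example: 4 subjects measured in 3 sessions; scores track each
# subject closely across sessions, so reliability comes out high.
scores = [[9.0, 9.2, 8.9],
          [6.1, 6.0, 6.3],
          [7.5, 7.4, 7.6],
          [4.9, 5.1, 5.0]]
print(round(icc_3_1(scores), 2))   # → 0.99
```

    Other ICC variants (e.g., absolute agreement, averaged measurements) change the mean squares entering the ratio but follow the same ANOVA decomposition.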

  6. Test-retest reliability of cognitive EEG

    NASA Technical Reports Server (NTRS)

    McEvoy, L. K.; Smith, M. E.; Gevins, A.

    2000-01-01

    OBJECTIVE: Task-related EEG is sensitive to changes in cognitive state produced by increased task difficulty and by transient impairment. If task-related EEG has high test-retest reliability, it could be used as part of a clinical test to assess changes in cognitive function. The aim of this study was to determine the reliability of the EEG recorded during the performance of a working memory (WM) task and a psychomotor vigilance task (PVT). METHODS: EEG was recorded while subjects rested quietly and while they performed the tasks. Within session (test-retest interval of approximately 1 h) and between session (test-retest interval of approximately 7 days) reliability was calculated for four EEG components: frontal midline theta at Fz, posterior theta at Pz, and slow and fast alpha at Pz. RESULTS: Task-related EEG was highly reliable within and between sessions (r ≥ 0.9 for all components in the WM task, and r ≥ 0.8 for all components in the PVT). Resting EEG also showed high reliability, although the magnitude of the correlation was somewhat smaller than that of the task-related EEG (r ≥ 0.7 for all 4 components). CONCLUSIONS: These results suggest that under appropriate conditions, task-related EEG has sufficient retest reliability for use in assessing clinical changes in cognitive status.

  7. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.

  8. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict the future performance of software based on data generated by the debugging process. Unfortunately, the models appear unable to account for the random nature of the data: if the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in laboratory settings and that it is less than scientific to talk about validating a software reliability model without considering replication. Data replication may also prove cost-effective in the real world, so the research centered on verifying the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model: reasonable parameter values were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.

  9. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD

    PubMed Central

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A

    2018-01-01

    Purpose The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. Conclusion The TIRE measures of MIP, SMIP and ID have excellent test–retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP. PMID:29805255
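
    The SMIP described above is the integral of inspiratory pressure over the inspiratory duration; a minimal sketch of that pressure-time integral on an invented trace (trapezoidal integration, not the PrO2 device's algorithm):

```python
import numpy as np

# Hypothetical inspiratory effort: time (s) and inspiratory pressure (cmH2O)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
p = np.array([0.0, 40.0, 55.0, 45.0, 20.0])

mip = float(p.max())                                     # maximal inspiratory pressure
smip = float(np.sum((p[1:] + p[:-1]) / 2 * np.diff(t)))  # trapezoidal pressure-time integral
duration = float(t[-1] - t[0])                           # inspiratory duration (ID)

print(mip, smip, duration)   # → 55.0 75.0 2.0
```

    This makes the record's point concrete: MIP captures only the peak, while SMIP also reflects how long pressure is sustained, which is why the two can separate patient groups differently.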

  10. [The applicability of results].

    PubMed

    Marín-León, I

    2015-11-01

    The ultimate aim of the critical reading of medical literature is to use the scientific advances in clinical practice or for innovation. This requires an evaluation of the applicability of the results of the studies that have been published, which begins with a clear understanding of these results. When the studies do not provide sufficient guarantees of rigor in design and analysis, the conditions necessary for the applicability of the results are not met; however, the fact that the results are reliable is not enough to make it worth trying to use their conclusions. This article explains how carrying out studies in experimental or artificial conditions often moves them away from the real conditions in which they claim to apply their conclusions. To evaluate this applicability, the article proposes evaluating a set of items that will enable the reader to determine the likelihood that the benefits and risks reported in the studies will yield the least uncertainty in the clinical arena where they aim to be applied. Copyright © 2015 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  11. Electrical service reliability: the customer perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samsa, M.E.; Hub, K.A.; Krohm, G.C.

    1978-09-01

    Electric-utility-system reliability criteria have traditionally been established as a matter of utility policy or through long-term engineering practice, generally with no supportive customer cost/benefit analysis as justification. This report presents results of an initial study of the customer perspective toward electric-utility-system reliability, based on critical review of over 20 previous and ongoing efforts to quantify the customer's value of reliable electric service. A possible structure of customer classifications is suggested as a reasonable level of disaggregation for further investigation of customer value, and these groups are characterized in terms of their electricity use patterns. The values that customers assign to reliability are discussed in terms of internal and external cost components. A list of options for effecting changes in customer service reliability is set forth, and some of the many policy issues that could alter customer-service reliability are identified.

  12. Creating Highly Reliable Accountable Care Organizations.

    PubMed

    Vogus, Timothy J; Singer, Sara J

    2016-12-01

    Accountable Care Organizations' (ACOs) pursuit of the triple aim of higher quality, lower cost, and improved population health has met with mixed results. To improve the design and implementation of ACOs we look to organizations that manage similarly complex, dynamic, and tightly coupled conditions while sustaining exceptional performance known as high-reliability organizations. We describe the key processes through which organizations achieve reliability, the leadership and organizational practices that enable it, and the role that professionals can play when charged with enacting it. Specifically, we present concrete practices and processes from health care organizations pursuing high-reliability and from early ACOs to illustrate how the triple aim may be met by cultivating mindful organizing, practicing reliability-enhancing leadership, and identifying and supporting reliability professionals. We conclude by proposing a set of research questions to advance the study of ACOs and high-reliability research. © The Author(s) 2016.

  13. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  14. Reliability and Validity of an Internet-based Questionnaire Measuring Lifetime Physical Activity

    PubMed Central

    De Vera, Mary A.; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-01-01

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005–2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity. PMID:20876666
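
    The convergent-validity comparisons above are Spearman rank correlations; a minimal sketch (rank-then-Pearson, which assumes no tied scores; the respondent scores are invented):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    The simple ranking below (argsort of argsort) assumes no ties;
    real implementations average the ranks of tied scores.
    """
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Invented total-activity scores for six respondents on two instruments
lpaq = [12, 30, 25, 8, 40, 18]
ltpaq = [14, 28, 22, 11, 35, 26]
print(round(spearman(lpaq, ltpaq), 2))   # → 0.94
```

    Rank correlation is the natural choice here because questionnaire totals are ordinal-ish and skewed, so only the ordering of respondents, not the raw score spacing, is compared across instruments.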

  15. Reliability and validity of an internet-based questionnaire measuring lifetime physical activity.

    PubMed

    De Vera, Mary A; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-11-15

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005-2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity.

  16. Effectiveness of rabbit manure biofertilizer in barley crop yield.

    PubMed

    Islas-Valdez, Samira; Lucho-Constantino, Carlos A; Beltrán-Hernández, Rosa I; Gómez-Mercado, René; Vázquez-Rodríguez, Gabriela A; Herrera, Juan M; Jiménez-González, Angélica

    2017-11-01

    The quality of biofertilizers is usually assessed only in terms of the amount of nutrients that they supply to the crops and their lack of viable pathogens and phytotoxicity. The goal of this study was to determine the effectiveness of a liquid biofertilizer obtained from rabbit manure in terms of presence of pathogens, phytotoxicity, and its effect on the grain yield and other agronomic traits of barley (Hordeum vulgare L.). Environmental effects of the biofertilizer were also evaluated by following its influence on selected soil parameters. We applied the biofertilizer at five combinations of doses and timings each and in two application modes (foliar or direct soil application) within a randomized complete block design with three replicates and using a chemical fertilizer as control. The agronomic traits evaluated were plant height, root length, dry weight, and number of leaves and stems at three growth stages: tillering, jointing, and flowering. The effectiveness of the biofertilizer was significantly modified by the mode of application, the growth stage of the crop, and the dose of biofertilizer applied. The results showed that the foliar application of the biofertilizer at the tillering stage produced the highest increase in grain yield (59.7 %, p < 0.10). The use of the biofertilizer caused significant changes in soil, particularly concerning pH, EC, Ca, Zn, Mg, and Mn. It is our view that the production and use of biofertilizers are a reliable alternative to deal with a solid waste problem while food security is increased.

  17. The Reliability of Psychiatric Diagnosis Revisited

    PubMed Central

    Rankin, Eric; France, Cheryl; El-Missiry, Ahmed; John, Collin

    2006-01-01

    Background: The authors reviewed the topic of reliability of psychiatric diagnosis from the turn of the 20th century to the present. The objectives of this paper are to explore the reasons for the unreliability of psychiatric diagnosis and to propose ways to improve it. Method: The authors reviewed the literature on the concept of reliability of psychiatric diagnosis, with emphasis on the impact of interviewing skills, the use of diagnostic criteria, and structured interviews. Results: Causes of diagnostic unreliability are attributed to the patient, the clinician, and psychiatric nomenclature. The reliability of psychiatric diagnosis can be enhanced by using diagnostic criteria, defining psychiatric symptoms, and structuring the interviews. Conclusions: The authors propose the acronym ‘DR.SED,' which stands for diagnostic criteria, reference definitions, structuring the interview, clinical experience, and data. The authors recommend that clinicians use the DR.SED paradigm to improve the reliability of psychiatric diagnoses. PMID:21103149

  18. Test Reliability at the Individual Level

    PubMed Central

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107

  19. Effect of warming temperatures on US wheat yields.

    PubMed

    Tack, Jesse; Barkley, Andrew; Nalley, Lawton Lanier

    2015-06-02

    Climate change is expected to increase future temperatures, potentially resulting in reduced crop production in many key production regions. Research quantifying the complex relationship between weather variables and wheat yields is rapidly growing, and recent advances have used a variety of model specifications that differ in how temperature data are included in the statistical yield equation. A unique data set that combines Kansas wheat variety field trial outcomes for 1985-2013 with location-specific weather data is used to analyze the effect of weather on wheat yield using regression analysis. Our results indicate that the effect of temperature exposure varies across the September-May growing season. The largest drivers of yield loss are freezing temperatures in the Fall and extreme heat events in the Spring. We also find that the overall effect of warming on yields is negative, even after accounting for the benefits of reduced exposure to freezing temperatures. Our analysis indicates that there exists a tradeoff between average (mean) yield and ability to resist extreme heat across varieties. More-recently released varieties are less able to resist heat than older lines. Our results also indicate that warming effects would be partially offset by increased rainfall in the Spring. Finally, we find that the method used to construct measures of temperature exposure matters for both the predictive performance of the regression model and the forecasted warming impacts on yields.

  20. The Reliability of Results from National Tests, Public Examinations, and Vocational Qualifications in England

    ERIC Educational Resources Information Center

    He, Qingping; Opposs, Dennis

    2012-01-01

    National tests, public examinations, and vocational qualifications in England are used for a variety of purposes, including the certification of individual learners in different subject areas and the accountability of individual professionals and institutions. However, there has been ongoing debate about the reliability and validity of their…

  1. Defining Primary Care Shortage Areas: Do GIS-based Measures Yield Different Results?

    PubMed

    Daly, Michael R; Mellor, Jennifer M; Millones, Marco

    2018-02-12

    To examine whether geographic information systems (GIS)-based physician-to-population ratios (PPRs) yield determinations of geographic primary care shortage areas that differ from those based on bounded-area PPRs like those used in the Health Professional Shortage Area (HPSA) designation process. We used geocoded data on primary care physician (PCP) locations and census block population counts from 1 US state to construct 2 shortage area indicators. The first is a bounded-area shortage indicator defined without GIS methods; the second is a GIS-based measure that measures the populations' spatial proximity to PCP locations. We examined agreement and disagreement between bounded shortage areas and GIS-based shortage areas. Bounded shortage area indicators and GIS-based shortage area indicators agree for the census blocks where the vast majority of our study populations reside. Specifically, 95% and 98% of the populations in our full and urban samples, respectively, reside in census blocks where the 2 indicators agree. Although agreement is generally high in rural areas (ie, 87% of the rural population reside in census blocks where the 2 indicators agree), agreement is significantly lower compared to urban areas. One source of disagreement suggests that bounded-area measures may "overlook" some shortages in rural areas; however, other aspects of the HPSA designation process likely mitigate this concern. Another source of disagreement arises from the border-crossing problem, and it is more prevalent. The GIS-based PPRs we employed would yield shortage area determinations that are similar to those based on bounded-area PPRs defined for Primary Care Service Areas. Disagreement rates were lower than previous studies have found. © 2018 National Rural Health Association.
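
    The contrast between the two shortage indicators can be sketched as follows; the coordinates, populations, and 10 km radius are invented, and a real analysis would use geocoded road or great-circle distances rather than planar ones:

```python
import math

# Hypothetical inputs: census-block centroids with populations, and
# primary care physician (PCP) practice locations, both as (x, y) in km.
blocks = [((0.0, 0.0), 1200), ((5.0, 0.0), 800), ((40.0, 0.0), 500)]
pcps = [(1.0, 0.0), (4.0, 0.0)]

def gis_ppr(blocks, pcps, radius_km=10.0):
    """Proximity-based physician-to-population ratio per census block.

    Unlike a bounded-area PPR (physicians inside an administrative
    boundary divided by the population inside it), this counts the
    physicians within `radius_km` of each block centroid, so access
    can cross administrative borders.
    """
    out = []
    for (bx, by), pop in blocks:
        near = sum(1 for (px, py) in pcps
                   if math.hypot(px - bx, py - by) <= radius_km)
        out.append(near / pop if pop else 0.0)
    return out

ratios = gis_ppr(blocks, pcps)
# The remote third block has no PCP within 10 km: a proximity-based shortage
print([round(r, 5) for r in ratios])   # → [0.00167, 0.0025, 0.0]
```

    A bounded-area PPR drawn around all three blocks would average the two PCPs over the whole population and could mask the third block's shortage, which is the border-crossing problem the record describes.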

  2. 75 FR 71613 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... Reliability Standards. The proposed Reliability Standards were designed to prevent instability, uncontrolled... Reliability Standards.\\2\\ The proposed Reliability Standards were designed to prevent instability... the SOLs, which if exceeded, could expose a widespread area of the bulk electric system to instability...

  3. A comprehensively quantitative method of evaluating the impact of drought on crop yield using daily multi-scale SPEI and crop growth process model.

    PubMed

    Wang, Qianfeng; Wu, Jianjun; Li, Xiaohan; Zhou, Hongkui; Yang, Jianhua; Geng, Guangpo; An, Xueli; Liu, Leizhen; Tang, Zhenghong

    2017-04-01

    The quantitative evaluation of the impact of drought on crop yield is one of the most important aspects of agricultural water resource management. To assess the impact of drought on wheat yield, the Environmental Policy Integrated Climate (EPIC) crop growth model and the daily Standardized Precipitation Evapotranspiration Index (SPEI), which is based on daily meteorological data, are adopted in the Huang Huai Hai Plain. The winter wheat crop yields are estimated at 28 stations, after calibrating the cultivar coefficients based on the experimental site data, and SPEI data was taken 11 times across the growth season from 1981 to 2010. The relationship between estimated yield and multi-scale SPEI was analyzed, and the optimum time scale SPEI for monitoring drought during the crop growth period was determined. The reference yield was determined by averaging the yields from numerous non-drought years. From this data, we propose a comprehensive quantitative method which can be used to predict the impact of drought on wheat yields by combining the daily multi-scale SPEI and a crop growth process model. This method was tested in the Huang Huai Hai Plain. The results suggested that the calibrated EPIC model was a good predictor of crop yield in the Huang Huai Hai Plain, with a low RMSE (15.4%) between estimated yield and observed yield at six agrometeorological stations. The soil moisture at planting time was affected by the precipitation and evapotranspiration during the previous 90 days (about 3 months) in the Huang Huai Hai Plain. SPEI G90 was adopted as the optimum time scale SPEI to identify the drought and non-drought years, and identified a drought year in 2000. The water deficit in the year 2000 was significant, and the rate of crop yield reduction did not completely correspond with the volume of water deficit. Our proposed comprehensive method which quantitatively evaluates the impact of drought on crop yield is reliable. The results of this study further our
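
A multi-scale index of this kind accumulates a daily climatic water balance (precipitation minus reference evapotranspiration) over a trailing window, 90 days in the case of the SPEI G90 scale named above. A hedged sketch of that idea, with a plain z-score standing in for SPEI's distribution fitting and synthetic data in place of station records:

```python
# Sketch of the idea behind a multi-scale daily drought index (not the
# authors' implementation): trailing-window sums of daily P - PET,
# then standardization. A z-score replaces SPEI's log-logistic fit.
from statistics import mean, stdev

def rolling_balance(precip, pet, window=90):
    """Trailing-window sums of daily P - PET; None until the window fills."""
    daily = [p - e for p, e in zip(precip, pet)]
    out = []
    for i in range(len(daily)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(daily[i + 1 - window : i + 1]))
    return out

def standardize(values):
    """Simple z-scores as a stand-in for SPEI's distribution fitting."""
    obs = [v for v in values if v is not None]
    mu, sd = mean(obs), stdev(obs)
    return [None if v is None else (v - mu) / sd for v in values]

# Synthetic daily series (mm), invented purely for illustration.
precip = [float((day * 7) % 13) for day in range(200)]
pet = [6.0] * 200
spei_like = standardize(rolling_balance(precip, pet, window=90))
```

Negative values of such an index over the growing season would mark the drought years to compare against the non-drought reference yield.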

  4. Genotypic Variation in Yield, Yield Components, Root Morphology and Architecture, in Soybean in Relation to Water and Phosphorus Supply

    PubMed Central

    He, Jin; Jin, Yi; Du, Yan-Lei; Wang, Tao; Turner, Neil C.; Yang, Ru-Ping; Siddique, Kadambot H. M.; Li, Feng-Min

    2017-01-01

    Water shortage and low phosphorus (P) availability limit yields in soybean. Roots play important roles in water-limited and P-deficient environments, but the underlying mechanisms are largely unknown. In this study we determined the responses of four soybean [Glycine max (L.) Merr.] genotypes [Huandsedadou (HD), Bailudou (BLD), Jindou 21 (J21), and Zhonghuang 30 (ZH)] to three P levels [0 (P0), 60 (P60), and 120 (P120) mg P kg-1 dry soil applied to the upper 0.4 m of the soil profile] and two water treatments [well-watered (WW) and water-stressed (WS)], with special reference to root morphology and architecture. We compared yield and its components, root morphology, and root architecture to find out which variety and/or what kind of root architecture produced high grain yield under P and drought stress. The results showed that water stress and low P, respectively, significantly reduced grain yield by 60 and 40%, daily water use by 66 and 31%, P accumulation by 40 and 80%, and N accumulation by 39 and 65%. The cultivar ZH, with the lowest daily water use, had the highest grain yield at P60 and P120 under drought. Increased root length was positively associated with N and P accumulation in both the WW and WS treatments, but not with grain yield under water and P deficits. However, in the WS treatment, high adventitious and lateral root densities were associated with high N and P uptake per unit root length, which in turn was significantly and positively associated with grain yield. Our results suggest that (1) genetic variation in grain yield, daily water use, P and N accumulation, and root morphology and architecture was observed among the soybean cultivars, and ZH had the best yield performance under P- and water-limited conditions; (2) water has a major influence on nutrient uptake and grain yield, while additional P supply can modestly increase yields under drought in some soybean genotypes; (3) while conserved water use plays an important role in grain yield under drought

  6. Process gg → h0 → γγ in the Lee-Wick standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krauss, F.; Underwood, T. E. J.; Zwicky, R.

    2008-01-01

    The process gg → h0 → γγ is studied in the Lee-Wick extension of the standard model (LWSM) proposed by Grinstein, O'Connell, and Wise. In this model, negative-norm partners for each SM field are introduced with the aim of canceling quadratic divergences in the Higgs mass. All sectors of the model relevant to gg → h0 → γγ are diagonalized, and the results are commented on from the perspective of both the Lee-Wick and higher-derivative formalisms. Deviations from the SM rate for gg → h0 are found to be of the order of 15%-5% for Lee-Wick masses in the range 500-1000 GeV. Effects on the rate for h0 → γγ are smaller, of the order of 5%-1% for Lee-Wick masses in the same range. These comparatively small changes may well provide a means of distinguishing the LWSM from other models such as universal extra dimensions, where same-spin partners to standard model fields also appear. Corrections to determinations of Cabibbo-Kobayashi-Maskawa (CKM) elements |V_t(b,s,d)| are also considered and are shown to be positive, allowing the possibility of measuring a CKM element larger than unity, a characteristic signature of the ghostlike nature of the Lee-Wick fields.

  7. Reliability Growth and Its Applications to Dormant Reliability

    DTIC Science & Technology

    1981-12-01

    ...ability to make projections about future reliability (Ref 9:41-42). Barlow and Scheuer Model. Richard E. Barlow and Ernest M. Scheuer, of the University... "Reliability Growth Prediction Models," Operations Research, 18(1):52-65 (January/February 1970). 7. Bauer, John, William Hadley, and Robert Dietz... Texarkana, Texas, May 1973. (AD 768 119). 10. Bonis, Austin J. "Reliability Growth Curves for One Shot Devices," Proceedings 1977 Annual Reliability and

  8. Atmospheric Fluorescence Yield

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.; Christl, M. J.; Fountain, W. F.; Gregory, J. C.; Martens, K.; Sokolsky, P.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Several existing and planned experiments estimate the energies of ultra-high energy cosmic rays from air showers using the atmospheric fluorescence from these showers. Accurate knowledge of the conversion from atmospheric fluorescence to energy loss by ionizing particles in the atmosphere is key to this technique. In this paper we discuss a small balloon-borne instrument to make the first in situ measurements versus altitude of the atmospheric fluorescence yield. The instrument can also be used in the lab to investigate the dependence of the fluorescence yield in air on temperature, pressure, and the concentrations of other gases present in the atmosphere. The results can be used to explore environmental effects on, and improve the accuracy of, cosmic ray energy measurements for existing ground-based experiments and future space-based experiments.

  9. Electron-induced electron yields of uncharged insulating materials

    NASA Astrophysics Data System (ADS)

    Hoffmann, Ryan Carl

    Presented here are electron-induced electron yield measurements from high-resistivity, high-yield materials to support a model for the yield of uncharged insulators. These measurements are made using a low-fluence, pulsed electron beam and charge neutralization to minimize charge accumulation. They show charging-induced changes in the total yield of as much as 75%, even for incident electron fluences of <3 fC/mm2, when compared with the uncharged yield. The evolution of the yield as charge accumulates in the material is described in terms of electron recapture, based on the extended Chung and Everhart model of the electron emission spectrum and the dual dynamic layer model for internal charge distribution. This model is used to explain charge-induced total yield modification measured in high-yield ceramics, and to provide a method for determining the electron yield of uncharged, highly insulating, high-yield materials. A sequence of materials with progressively greater charge susceptibility is presented. This series starts with a low-yield Kapton derivative called CP1, then considers a moderate-yield material, Kapton HN, and ends with a high-yield ceramic, polycrystalline aluminum oxide. The applicability of conductivity (both radiation-induced conductivity (RIC) and dark current conductivity) to the yield is addressed. The relevance of these results to spacecraft charging is also discussed.

  10. Specific energy yield comparison between crystalline silicon and amorphous silicon based PV modules

    NASA Astrophysics Data System (ADS)

    Ferenczi, Toby; Stern, Omar; Hartung, Marianne; Mueggenburg, Eike; Lynass, Mark; Bernal, Eva; Mayer, Oliver; Zettl, Marcus

    2009-08-01

    As emerging thin-film PV technologies continue to penetrate the market and the number of utility-scale installations increases substantially, a detailed understanding of the performance of the various PV technologies becomes more important. An accurate database for each technology is essential for precise project planning, energy yield prediction, and project financing. However, recent publications have shown that it is very difficult to obtain accurate and reliable performance data for these technologies. This paper evaluates previously reported claims that amorphous silicon-based PV modules have a higher annual energy yield, relative to their rated performance, than crystalline silicon modules. In order to acquire a detailed understanding of this effect, outdoor module tests were performed at the GE Global Research Center in Munich. In this study we closely examine two of the five reported factors that contribute to the enhanced energy yield of amorphous silicon modules. We find evidence to support each of these factors and evaluate their relative significance. We discuss aspects for improvement in how PV modules are sold and identify areas for further study.

  11. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  13. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
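
The core idea behind importance sampling for small failure probabilities can be illustrated in a few lines. This is a minimal fixed-density sketch, not the adaptive (AIS) algorithm the paper proposes; the limit state and the shift of the sampling density are invented for the example:

```python
# Importance-sampling sketch (not the paper's AIS method): estimate a
# small failure probability by sampling from a density shifted toward
# the failure region, reweighting each failed sample by the likelihood
# ratio p(x)/q(x). Limit state and shift are hypothetical.
import math
import random

def failure(x):
    # Hypothetical limit state: failure when a standard normal load exceeds 4.
    return x > 4.0

def norm_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def is_estimate(n=100_000, shift=4.0, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)                       # sample near the failure domain
        if failure(x):
            total += norm_pdf(x) / norm_pdf(x, mu=shift)  # likelihood ratio weight
    return total / n

pf = is_estimate()  # true value is the normal tail probability 1 - Phi(4), about 3.2e-5
```

Crude Monte Carlo with the same 100,000 samples would see roughly three failures on average; the shifted density concentrates samples where failures occur, which is the motivation for adaptively growing the sampling domain toward the failure domain.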

  14. Cross-cultural Adaptation, Reliability, and Validity of the Yoruba Version of the Roland-Morris Disability Questionnaire.

    PubMed

    Mbada, Chidozie Emmanuel; Idowu, Opeyemi Ayodiipo; Ogunjimi, Olawale Richard; Ayanniyi, Olusola; Orimolade, Elkanah Ayodele; Oladiran, Ajibola Babatunde; Johnson, Olubusola Esther; Akinsulore, Adesanmi; Oni, Temitope Olawale

    2017-04-01

    A translation, cross-cultural adaptation, and psychometric analysis. The aim of this study was to translate, cross-culturally adapt, and validate the Yoruba version of the RMDQ. The Roland-Morris Disability Questionnaire (RMDQ) is a valid outcome tool for low back pain (LBP) in clinical and research settings. There seems to be no valid and reliable version of the RMDQ in the Nigerian languages. Following the Guillemin criteria, the English version of the RMDQ was forward- and back-translated. Two Yoruba translated versions of the RMDQ were assessed for clarity, common language usage, and conceptual equivalence. Consequently, a harmonized Yoruba version was produced and pilot-tested among 20 patients with nonspecific long-term LBP (NSLBP) for cognitive debriefing. The final version of the Yoruba RMDQ was tested for its construct validity and test-retest reliability among 120 and 87 patients with NSLBP, respectively. A Pearson product-moment correlation coefficient (r) of 0.82 was obtained for the reliability of the Yoruba version of the RMDQ. The test-retest reliability of the Yoruba RMDQ yielded a Cronbach's alpha of 0.932, while the intraclass correlation coefficient (ICC) ranged between 0.896 and 0.956. The analysis of the global scores of both the English and Yoruba versions of the RMDQ yielded an ICC of 0.995 (95% confidence interval 0.996-0.997), with the item-by-item kappa agreement ranging between 0.824 and 1.000. The external validity of the RMDQ against the Quadruple Visual Analogue Scale was r = -0.596 (P = 0.001). The Yoruba version of the RMDQ had no floor/ceiling effects, as no patient achieved either the maximum or the minimum possible score. The Yoruba version of the RMDQ has excellent reliability and validity and may be an appropriate outcome tool for clinical and research purposes among Yoruba-speaking patients with LBP.

  15. Flight control electronics reliability/maintenance study

    NASA Technical Reports Server (NTRS)

    Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.

    1977-01-01

    The collection and analysis of data concerning the reliability and maintenance experience of flight control system electronics currently in use on passenger-carrying jet aircraft are reported. The B-747 fleets of two airlines were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and maintenance costs associated with the flight control electronics.

  16. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    NASA Astrophysics Data System (ADS)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

    The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now widely applied, especially in the automotive industry, taking advantage of having both the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the development of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices that is based on the theory of reliability-based robust design optimization. It takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each design configuration checked, the assessment of uncertainty propagation is performed with a meta-modeling technique. The procedure is illustrated with an example of optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertain domains. Genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function while simultaneously satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.

  17. Validity and reliability of the Persian version of mobile phone addiction scale.

    PubMed

    Mazaheri, Maryam Amidi; Karbasi, Mojtaba

    2014-02-01

    With regard to the large number of mobile phone users, especially among college students in Iran, mobile phone addiction is attracting increasing concern, and there is an urgent need for a reliable and valid instrument to measure this phenomenon. This study examines the validity and reliability of the Persian version of the mobile phone addiction scale (PMPAS) in college students. This methodological study was done at Isfahan University of Medical Sciences. One thousand one hundred and eighty students were selected by convenience sampling. The English version of the MPAI questionnaire was translated into Persian with the approach of Jones et al. (Challenges in language, culture, and modality: Translating English measures into American Sign Language. Nurs Res 2006; 55: 75-81). Its reliability was tested by Cronbach's alpha, and its dimensionality was evaluated using Pearson correlation coefficients with other measures of mobile phone use and the IAT. Construct validity was evaluated using exploratory subscale analysis. A Cronbach's alpha of 0.86 was obtained for the total PMPAS; alphas of 0.84, 0.81, and 0.77 were obtained for subscale 1 (eight items), subscale 2 (five items), and subscale 3 (two items), respectively. There were significantly positive correlations between the PMPAS score and the IAT (r = 0.453, P < 0.001) and other measures of mobile phone use. Principal component analysis yielded a three-subscale structure (inability to control craving; feeling anxious and lost; mood improvement) that accounted for 60.57% of the total variance. The results for discriminant validity showed that all items' correlations with their related subscale were greater than 0.5 and their correlations with unrelated subscales were less than 0.5. Considering the lack of a valid and reliable questionnaire for measuring mobile phone addiction, the PMPAS could be a suitable instrument for measuring mobile phone addiction in future research.
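
Cronbach's alpha, the internal-consistency statistic reported in this record and the one above, can be computed directly from a respondents-by-items score matrix; the data below are invented for illustration:

```python
# Cronbach's alpha from a respondents-by-items matrix:
# alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
# The score matrix below is made up for the example.

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1.0 - sum(item_vars) / total_var)

data = [
    [4, 5, 4], [3, 3, 3], [5, 5, 4],
    [2, 2, 3], [4, 4, 5], [1, 2, 2],
]
alpha = cronbach_alpha(data)  # high: items rise and fall together
```

When items covary strongly, the variance of the total scores dwarfs the sum of the item variances and alpha approaches 1; uncorrelated items drive it toward 0.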

  18. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data and may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Non-parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  19. Care initiation area yields dramatic results.

    PubMed

    2009-03-01

    The ED at Gaston Memorial Hospital in Gastonia, NC, has achieved dramatic results in key department metrics with a Care Initiation Area (CIA) and a physician in triage. Here's how the ED arrived at this winning solution: Leadership was trained in and implemented the Kaizen method, which eliminates redundant or inefficient process steps. Simulation software helped determine additional space needed by analyzing arrival patterns and other key data. After only two days of meetings, new ideas were implemented and tested.

  20. The Americleft Speech Project: A Training and Reliability Study

    PubMed Central

    Chapman, Kathy L.; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2017-01-01

    Objective To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. Design The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech–Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. Participants The participants were speech-language pathologists from the Americleft Speech Project. Results In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. Conclusion The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function. PMID:25531738
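
The kappa artifact noted in this record, where percent agreement stays high while kappa collapses because the sample lacks variability, is easy to demonstrate; the ratings below are invented:

```python
# Demonstration of the statistical point above: with almost all ratings
# in one category, chance agreement is nearly total, so Cohen's kappa
# collapses even though raw percent agreement is high. Ratings invented.

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = percent_agreement(a, b)                                   # observed
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)    # chance
    return (po - pe) / (1.0 - pe)

# Nearly all ratings fall in category 0; the raters disagree on one case.
r1 = [0] * 19 + [1]
r2 = [0] * 20
```

Here agreement is 95% yet kappa is 0, because chance agreement is also 95%; this is why the study supplements kappa with percent agreement for low-variability samples.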

  1. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
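
The kind of calculation described above can be sketched with constant failure rates, where component reliability over mission time t is R(t) = exp(-t/MTBF), functions combine in series, and spares act roughly like parallel redundancy. The MTBF values below are invented, not the paper's ISS database figures:

```python
# Constant-failure-rate sketch of a series life support system with
# redundancy (not the paper's model; MTBFs below are hypothetical).
import math

def component_reliability(mtbf_hours, mission_hours):
    """R(t) = exp(-t / MTBF) under a constant failure rate."""
    return math.exp(-mission_hours / mtbf_hours)

def parallel(r, n):
    """n redundant units; the function survives if any one unit works."""
    return 1.0 - (1.0 - r) ** n

def series(rs):
    """All functions must work for the system to work."""
    out = 1.0
    for r in rs:
        out *= r
    return out

mission = 24 * 365                       # 1-year mission, in hours
mtbfs = [8_000.0, 12_000.0, 20_000.0]    # hypothetical subsystem MTBFs

single_string = series(component_reliability(m, mission) for m in mtbfs)
with_spares = series(parallel(component_reliability(m, mission), 2) for m in mtbfs)
```

Even this toy example shows the paper's central point: a single-string system built from components whose MTBFs are comparable to the mission length has very low reliability, and every added serial function pushes the redundancy (and hence mass) of the others upward.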

  2. Reliability of EEG Measures of Interaction: A Paradigm Shift Is Needed to Fight the Reproducibility Crisis

    PubMed Central

    Höller, Yvonne; Uhl, Andreas; Bathke, Arne; Thomschewski, Aljoscha; Butz, Kevin; Nardone, Raffaele; Fell, Jürgen; Trinka, Eugen

    2017-01-01

    Measures of interaction (connectivity) of the EEG are at the forefront of current neuroscientific research. Unfortunately, test-retest reliability can be very low, depending on the measure and its estimation, the EEG-frequency of interest, the length of the signal, and the population under investigation. In addition, artifacts can hamper the continuity of the EEG signal, and in some clinical situations it is impractical to exclude artifacts. We aimed to examine factors that moderate test-retest reliability of measures of interaction. The study involved 40 patients with a range of neurological diseases and memory impairments (age median: 60; range 21–76; 40% female; 22 mild cognitive impairment, 5 subjective cognitive complaints, 13 temporal lobe epilepsy), and 20 healthy controls (age median: 61.5; range 23–74; 70% female). We calculated 14 measures of interaction based on the multivariate autoregressive model from two EEG-recordings separated by 2 weeks. We characterized test-retest reliability by correlating the measures between the two EEG-recordings for variations of data length, data discontinuity, artifact exclusion, model order, and frequency over all combinations of channels and all frequencies, individually for each subject, yielding a correlation coefficient for each participant. Excluding artifacts had strong effects on reliability of some measures, such as classical, real valued coherence (~0.1 before, ~0.9 after artifact exclusion). Full frequency directed transfer function was highly reliable and robust against artifacts. Variation of data length decreased reliability in relation to poor adjustment of model order and signal length. Variation of discontinuity had no effect, but reliabilities were different between model orders, frequency ranges, and patient groups depending on the measure. Pathology did not interact with variation of signal length or discontinuity. Our results emphasize the importance of documenting reliability, which may vary

  3. A reliability analysis of the revised competitiveness index.

    PubMed

    Harris, Paul B; Houston, John M

    2010-06-01

    This study examined the reliability of the Revised Competitiveness Index by investigating the test-retest reliability, interitem reliability, and factor structure of the measure based on a sample of 280 undergraduates (200 women, 80 men) ranging in age from 18 to 28 years (M = 20.1, SD = 2.1). The findings indicate that the Revised Competitiveness Index has high test-retest reliability, high inter-item reliability, and a stable factor structure. The results support the assertion that the Revised Competitiveness Index assesses competitiveness as a stable trait rather than a dynamic state.

  4. The yield and post-yield behavior of high-density polyethylene

    NASA Technical Reports Server (NTRS)

    Semeliss, M. A.; Wong, R.; Tuttle, M. E.

    1990-01-01

    An experimental and analytical evaluation was made of the yield and post-yield behavior of high-density polyethylene, a semi-crystalline thermoplastic. Polyethylene was selected for study because it is very inexpensive and readily available in the form of thin-walled tubes. Thin-walled tubular specimens were subjected to axial loads and internal pressures, such that the specimens were subjected to a known biaxial loading. A constant octahedral shear stress rate was imposed during all tests. The measured yield and post-yield behavior was compared with predictions based on both isotropic and anisotropic models. Of particular interest was whether inelastic behavior was sensitive to the hydrostatic stress level. The major achievements and conclusions reached are discussed.

  5. Reliability analysis of interdependent lattices

    NASA Astrophysics Data System (ADS)

    Limiao, Zhang; Daqing, Li; Pengju, Qin; Bowen, Fu; Yinan, Jiang; Zio, Enrico; Rui, Kang

    2016-06-01

    Network reliability analysis has drawn much attention recently due to the risks of catastrophic damage in networked infrastructures. These infrastructures are dependent on each other as a result of various interactions. However, most of the reliability analyses of these interdependent networks do not consider spatial constraints, which are found important for robustness of infrastructures including power grid and transport systems. Here we study the reliability properties of interdependent lattices with different ranges of spatial constraints. Our study shows that interdependent lattices with strong spatial constraints are more resilient than interdependent Erdös-Rényi networks. There exists an intermediate range of spatial constraints, at which the interdependent lattices have minimal resilience.

  6. B-52 stability augmentation system reliability

    NASA Technical Reports Server (NTRS)

    Bowling, T. C.; Key, L. W.

    1976-01-01

    The B-52 SAS (Stability Augmentation System) was developed and retrofitted to nearly 300 aircraft. It actively controls B-52 structural bending, provides improved yaw and pitch damping through sensors and electronic control channels, and puts complete reliance on hydraulic control power for rudder and elevators. The system has experienced over 300,000 flight hours and has exhibited service reliability comparable to the results of the reliability test program. Development experience points out numerous lessons with potential application in the mechanization and development of advanced technology control systems of high reliability.

  7. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    ERIC Educational Resources Information Center

    Fitzgibbons, Megan; Meert, Deborah

    2010-01-01

    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  8. An adapted yield criterion for the evolution of subsequent yield surfaces

    NASA Astrophysics Data System (ADS)

    Küsters, N.; Brosius, A.

    2017-09-01

    In numerical analysis of sheet metal forming processes, the anisotropic material behaviour is often modelled with isotropic work hardening and an average Lankford coefficient. In contrast, experimental observations show an evolution of the Lankford coefficients, which can be associated with a yield surface change due to kinematic and distortional hardening. Commonly, extensive efforts are carried out to describe these phenomena. In this paper an isotropic material model based on the Yld2000-2d criterion is adapted with an evolving yield exponent in order to change the yield surface shape. The yield exponent is linked to the accumulative plastic strain. This change has the effect of a rotating yield surface normal. As the normal is directly related to the Lankford coefficient, the change can be used to model the evolution of the Lankford coefficient during yielding. The paper will focus on the numerical implementation of the adapted material model for the FE-code LS-Dyna, mpi-version R7.1.2-d. A recently introduced identification scheme [1] is used to obtain the parameters for the evolving yield surface and will be briefly described for the proposed model. The suitability for numerical analysis will be discussed for deep drawing processes in general. Efforts for material characterization and modelling will be compared to other common yield surface descriptions. Besides experimental efforts and achieved accuracy, the potential of flexibility in material models and the risk of ambiguity during identification are of major interest in this paper.

  9. Developing an automated database for monitoring ultrasound- and computed tomography-guided procedure complications and diagnostic yield.

    PubMed

    Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M

    2014-04-01

    Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
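    The report logic described above (complication rate and diagnostic yield by site and by operator) reduces to grouped ratios over procedure-log rows. A hedged Python sketch of that aggregation, with an invented record layout standing in for the RIS-fed MySQL table:

```python
from collections import defaultdict

# Invented procedure-log rows: (site, operator, had_complication, was_diagnostic)
records = [
    ("CT", "reader_a", False, True),
    ("CT", "reader_a", True,  True),
    ("US", "reader_b", False, False),
    ("US", "reader_b", False, True),
]

def rates_by(rows, key_index):
    """Complication rate and diagnostic yield grouped by one key column."""
    totals = defaultdict(lambda: [0, 0, 0])  # [n, complications, diagnostic]
    for row in rows:
        t = totals[row[key_index]]
        t[0] += 1
        t[1] += row[2]   # bool counts as 0/1
        t[2] += row[3]
    return {k: {"complication_rate": c / n, "diagnostic_yield": d / n}
            for k, (n, c, d) in totals.items()}

by_site = rates_by(records, 0)      # group by site; use index 1 for operator
```

    Grouping by index 1 instead of 0 yields the per-operator report; the real application performs the equivalent grouping in SQL against the auto-populated table.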

  10. Test-re-test reliability and inter-rater reliability of a digital pelvic inclinometer in young, healthy males and females.

    PubMed

    Beardsley, Chris; Egerton, Tim; Skinner, Brendon

    2016-01-01

    Objective. The purpose of this study was to investigate the reliability of a digital pelvic inclinometer (DPI) for measuring sagittal plane pelvic tilt in 18 young, healthy males and females. Method. The inter-rater reliability and test-re-test reliabilities of the DPI for measuring pelvic tilt in standing on both the right and left sides of the pelvis were measured by two raters carrying out two rating sessions of the same subjects, three weeks apart. Results. For measuring pelvic tilt, inter-rater reliability was designated as good on both sides (ICC = 0.81-0.88), test-re-test reliability within a single rating session was designated as good on both sides (ICC = 0.88-0.95), and test-re-test reliability between two rating sessions was designated as moderate on the left side (ICC = 0.65) and good on the right side (ICC = 0.85). Conclusion. Inter-rater reliability and test-re-test reliability within a single rating session of the DPI in measuring pelvic tilt were both good, while test-re-test reliability between rating sessions was moderate-to-good. Caution is required regarding the interpretation of the test-re-test reliability within a single rating session, as the raters were not blinded. Further research is required to establish validity.

  11. Comparison of PHITS, GEANT4, and HIBRAC simulations of depth-dependent yields of β(+)-emitting nuclei during therapeutic particle irradiation to measured data.

    PubMed

    Rohling, Heide; Sihver, Lembit; Priegnitz, Marlen; Enghardt, Wolfgang; Fiedler, Fine

    2013-09-21

    For quality assurance in particle therapy, a non-invasive, in vivo range verification is highly desired. Particle therapy positron-emission-tomography (PT-PET) is the only clinically proven method up to now for this purpose. It makes use of the β(+)-activity produced during the irradiation by the nuclear fragmentation processes between the therapeutic beam and the irradiated tissue. Since a direct comparison of β(+)-activity and dose is not feasible, a simulation of the expected β(+)-activity distribution is required. For this reason it is essential to have a quantitatively reliable code for the simulation of the yields of the β(+)-emitting nuclei at every position of the beam path. In this paper results of the three-dimensional Monte-Carlo simulation codes PHITS, GEANT4, and the one-dimensional deterministic simulation code HIBRAC are compared to measurements of the yields of the most abundant β(+)-emitting nuclei for carbon, lithium, helium, and proton beams. In general, PHITS underestimates the yields of positron-emitters. With GEANT4 the overall most accurate results are obtained. HIBRAC and GEANT4 provide comparable results for carbon and proton beams. HIBRAC is considered as a good candidate for the implementation to clinical routine PT-PET.

  12. Comparison of PHITS, GEANT4, and HIBRAC simulations of depth-dependent yields of β+-emitting nuclei during therapeutic particle irradiation to measured data

    NASA Astrophysics Data System (ADS)

    Rohling, Heide; Sihver, Lembit; Priegnitz, Marlen; Enghardt, Wolfgang; Fiedler, Fine

    2013-09-01

    For quality assurance in particle therapy, a non-invasive, in vivo range verification is highly desired. Particle therapy positron-emission-tomography (PT-PET) is the only clinically proven method up to now for this purpose. It makes use of the β+-activity produced during the irradiation by the nuclear fragmentation processes between the therapeutic beam and the irradiated tissue. Since a direct comparison of β+-activity and dose is not feasible, a simulation of the expected β+-activity distribution is required. For this reason it is essential to have a quantitatively reliable code for the simulation of the yields of the β+-emitting nuclei at every position of the beam path. In this paper results of the three-dimensional Monte-Carlo simulation codes PHITS, GEANT4, and the one-dimensional deterministic simulation code HIBRAC are compared to measurements of the yields of the most abundant β+-emitting nuclei for carbon, lithium, helium, and proton beams. In general, PHITS underestimates the yields of positron-emitters. With GEANT4 the overall most accurate results are obtained. HIBRAC and GEANT4 provide comparable results for carbon and proton beams. HIBRAC is considered as a good candidate for the implementation to clinical routine PT-PET.

  13. Performance and reliability enhancement of linear coolers

    NASA Astrophysics Data System (ADS)

    Mai, M.; Rühlich, I.; Schreiter, A.; Zehner, S.

    2010-04-01

    Highest efficiency is a crucial requirement for modern tactical IR cryocooling systems. To enhance overall efficiency, AIM cryocooler designs were reassessed considering all relevant loss mechanisms and associated components. The investigation was based on state-of-the-art simulation software featuring magnet circuitry analysis as well as computational fluid dynamics (CFD) to realistically replicate thermodynamic interactions. As a result, an improved design for AIM linear coolers could be derived. This paper gives an overview of the performance enhancement activities and major results. An additional key requirement for cryocoolers is reliability. AIM has recently introduced linear coolers with full Flexure Bearing suspension on both ends of the driving mechanism, incorporating a Moving Magnet piston drive. In conjunction with a Pulse-Tube coldfinger, these coolers are capable of meeting MTTFs (Mean Time To Failure) in excess of 50,000 hours, offering superior reliability for space applications. Ongoing development also focuses on reliability enhancement, carrying space technology over into tactical solutions that combine excellent specific performance with space-like reliability. This publication summarizes the progress of the reliability program and gives a further outlook.

  14. Wind Power Reliability Research | Wind | NREL

    Science.gov Websites

    Reliability Collaborative fact sheet. Wind Turbine Blade Reliability Wind turbine blade failures are an extremely rare occurrence, but when they do happen, the results can be catastrophic. For this reason, blade manufacturers require tests of blade properties, static mechanical tests, and fatigue tests to certify wind

  15. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.

  16. 78 FR 41339 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ...] Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards AGENCY: Federal... Reliability Standards identified by the North American Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization. FOR FURTHER INFORMATION CONTACT: Kevin Ryan (Legal Information...

  17. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.

  18. Assimilating a synthetic Kalman filter leaf area index series into the WOFOST model to improve regional winter wheat yield estimation

    USDA-ARS?s Scientific Manuscript database

    The scale mismatch between remotely sensed observations and the state variables simulated by crop growth models decreases the reliability of crop yield estimates. To overcome this problem, we used a two-phase data assimilation approach: first we generated a complete leaf area index (LAI) time series by combin...

  19. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ] 61,213 (2011). \\2\\ The term ``Wide-Area...

  20. Study of the strong Σc → Λcπ, Σc* → Λcπ and Ξc* → Ξcπ decays in a nonrelativistic quark model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albertus, C.; Nieves, J.; Hernandez, E.

    We present results for the strong widths corresponding to the Σc → Λcπ, Σc* → Λcπ and Ξc* → Ξcπ decays. The calculations have been done in a nonrelativistic constituent quark model with wave functions that take advantage of the constraints imposed by heavy quark symmetry. The partial conservation of axial current hypothesis allows us to determine the strong vertices from an analysis of the axial current matrix elements. Our results Γ(Σc⁺⁺ → Λc⁺π⁺) = 2.41 ± 0.07 ± 0.02 MeV, Γ(Σc⁺ → Λc⁺π⁰) = 2.79 ± 0.08 ± 0.02 MeV, Γ(Σc⁰ → Λc⁺π⁻) = 2.37 ± 0.07 ± 0.02 MeV, Γ(Σc*⁺⁺ → Λc⁺π⁺) = 17.52 ± 0.74 ± 0.12 MeV, Γ(Σc*⁺ → Λc⁺π⁰) = 17.31 ± 0.73 ± 0.12 MeV, Γ(Σc*⁰ → Λc⁺π⁻) = 16.90 ± 0.71 ± 0.12 MeV, Γ(Ξc*⁺ → Ξc⁰π⁺ + Ξc⁺π⁰) = 3.18 ± 0.10 ± 0.01 MeV, and Γ(Ξc*⁰ → Ξc⁺π⁻ + Ξc⁰π⁰) = 3.03 ± 0.10 ± 0.01 MeV are in good agreement with experimental determinations.

  1. RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for
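    The k-out-of-n group success probability that RELAV computes from equal item reliabilities is a cumulative binomial tail sum. A minimal sketch of that calculation (illustrative only, not the RELAV source, which also handles unequal probabilities and the level-by-level folding):

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n identical items (each with
    success probability p) are working: cumulative binomial tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Folding step: a 2-out-of-3 group of 0.95-reliable items collapses
# into one equivalent "component" for the next level up.
group_r = k_out_of_n_reliability(2, 3, 0.95)   # ≈ 0.9928
```

    Repeating this collapse from the most deeply embedded groups outward is the folding process the abstract describes, ending with a single system-level success probability.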

  2. Further constraints for the Plio-Pleistocene geomagnetic field strength: New results from the Los Tuxtlas volcanic field (Mexico)

    NASA Astrophysics Data System (ADS)

    Alva-Valdivia, L. M.; Goguitchaichvili, A.; Urrutia-Fucugauchi, J.

    2001-09-01

    A rock-magnetic, paleomagnetic and paleointensity study was carried out on 13 Plio-Pleistocene volcanic flows from the Los Tuxtlas volcanic field (Trans Mexican Volcanic Belt) in order to obtain some decisive constraints for the geomagnetic field strength during the Plio-Pleistocene time. The age of the volcanic units, which yielded reliable paleointensity estimates, lies between 2.2 and 0.8 Ma according to the available K/Ar radiometric data. Thermomagnetic investigations reveal that remanence is carried in most cases by Ti-poor titanomagnetite, resulting from oxy-exsolution that probably occurred during the initial flow cooling. Unblocking temperature spectra and relatively high coercivity point to 'small' pseudo-single domain magnetic grains for these (titano)magnetites. Single-component, linear demagnetization plots were observed in most cases. Six flows yield reverse polarity magnetization, five flows are normally magnetized, and one flow shows intermediate polarity magnetization. Evidence of a strong lightning-produced magnetization overprint was detected for one site. The mean pole position obtained in this study is Plat = 83.7°, Plong = 178.1°, K = 36, A95 = 8.1°, N =10 and the corresponding mean paleodirection is I = 31.3°, D = 352°, k = 37, a95 = 8.2°, which is not significantly different from the expected direction estimated from the North American apparent polar wander path. Thirty-nine samples were pre-selected for Thellier palaeointensity experiments because of their stable remanent magnetization and relatively weak-within-site dispersion. Only 21 samples, coming from four individual basaltic lava flows, yielded reliable paleointensity estimates with the flow-mean virtual dipole moments (VDM) ranging from 6.4 to 9.1 × 1022 Am2. Combining the coeval Mexican data with the available comparable quality Pliocene paleointensity results yield a mean VDM of 6.4 × 1022 Am2, which is almost 80% of the present geomagnetic axial dipole. Reliable

  3. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis result that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided-one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  4. Assessing the Measurement Properties of the Principal Instructional Management Rating Scale: A Meta-Analysis of Reliability Studies

    ERIC Educational Resources Information Center

    Hallinger, Phillip; Wang, Wen-Chung; Chen, Chia-Wen

    2013-01-01

    Background: In a recent article, Hallinger (2011b) reviewed 135 empirical studies that had employed the Principal Instructional Management Rating Scale (PIMRS) over the prior three decades. The author concluded that the PIMRS appeared to have attained a consistent record of yielding reliable and valid data on principal instructional leadership.…

  5. Probabilistic fatigue methodology for six nines reliability

    NASA Technical Reports Server (NTRS)

    Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf

    1990-01-01

    Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin which was established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are in the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods of defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.

  6. Benefits of seasonal forecasts of crop yields

    NASA Astrophysics Data System (ADS)

    Sakurai, G.; Okada, M.; Nishimori, M.; Yokozawa, M.

    2017-12-01

    Major factors behind recent fluctuations in food prices include increased biofuel production and oil price fluctuations. In addition, several extreme climate events that reduced worldwide food production coincided with upward spikes in food prices. The stabilization of crop yields is one of the most important tasks to stabilize food prices and thereby enhance food security. Recent development of technologies related to crop modeling and seasonal weather forecasting has made it possible to forecast future crop yields for maize and soybean. However, the effective use of these technologies remains limited. Here we present the potential benefits of seasonal crop-yield forecasts on a global scale for the choice of planting day. For this purpose, we used a model (PRYSBI-2) that replicates past crop yields well for both maize and soybean. This model system uses a Bayesian statistical approach to estimate the parameters of a basic process-based model of crop growth. The spatial variability of model parameters was considered by estimating the posterior distribution of the parameters from historical yield data by using the Markov-chain Monte Carlo (MCMC) method with a resolution of 1.125° × 1.125°. The posterior distributions of model parameters were estimated for each spatial grid with 30 000 MCMC steps of 10 chains each. By using this model and the estimated parameter distributions, we were able to estimate not only crop yield but also levels of associated uncertainty. We found that the global average crop yield increased by about 30% as a result of the optimal selection of planting day and that the seasonal forecast of crop yield had a large benefit in and near the eastern part of Brazil and India for maize and the northern area of China for soybean. In these countries, the effects of El Niño and the Indian Ocean dipole are large. The results highlight the importance of developing a system to forecast global crop yields.
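    The MCMC parameter estimation described above can be illustrated with a toy random-walk Metropolis sampler (stdlib only; the data, Gaussian likelihood, and step size below are invented for illustration and are unrelated to PRYSBI-2):

```python
import math
import random

random.seed(42)

# Toy stand-in for historical yield data, scattered around a "true" value of 3.0.
data = [3.1, 2.8, 3.3, 2.9, 3.0, 3.2]

def log_posterior(theta: float) -> float:
    """Gaussian likelihood (sigma = 0.3) with a flat prior on theta."""
    sigma = 0.3
    return -sum((y - theta) ** 2 for y in data) / (2 * sigma**2)

def metropolis(n_steps: int, start: float = 0.0, step: float = 0.2):
    """Random-walk Metropolis sampling of the posterior of theta."""
    chain, theta = [], start
    lp = log_posterior(theta)
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        lp_new = log_posterior(proposal)
        if math.log(random.random()) < lp_new - lp:   # accept/reject
            theta, lp = proposal, lp_new
        chain.append(theta)
    return chain

chain = metropolis(30_000)
posterior_mean = sum(chain[5_000:]) / len(chain[5_000:])  # discard burn-in
```

    The spread of the retained chain is what carries the yield-uncertainty information; the study runs the analogous (far larger) estimation independently for every 1.125° grid cell.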

  7. Over-expression of AtPAP2 in Camelina sativa leads to faster plant growth and higher seed yield

    PubMed Central

    2012-01-01

    Background Lipids extracted from seeds of Camelina sativa have been successfully used as a reliable source of aviation biofuels. This biofuel is environmentally friendly because the drought resistance, frost tolerance and low fertilizer requirement of Camelina sativa allow it to grow on marginal lands. Improving the species growth and seed yield by genetic engineering is therefore a target for the biofuels industry. In Arabidopsis, overexpression of purple acid phosphatase 2 encoded by Arabidopsis (AtPAP2) promotes plant growth by modulating carbon metabolism. Overexpression lines bolt earlier and produce 50% more seeds per plant than wild type. In this study, we explored the effects of overexpressing AtPAP2 in Camelina sativa. Results Under controlled environmental conditions, overexpression of AtPAP2 in Camelina sativa resulted in longer hypocotyls, earlier flowering, faster growth rate, higher photosynthetic rate and stomatal conductance, increased seed yield and seed size in comparison with the wild-type line and null-lines. Similar to transgenic Arabidopsis, activity of sucrose phosphate synthase in leaves of transgenic Camelina was also significantly up-regulated. Sucrose produced in photosynthetic tissues supplies the building blocks for cellulose, starch and lipids for growth and fuel for anabolic metabolism. Changes in carbon flow and sink/source activities in transgenic lines may affect floral, architectural, and reproductive traits of plants. Conclusions Lipids extracted from the seeds of Camelina sativa have been used as a major constituent of aviation biofuels. The improved growth rate and seed yield of transgenic Camelina under controlled environmental conditions have the potential to boost oil yield on an area basis in field conditions and thus make Camelina-based biofuels more environmentally friendly and economically attractive. PMID:22472516

  8. Reliability Generalization of the Alcohol Use Disorder Identification Test.

    ERIC Educational Resources Information Center

    Shields, Alan L.; Caruso, John C.

    2002-01-01

Evaluated the reliability of scores from the Alcohol Use Disorders Identification Test (AUDIT; J. Saunders and others, 1993) in a reliability generalization study based on 17 empirical journal articles. Results show AUDIT scores to be generally reliable for basic assessment. (SLD)

  9. Metrological Reliability of Medical Devices

    NASA Astrophysics Data System (ADS)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

The prominent development of health technologies in the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of biomeasurement results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with an analysis of their contributions to guaranteeing innovative health technologies' compliance with the main ethical pillars of Bioethics.

  10. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
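The first program elements amount to propagating assumed uncertainties in primitive variables through to a cumulative distribution function of a structural response and a failure probability. A minimal Monte Carlo sketch of that idea, with an illustrative load/strength limit state (all distributions are invented, not NASA data):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical primitive variables (illustrative distributions only):
load = rng.normal(100.0, 10.0, N)       # applied load, kN
strength = rng.normal(150.0, 15.0, N)   # component strength, kN

# Limit state g = R - S; failure when g < 0.
margin = strength - load
p_fail = float(np.mean(margin < 0.0))

# Points on the empirical CDF of the response variable:
q10, q50, q90 = np.percentile(margin, [10, 50, 90])
print(f"P(failure) ~ {p_fail:.4f}; margin CDF: 10%={q10:.1f}, 50%={q50:.1f}, 90%={q90:.1f}")
```

The same sampling loop generalizes to any response variable a probabilistic finite element analysis can evaluate.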

  11. The Clinical Research Tool: A High-Performance Microdialysis-Based System for Reliably Measuring Interstitial Fluid Glucose Concentration

    PubMed Central

    Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido

    2009-01-01

Background A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. Method In vitro characterization with buffered glucose solutions (c_glucose = 0–26 × 10⁻³ mol liter⁻¹) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min with periodic flow rate variation amounting to 24 ± 7%. The first 120 h of in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Results Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7%. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. Conclusion The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. PMID:20144284
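The MARE metric reported here is simply the mean of |measured − reference| / reference, expressed in percent. A small sketch with made-up paired readings (not the study's data):

```python
import numpy as np

def mare(measured, reference):
    """Mean absolute relative error, in percent, of paired readings."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(measured - reference) / reference) * 100.0)

# Illustrative paired glucose values (mg/dL), not data from the study:
sensor = [95.0, 110.0, 160.0, 82.0]
blood = [100.0, 100.0, 150.0, 80.0]
print(f"MARE = {mare(sensor, blood):.1f}%")
```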

  12. Mission Reliability Estimation for Repairable Robot Teams

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
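The redundancy-versus-repairability trade-off described above can be sketched with a toy series-system model; all reliability values below are illustrative assumptions, not figures from the study:

```python
# Sketch: compare a spare robot against spare components for a series system.

def robot_reliability(module_r, n_modules):
    """A robot works only if every module works (series system)."""
    return module_r ** n_modules

r_mod = 0.95
n = 5
r_robot = robot_reliability(r_mod, n)            # single-robot reliability

# Option A: two identical robots; the mission needs at least one to survive.
r_spare_robot = 1.0 - (1.0 - r_robot) ** 2

# Option B: one robot, but each module has a cold spare (a module position
# fails only if both copies fail) -- an idealized repairability model.
r_spare_modules = (1.0 - (1.0 - r_mod) ** 2) ** n

print(f"one robot:        {r_robot:.4f}")
print(f"spare robot:      {r_spare_robot:.4f}")
print(f"spare components: {r_spare_modules:.4f}")
```

In this toy model, module-level spares beat a whole spare robot for the same number of redundant parts, consistent with the abstract's observation that spares can cut cost while maintaining reliability.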

  13. Regional crop yield forecasting: a probabilistic approach

    NASA Astrophysics Data System (ADS)

    de Wit, A.; van Diepen, K.; Boogaard, H.

    2009-04-01

    Information on the outlook on yield and production of crops over large regions is essential for government services dealing with import and export of food crops, for agencies with a role in food relief, for international organizations with a mandate in monitoring the world food production and trade, and for commodity traders. Process-based mechanistic crop models are an important tool for providing such information, because they can integrate the effect of crop management, weather and soil on crop growth. When properly integrated in a yield forecasting system, the aggregated model output can be used to predict crop yield and production at regional, national and continental scales. Nevertheless, given the scales at which these models operate, the results are subject to large uncertainties due to poorly known weather conditions and crop management. Current yield forecasting systems are generally deterministic in nature and provide no information about the uncertainty bounds on their output. To improve on this situation we present an ensemble-based approach where uncertainty bounds can be derived from the dispersion of results in the ensemble. The probabilistic information provided by this ensemble-based system can be used to quantify uncertainties (risk) on regional crop yield forecasts and can therefore be an important support to quantitative risk analysis in a decision making process.
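Deriving uncertainty bounds from ensemble dispersion can be as simple as taking percentiles of the member forecasts. A sketch with a synthetic ensemble (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of regional yield forecasts (t/ha) from one crop
# model run under perturbed weather and management inputs:
ensemble = rng.normal(loc=8.0, scale=0.6, size=50)

median = float(np.percentile(ensemble, 50))
lo, hi = np.percentile(ensemble, [5, 95])   # 90% uncertainty band
print(f"forecast: {median:.2f} t/ha (90% band {lo:.2f}-{hi:.2f})")
```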

  14. Random Forests for Global and Regional Crop Yield Predictions.

    PubMed

    Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung

    2016-01-01

Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales owing to its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
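A minimal version of the RF-versus-MLR benchmark can be reproduced with scikit-learn on synthetic data; the nonlinear yield response below is invented for illustration and is not one of the paper's datasets:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in: yield responds nonlinearly to temperature and
# linearly to precipitation, plus noise (illustrative only).
n = 600
temp = rng.uniform(10, 35, n)
precip = rng.uniform(200, 900, n)
yield_t = 10 - 0.02 * (temp - 24) ** 2 + 0.004 * precip + rng.normal(0, 0.4, n)

X = np.column_stack([temp, precip])
Xtr, Xte, ytr, yte = train_test_split(X, yield_t, random_state=0)

def rmse(model):
    pred = model.fit(Xtr, ytr).predict(Xte)
    return float(np.sqrt(np.mean((pred - yte) ** 2)))

rmse_rf = rmse(RandomForestRegressor(n_estimators=200, random_state=0))
rmse_mlr = rmse(LinearRegression())
print(f"RF RMSE:  {rmse_rf:.3f}")
print(f"MLR RMSE: {rmse_mlr:.3f}")
```

RF captures the quadratic temperature response that a linear model cannot, which is the mechanism behind the paper's RMSE gap.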

  15. Yield Advances in Peanut

    USDA-ARS?s Scientific Manuscript database

Average yields of peanut in the U.S. set an all-time record of 4,695 kg ha-1 in 2012. This far exceeded the previous record yield of 3,837 kg ha-1 in 2008. Favorable weather conditions undoubtedly contributed to the record yields in 2012; however, these record yields would not have been achievable...

  16. The Americleft Speech Project: A Training and Reliability Study.

    PubMed

    Chapman, Kathy L; Baylis, Adriane; Trost-Cardamone, Judith; Cordero, Kelly Nett; Dixon, Angela; Dobbelsteyn, Cindy; Thurmes, Anna; Wilson, Kristina; Harding-Bell, Anne; Sweeney, Triona; Stoddard, Gregory; Sell, Debbie

    2016-01-01

    To describe the results of two reliability studies and to assess the effect of training on interrater reliability scores. The first study (1) examined interrater and intrarater reliability scores (weighted and unweighted kappas) and (2) compared interrater reliability scores before and after training on the use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A) with British English-speaking children. The second study examined interrater and intrarater reliability on a modified version of the CAPS-A (CAPS-A Americleft Modification) with American and Canadian English-speaking children. Finally, comparisons were made between the interrater and intrarater reliability scores obtained for Study 1 and Study 2. The participants were speech-language pathologists from the Americleft Speech Project. In Study 1, interrater reliability scores improved for 6 of the 13 parameters following training on the CAPS-A protocol. Comparison of the reliability results for the two studies indicated lower scores for Study 2 compared with Study 1. However, this appeared to be an artifact of the kappa statistic that occurred due to insufficient variability in the reliability samples for Study 2. When percent agreement scores were also calculated, the ratings appeared similar across Study 1 and Study 2. The findings of this study suggested that improvements in interrater reliability could be obtained following a program of systematic training. However, improvements were not uniform across all parameters. Acceptable levels of reliability were achieved for those parameters most important for evaluation of velopharyngeal function.
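The kappa artifact described above is easy to see in code: kappa corrects for chance agreement, so low variability in the sample depresses it even when percent agreement is high. A sketch with hypothetical ratings (not the study's data), using a hand-rolled unweighted Cohen's kappa:

```python
import numpy as np

def cohen_kappa(r1, r2, n_cats):
    """Unweighted Cohen's kappa and percent agreement for two raters
    (illustrative implementation)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = float(np.mean(r1 == r2))                # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in range(n_cats))
    return (po - pe) / (1 - pe), po

# Hypothetical ratings of 10 speech samples on a 3-point scale:
rater_a = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
rater_b = [0, 1, 2, 2, 0, 2, 1, 1, 1, 2]
kappa, agreement = cohen_kappa(rater_a, rater_b, n_cats=3)
print(f"kappa = {kappa:.2f}, percent agreement = {agreement:.0%}")
```

Reporting both statistics, as the study does, guards against misreading a kappa that is low only because the sample lacked variability.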

  17. Effects of capillarity and microtopography on wetland specific yield

    USGS Publications Warehouse

    Sumner, D.M.

    2007-01-01

Hydrologic models aid in describing water flows and levels in wetlands. Frequently, these models use a specific yield conceptualization to relate water flows to water level changes. Traditionally, a simple conceptualization of specific yield is used, composed of two constant values for above- and below-surface water levels and neglecting the effects of soil capillarity and land surface microtopography. The effects of capillarity and microtopography on specific yield were evaluated at three wetland sites in the Florida Everglades. The effect of capillarity on specific yield was incorporated based on the fillable pore space within a soil moisture profile at hydrostatic equilibrium with the water table. The effect of microtopography was based on areal averaging of topographically varying values of specific yield. The results indicate that a more physically based conceptualization of specific yield incorporating capillary and microtopographic considerations can be substantially different from the traditional two-part conceptualization, and from simpler conceptualizations incorporating only capillarity or only microtopography. For the sites considered, traditional estimates of specific yield could under- or overestimate the more physically based estimates by a factor of two or more. The results suggest that consideration of both capillarity and microtopography is important to the formulation of specific yield in physically based hydrologic models of wetlands. © 2007, The Society of Wetland Scientists.
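The microtopographic part of the formulation, areal averaging of locally varying specific yield, can be sketched as follows; the surface elevations and soil value are hypothetical, and capillarity is deliberately omitted from this simplification:

```python
import numpy as np

def composite_specific_yield(water_level, surface_elevs, sy_below, sy_above=1.0):
    """Areally averaged specific yield over a microtopographic surface.

    Where the water level is above the local land surface, storage behaves
    as open water (Sy ~ 1); below it, the soil value applies.
    """
    surface_elevs = np.asarray(surface_elevs, dtype=float)
    flooded = water_level > surface_elevs
    return float(np.mean(np.where(flooded, sy_above, sy_below)))

# Hypothetical hummock-and-hollow surface elevations (m):
surface = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
print(composite_specific_yield(0.18, surface, sy_below=0.25))
```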

  18. Increasing influence of heat stress on French maize yields from the 1960s to the 2030s

    PubMed Central

    Hawkins, Ed; Fricker, Thomas E; Challinor, Andrew J; Ferro, Christopher A T; Kit Ho, Chun; Osborne, Tom M

    2013-01-01

    Improved crop yield forecasts could enable more effective adaptation to climate variability and change. Here, we explore how to combine historical observations of crop yields and weather with climate model simulations to produce crop yield projections for decision relevant timescales. Firstly, the effects on historical crop yields of improved technology, precipitation and daily maximum temperatures are modelled empirically, accounting for a nonlinear technology trend and interactions between temperature and precipitation, and applied specifically for a case study of maize in France. The relative importance of precipitation variability for maize yields in France has decreased significantly since the 1960s, likely due to increased irrigation. In addition, heat stress is found to be as important for yield as precipitation since around 2000. A significant reduction in maize yield is found for each day with a maximum temperature above 32 °C, in broad agreement with previous estimates. The recent increase in such hot days has likely contributed to the observed yield stagnation. Furthermore, a general method for producing near-term crop yield projections, based on climate model simulations, is developed and utilized. We use projections of future daily maximum temperatures to assess the likely change in yields due to variations in climate. Importantly, we calibrate the climate model projections using observed data to ensure both reliable temperature mean and daily variability characteristics, and demonstrate that these methods work using retrospective predictions. We conclude that, to offset the projected increased daily maximum temperatures over France, improved technology will need to increase base level yields by 12% to be confident about maintaining current levels of yield for the period 2016–2035; the current rate of yield technology increase is not sufficient to meet this target. PMID:23504849
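The empirical modelling step, regressing yield on a technology trend, precipitation, and hot-day counts, can be sketched with ordinary least squares on synthetic data; the coefficients and series below are invented, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in: yield depends on a technology trend, precipitation,
# and the number of days with Tmax > 32 C (all values illustrative).
years = np.arange(1960, 2011)
trend = 0.1 * (years - 1960)                       # t/ha gain from technology
hot_days = rng.poisson(3 + 0.05 * (years - 1960))  # warming: more hot days
precip = rng.normal(500, 80, years.size)           # growing-season mm
yields = (4.0 + trend - 0.05 * hot_days + 0.002 * precip
          + rng.normal(0, 0.3, years.size))

# Ordinary least squares: recover the per-hot-day yield penalty.
X = np.column_stack([np.ones(years.size), years - 1960, hot_days, precip])
coefs, *_ = np.linalg.lstsq(X, yields, rcond=None)
print(f"estimated yield loss per hot day: {-coefs[2]:.3f} t/ha")
```

The real model also includes a nonlinear trend and temperature-precipitation interactions, which this sketch omits.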

  19. Training less-experienced faculty improves reliability of skills assessment in cardiac surgery.

    PubMed

    Lou, Xiaoying; Lee, Richard; Feins, Richard H; Enter, Daniel; Hicks, George L; Verrier, Edward D; Fann, James I

    2014-12-01

    Previous work has demonstrated high inter-rater reliability in the objective assessment of simulated anastomoses among experienced educators. We evaluated the inter-rater reliability of less-experienced educators and the impact of focused training with a video-embedded coronary anastomosis assessment tool. Nine less-experienced cardiothoracic surgery faculty members from different institutions evaluated 2 videos of simulated coronary anastomoses (1 by a medical student and 1 by a resident) at the Thoracic Surgery Directors Association Boot Camp. They then underwent a 30-minute training session using an assessment tool with embedded videos to anchor rating scores for 10 components of coronary artery anastomosis. Afterward, they evaluated 2 videos of a different student and resident performing the task. Components were scored on a 1 to 5 Likert scale, yielding an average composite score. Inter-rater reliabilities of component and composite scores were assessed using intraclass correlation coefficients (ICCs) and overall pass/fail ratings with kappa. All components of the assessment tool exhibited improvement in reliability, with 4 (bite, needle holder use, needle angles, and hand mechanics) improving the most from poor (ICC range, 0.09-0.48) to strong (ICC range, 0.80-0.90) agreement. After training, inter-rater reliabilities for composite scores improved from moderate (ICC, 0.76) to strong (ICC, 0.90) agreement, and for overall pass/fail ratings, from poor (kappa = 0.20) to moderate (kappa = 0.78) agreement. Focused, video-based anchor training facilitates greater inter-rater reliability in the objective assessment of simulated coronary anastomoses. Among raters with less teaching experience, such training may be needed before objective evaluation of technical skills. Published by Elsevier Inc.
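ICCs of the kind reported here can be computed from a one-way ANOVA decomposition of the rating matrix. A sketch with hypothetical rater scores, showing ICC(1,1) for simplicity (the study's exact ICC variant is not reproduced here):

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for k raters scoring n targets.
    An illustrative implementation, not the study's exact variant."""
    X = np.asarray(scores, dtype=float)   # shape (n_targets, k_raters)
    n, k = X.shape
    grand = X.mean()
    msb = k * np.sum((X.mean(axis=1) - grand) ** 2) / (n - 1)  # between targets
    msw = np.sum((X - X.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return float((msb - msw) / (msb + (k - 1) * msw))

# Hypothetical composite scores from 3 raters on 4 anastomosis videos:
scores = [[4.2, 4.0, 4.4],
          [2.1, 2.5, 2.0],
          [3.6, 3.4, 3.8],
          [1.8, 1.5, 2.0]]
print(f"ICC(1,1) = {icc_oneway(scores):.2f}")
```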

  20. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  1. Reliability culture at La Silla Paranal Observatory

    NASA Astrophysics Data System (ADS)

    Gonzalez, Sergio

    2010-07-01

The Maintenance Department at the La Silla - Paranal Observatory has been an important base for keeping observatory operations at a good level of reliability and availability. Several strategies have been implemented and improved in order to meet these requirements and keep systems and equipment working properly when required. One of the latest improvements has been the introduction of a reliability culture, which involves much more than simply speaking about reliability concepts: it involves the use of technologies, data collection, data analysis, decision making, committees concentrating on the analysis of failure modes and how they can be eliminated, aligning the results with the requirements of our internal partners, and establishing steps to achieve success. Some of these steps have already been implemented: data collection, use of technologies, analysis of data, development of priority tools, committees dedicated to analyzing data, and people dedicated to reliability analysis. This has allowed us to optimize our processes, identify where we can improve, avoid functional failures, and reduce failures in several systems and subsystems; all of this has had a positive impact on results for our Observatory. These tools are part of the reliability culture that allows our system to operate with a high level of reliability and availability.

  2. Reliability as Argument

    ERIC Educational Resources Information Center

    Parkes, Jay

    2007-01-01

    Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

  3. Estimating the Reliability of Single-Item Life Satisfaction Measures: Results from Four National Panel Studies

    ERIC Educational Resources Information Center

    Lucas, Richard E.; Donnellan, M. Brent

    2012-01-01

    Life satisfaction is often assessed using single-item measures. However, estimating the reliability of these measures can be difficult because internal consistency coefficients cannot be calculated. Existing approaches use longitudinal data to isolate occasion-specific variance from variance that is either completely stable or variance that…

  4. Photovoltaic performance and reliability workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, B

    1996-10-01

These proceedings are the compilation of papers presented at the ninth PV Performance and Reliability Workshop, held at the Sheraton Denver West Hotel on September 4-6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations given are included in these proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; activities needed to separate variables by testing individual components of PV systems (e.g., cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.

  5. Water yield and hydrology

    Treesearch

    Pamela J. Edwards; Charles A. Troendle

    2012-01-01

    Investigations of hydrologic responses resulting from reducing vegetation density are fairly common throughout the Eastern United States. Although most studies have focused on the potential for increasing water yields or documenting effects from intensive practices that far exceed what would be done for fuel-reduction objectives, data from some less-intensive...

  6. Culture Representation in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Gertman; Julie Marble; Steven Novack

Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991) cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
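The core CAM idea, scaling a nominal human error probability (HEP) by culture-derived performance-shaping multipliers, can be sketched as follows; the factor names and values are hypothetical, not Hofstede's or Davis's published scores:

```python
# Sketch of a culture-adjusted human error probability, in the spirit of
# the Culture Adjustment Method. All multipliers are hypothetical.

def adjust_hep(base_hep, multipliers):
    """Scale a baseline HEP by performance-shaping multipliers,
    capping the result at 1.0 since it is a probability."""
    hep = base_hep
    for m in multipliers.values():
        hep *= m
    return min(hep, 1.0)

base = 0.01                                 # nominal HEP for the modeled task
culture = {"uncertainty_avoidance": 1.5,    # hypothetical Hofstede-style factor
           "technology_acceptance": 0.8}    # hypothetical TAM-style factor
print(f"adjusted HEP: {adjust_hep(base, culture):.4f}")
```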

  7. Design for reliability: NASA reliability preferred practices for design and test

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.

  8. Reliability of COPVs Accounting for Margin of Safety on Design Burst

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L.N.

    2012-01-01

    In this paper, the stress rupture reliability of Carbon/Epoxy Composite Overwrapped Pressure Vessels (COPVs) is examined utilizing the classic Phoenix model and accounting for the differences between the design and the actual burst pressure, and the liner contribution effects. Stress rupture life primarily depends upon the fiber stress ratio which is defined as the ratio of stress in fibers at the maximum expected operating pressure to actual delivered fiber strength. The actual delivered fiber strength is calculated using the actual burst pressures of vessels established through burst tests. However, during the design phase the actual burst pressure is generally not known and to estimate the reliability of the vessels calculations are usually performed based upon the design burst pressure only. Since the design burst is lower than the actual burst, this process yields a much higher value for the stress ratio and consequently a conservative estimate for the reliability. Other complications arise due to the fact that the actual burst pressure and the liner contributions have inherent variability and therefore must be treated as random variables in order to compute the stress rupture reliability. Furthermore, the model parameters, which have to be established based on stress rupture tests of subscale vessels or coupons, have significant variability as well due to limited available data and hence must be properly accounted for. In this work an assessment of reliability of COPVs including both parameter uncertainties and physical variability inherent in liner and overwrap material behavior is made and estimates are provided in terms of degree of uncertainty in the actual burst pressure and the liner load sharing.
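Why design-burst-based stress ratios are conservative can be shown with a small Monte Carlo: treating the actual burst pressure and liner contribution as random variables lowers the computed fiber stress ratio relative to the design-phase value. All numbers below are illustrative, not COPV program data:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

meop = 31.0                              # max expected operating pressure, MPa
design_burst = 1.5 * meop                # design burst with a 1.5 margin
# Actual burst typically exceeds design burst; both spread and liner
# contribution are treated as random variables (illustrative distributions):
actual_burst = rng.normal(1.15 * design_burst, 0.04 * design_burst, N)
p_liner = rng.uniform(1.0, 3.0, N)       # pressure carried by the liner, MPa

ratio_design = meop / design_burst                       # conservative value
ratio_actual = (meop - p_liner) / (actual_burst - p_liner)

print(f"design-phase stress ratio: {ratio_design:.3f}")
print(f"mean actual stress ratio:  {ratio_actual.mean():.3f}")
```

Because stress rupture life is a steep function of the stress ratio, the lower actual ratio translates into a much higher predicted reliability than the design-phase calculation suggests.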

  9. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
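The N-version versus recovery-block comparison can be sketched with a toy probability model; the failure probabilities below are illustrative, and independence is assumed even though the study's point is that real versions fail dependently:

```python
# Toy comparison of two fault-tolerant software structures.
p = 0.05         # per-version failure probability (assumed independent here)
p_accept = 0.001 # probability the acceptance check wrongly passes a bad result

# N-version programming (2-of-3 majority vote): fails if 2 or 3 versions fail.
p_nvp = 3 * p**2 * (1 - p) + p**3

# Recovery block: fails if all three alternates fail, or a bad result slips
# past the independent acceptance test (simplified model).
p_rb = p**3 + (1 - p**3) * p_accept

print(f"P(fail) N-version:      {p_nvp:.5f}")
print(f"P(fail) recovery block: {p_rb:.5f}")
```

With a reliable, independently failing acceptance check, the recovery block wins, matching the authors' conclusion; a weak acceptance check (large p_accept) reverses the ordering.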

  10. CORY: A Computer Program for Determining Dimension Stock Yields

    Treesearch

    Charles C Brunner; Marshall S. White; Fred M. Lamb; James G. Schroeder

    1989-01-01

    CORY is a computer program that calculates random-width, fixed-length cutting yields and best sawing sequences for either rip- or crosscut-first operations. It differs from other yield calculating programs by evaluating competing cuttings through conflict resolution models. Comparisons with Program YIELD resulted in a 9 percent greater cutting volume and a 98 percent...

  11. Electron Stimulated Desorption Yields at the Mercury's Surface Based On Hybrid Simulation Results

    NASA Astrophysics Data System (ADS)

    Travnicek, P. M.; Schriver, D.; Orlando, T. M.; Hellinger, P.

    2016-12-01

Most previous research on the solar wind sputtering process has focused on ion sputtering by precipitating solar wind protons; however, precipitating electrons can also result in the desorption of neutrals and ions from Mercury's surface and represent a potentially significant source of exospheric and heavy ion components. Electron stimulated desorption (ESD) is not bound by optical selection rules, and electron impact energies can vary over a much wider range, including core-level excitations that easily lead to multi-electron shake-up events that can cascade into localized multiply charged states that Coulomb explode with extreme kinetic energy release (up to 8 eV = 186,000 K). While considered for the lunar exosphere, ESD has not been adequately studied or quantified as a producer of neutrals and ions. ESD is a well-known process which involves the excitation (often ionization) of a surface target followed by charge ejection, bond breaking, and ion expulsion due to the resultant Coulomb repulsion. Though the role of ESD processes has not been discussed much with respect to Mercury, the impinging energetic electrons that are transported through the magnetosphere and precipitate can induce significant material removal. Given the energetics and the wide band-gap nature of the minerals, the departing material may also be primarily ionic. The possible role of 5 eV - 1 keV electron stimulated desorption and dissociation in "weathering" the regolith can be significant. ESD yields will be calculated based on the ion and electron precipitation profiles from hybrid and electron simulations already carried out. Neutral and ion cloud profiles around Mercury will be calculated and combined with those profiles expected from PSD and MIV.

  12. Coefficient Alpha and Reliability of Scale Scores

    ERIC Educational Resources Information Center

    Almehrizi, Rashid S.

    2013-01-01

    The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (a; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…

  13. The Data Evaluation for Obtaining Accuracy and Reliability

    NASA Astrophysics Data System (ADS)

    Kim, Chang Geun; Chae, Kyun Shik; Lee, Sang Tae; Bhang, Gun Woong

    2012-11-01

Numerous scientific measurement results flood in from papers, data books, and other sources as the internet grows rapidly. We encounter many different measurement results for the same measurand and must then choose the most reliable among them. But choosing accurate, reliable data is not as easy as picking a flavor at an ice cream parlor. Even expert users find it difficult to distinguish accurate and reliable scientific data from the huge volume of measurement results. For this reason, data evaluation is becoming more important with the rapid growth of the internet and with globalization. Furthermore, the expression of measurement results is not standardized. In response to these needs, international efforts have intensified. As a first step, globally harmonized terminology for metrology and the Guide to the Expression of Uncertainty in Measurement (GUM) were published through ISO. These methods have spread widely across many areas of science to give measurements accuracy and reliability. In this paper, we introduce the GUM, standard reference data (SRD), and data evaluation for atomic collisions.
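The GUM's central tool, the law of propagation of uncertainty, can be sketched in a few lines; the resistance example and its input uncertainties below are illustrative, not taken from the paper:

```python
import math

def combined_uncertainty(sensitivities, uncertainties):
    """GUM law of propagation for uncorrelated inputs:
    u_c = sqrt(sum((c_i * u_i)**2))."""
    return math.sqrt(sum((c * u) ** 2 for c, u in zip(sensitivities, uncertainties)))

# Example measurand: resistance R = V / I, with V = 10.0 V, I = 2.0 A.
V, I = 10.0, 2.0
u_V, u_I = 0.05, 0.01          # standard uncertainties of the inputs
c_V = 1.0 / I                  # sensitivity coefficient dR/dV
c_I = -V / I ** 2              # sensitivity coefficient dR/dI
u_R = combined_uncertainty([c_V, c_I], [u_V, u_I])
print(round(u_R, 4))           # combined standard uncertainty of R
```

Reporting the result as R = 5.000 Ω with u_c(R) ≈ 0.035 Ω follows the GUM's recommended form.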

  14. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
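As a rough illustration of the hierarchical Bayesian approach described above (not the authors' actual model or data), a hand-rolled random-walk Metropolis sampler can fit a two-level normal model for log well yields in a few hypothetical sub-regions:

```python
import math, random

random.seed(1)

# Synthetic log-transformed well yields for three hypothetical sub-regions.
true_means = [2.0, 2.5, 3.0]
sigma, tau = 0.5, 1.0
data = [[random.gauss(m, sigma) for _ in range(40)] for m in true_means]

def log_post(mu0, mus):
    """Hierarchical normal model: y_ij ~ N(mu_j, sigma), mu_j ~ N(mu0, tau),
    flat prior on the regional mean mu0 (additive constants dropped)."""
    lp = sum(-(m - mu0) ** 2 / (2 * tau ** 2) for m in mus)
    for ys, m in zip(data, mus):
        lp += sum(-(y - m) ** 2 / (2 * sigma ** 2) for y in ys)
    return lp

# Random-walk Metropolis over (mu0, mu_1..mu_3).
mu0, mus = 0.0, [0.0, 0.0, 0.0]
cur = log_post(mu0, mus)
samples = []
for step in range(5000):
    prop0 = mu0 + random.gauss(0, 0.1)
    props = [m + random.gauss(0, 0.1) for m in mus]
    cand = log_post(prop0, props)
    if math.log(random.random()) < cand - cur:   # Metropolis accept/reject
        mu0, mus, cur = prop0, props, cand
    if step >= 1000:                             # discard burn-in
        samples.append(mu0)

est = sum(samples) / len(samples)
print(round(est, 2))  # posterior mean of the regional mean log-yield
```

In practice a framework such as the MCMC machinery the authors mention would replace this sketch, and priors on sigma and tau would be estimated rather than fixed.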

  15. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
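The weakest-link Weibull calculation at the core of codes like CARES can be sketched as follows; the element stresses, volumes, and Weibull parameters below are invented for illustration:

```python
import math

def fast_fracture_reliability(elements, m, sigma0):
    """Two-parameter Weibull volume-flaw model: each finite element
    contributes a survival probability exp(-V * (sigma/sigma0)**m);
    component reliability is the product over elements (weakest-link
    assumption). `elements` is a list of (stress, volume) pairs taken
    from a structural-analysis output."""
    risk = sum(v * (s / sigma0) ** m for s, v in elements if s > 0)
    return math.exp(-risk)

# Hypothetical element output (stress in MPa, volume in mm^3).
elements = [(150.0, 2.0), (220.0, 1.5), (90.0, 3.0)]
R = fast_fracture_reliability(elements, m=10.0, sigma0=400.0)
print(round(R, 6))  # component fast-fracture reliability
```

The steep Weibull modulus (m = 10 here) makes the most highly stressed element dominate the risk-of-rupture sum, which is why accurate element stresses matter so much in this kind of analysis.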

  16. [Predicting the impact of climate change in the next 40 years on the yield of maize in China].

    PubMed

    Ma, Yu-ping; Sun, Lin-li; E, You-hao; Wu, Wei

    2015-01-01

Climate change will significantly affect agricultural production in China. Combining an integral regression model with the latest climate projections makes it possible to assess the impact of future climate change on crop yield. In this paper, a correlation model of maize yield and meteorological factors was first established for different provinces in China using the integral regression method; the impact of climate change over the next 40 years on China's maize production was then evaluated against the latest climate predictions, and the underlying causes were analyzed. The results showed that if the current pace of maize variety improvement and of science and technology development remained constant, maize yield in China would mostly decline, with the reduction increasing over time during the next 40 years but generally remaining within 5%. Under the A2 climate change scenario, the region with the greatest reduction of maize yield would be the Northeast except during 2021-2030, with reductions generally in the range of 2.3%-4.2%. Maize yield reduction would also be high in the Northwest, the Southwest, and the middle and lower reaches of the Yangtze River after 2031. Under the B2 scenario, the reduction of 5.3% in the Northeast in 2031-2040 would be the greatest across all regions. Other regions with considerable maize yield reduction would be mainly the Northwest and the Southwest. Reduction in maize yield in North China would be small, generally within 2%, under either scenario, and yield in South China would be almost unchanged. The reduction of maize yield in most regions would be greater under the A2 scenario than under the B2 scenario, except for the period 2021-2030. The effect of ten-day precipitation on maize yield in northern China would be almost entirely positive, whereas the effect of ten-day average temperature on maize yield in all regions would be generally negative. The main cause of maize yield reduction would be temperature increase in most provinces but precipitation decrease in a few.

  17. Yielding and flow of colloidal glasses.

    PubMed

    Petekidis, Georgios; Vlassopoulos, Dimitris; Pusey, Peter N

    2003-01-01

    We investigate the yielding and flow of hard-sphere colloidal glasses by combining rheological measurements with the technique of light scattering echo. The polymethylmethacrylate particles used are sufficiently polydisperse that crystallization is suppressed. Creep and recovery measurements show that the glasses can tolerate surprisingly large strains, up to at least 15%, before yielding irreversibly. We attribute this behaviour to 'cage elasticity', the ability of a particle and its cage of neighbours to retain their identity under quite large distortion. Results from light scattering echo, which measures the extent of irreversible particle rearrangement under oscillatory shear, support the notion of cage elasticity. In the lower concentration glasses we find that particle trajectories are partly reversible under strains which significantly exceed the yield strain.

  18. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

Liquid rocket engine technology has been characterized by the development of complex systems containing large numbers of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been treated as a system parameter like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models that utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
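One simple Bayesian device for quantifying reliability from test data, in the spirit of (though not necessarily identical to) the paper's approach, is the conjugate beta-binomial update:

```python
def posterior_reliability(successes, trials, a=1.0, b=1.0):
    """Start from a Beta(a, b) prior on reliability; after observing
    `successes` in `trials` independent tests, the posterior is
    Beta(a + s, b + n - s). Returns the posterior mean."""
    return (a + successes) / (a + b + trials)

# Hypothetical campaign: 48 successful firings in 50 tests,
# with a uniform Beta(1, 1) prior.
r = posterior_reliability(48, 50)
print(round(r, 4))  # posterior mean reliability
```

The prior parameters a and b are where similarity and engineering-analysis evidence would enter: prior test experience on similar hardware can be encoded as pseudo-counts before any new test data arrive.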

  19. Validity and Reliability of a Systematic Database Search Strategy to Identify Publications Resulting From Pharmacy Residency Research Projects.

    PubMed

    Kwak, Namhee; Swan, Joshua T; Thompson-Moore, Nathaniel; Liebl, Michael G

    2016-08-01

    This study aims to develop a systematic search strategy and test its validity and reliability in terms of identifying projects published in peer-reviewed journals as reported by residency graduates through an online survey. This study was a prospective blind comparison to a reference standard. Pharmacy residency projects conducted at the study institution between 2001 and 2012 were included. A step-wise, systematic procedure containing up to 8 search strategies in PubMed and EMBASE for each project was created using the names of authors and abstract keywords. In order to further maximize sensitivity, complex phrases with multiple variations were truncated to the root word. Validity was assessed by obtaining information on publications from an online survey deployed to residency graduates. The search strategy identified 13 publications (93% sensitivity, 100% specificity, and 99% accuracy). Both methods identified a similar proportion achieving publication (19.7% search strategy vs 21.2% survey, P = 1.00). Reliability of the search strategy was affirmed by the perfect agreement between 2 investigators (k = 1.00). This systematic search strategy demonstrated a high sensitivity, specificity, and accuracy for identifying publications resulting from pharmacy residency projects using information available in residency conference abstracts. © The Author(s) 2015.
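The validity metrics reported above can be reproduced from a 2x2 confusion matrix; the counts below are illustrative, chosen to mirror the reported 93% sensitivity, not the paper's raw tabulation:

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, and Cohen's kappa from a
    2x2 confusion matrix (search strategy vs. survey reference standard)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # Chance agreement for kappa: product of marginal proportions.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - pe) / (1 - pe)
    return sens, spec, acc, kappa

# Illustrative counts: 13 true positives, 0 false positives,
# 1 false negative, 52 true negatives.
sens, spec, acc, kappa = diagnostics(13, 0, 1, 52)
print(round(sens, 2), round(spec, 2), round(acc, 2))
```

Here kappa measures agreement between the two identification methods beyond chance, which is the same statistic the authors report for inter-investigator agreement.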

  20. Validity and reliability of the Malay version of sleep apnea quality of life index – preliminary results

    PubMed Central

    2013-01-01

    Background The objective of this study was to determine the validity and reliability of the Malay translated Sleep Apnea Quality of Life Index (SAQLI) in patients with obstructive sleep apnea (OSA). Methods In this cross sectional study, the Malay version of SAQLI was administered to 82 OSA patients seen at the OSA Clinic, Hospital Universiti Sains Malaysia prior to their treatment. Additionally, the patients were asked to complete the Malay version of Medical Outcomes Study Short Form (SF-36). Twenty-three patients completed the Malay version of SAQLI again after 1–2 weeks to assess its reliability. Results Initial factor analysis of the 40-item Malay version of SAQLI resulted in four factors with eigenvalues >1. All items had factor loadings >0.5 but one of the factors was unstable with only two items. However, both items were maintained due to their high communalities and the analysis was repeated with a forced three factor solution. Variance accounted by the three factors was 78.17% with 9–18 items per factor. All items had primary loadings over 0.5 although the loadings were inconsistent with the proposed construct. The Cronbach’s alpha values were very high for all domains, >0.90. The instrument was able to discriminate between patients with mild or moderate and severe OSA. The Malay version of SAQLI correlated positively with the SF-36. The intraclass correlation coefficients for all domains were >0.90. Conclusions In light of these preliminary observations, we concluded that the Malay version of SAQLI has a high degree of internal consistency and concurrent validity albeit demonstrating a slightly different construct than the original version. The responsiveness of the questionnaire to changes in health-related quality of life following OSA treatment is yet to be determined. PMID:23786866

  1. Validity and Reliability of Accelerometers in Patients With COPD: A SYSTEMATIC REVIEW.

    PubMed

    Gore, Shweta; Blackwood, Jennifer; Guyette, Mary; Alsalaheen, Bara

    2018-05-01

    Reduced physical activity is associated with poor prognosis in chronic obstructive pulmonary disease (COPD). Accelerometers have greatly improved quantification of physical activity by providing information on step counts, body positions, energy expenditure, and magnitude of force. The purpose of this systematic review was to compare the validity and reliability of accelerometers used in patients with COPD. An electronic database search of MEDLINE and CINAHL was performed. Study quality was assessed with the Strengthening the Reporting of Observational Studies in Epidemiology checklist while methodological quality was assessed using the modified Quality Appraisal Tool for Reliability Studies. The search yielded 5392 studies; 25 met inclusion criteria. The SenseWear Pro armband reported high criterion validity under controlled conditions (r = 0.75-0.93) and high reliability (ICC = 0.84-0.86) for step counts. The DynaPort MiniMod demonstrated highest concurrent validity for step count using both video and manual methods. Validity of the SenseWear Pro armband varied between studies especially in free-living conditions, slower walking speeds, and with addition of weights during gait. A high degree of variability was found in the outcomes used and statistical analyses performed between studies, indicating a need for further studies to measure reliability and validity of accelerometers in COPD. The SenseWear Pro armband is the most commonly used accelerometer in COPD, but measurement properties are limited by gait speed variability and assistive device use. DynaPort MiniMod and Stepwatch accelerometers demonstrated high validity in patients with COPD but lack reliability data.

  2. Towards cost-effective reliability through visualization of the reliability option space

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2004-01-01

In planning a complex system's development there can be many options for improving its reliability. Typically, their total cost exceeds the available budget, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.
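Selecting a subset of reliability options under a budget is naturally a 0/1 knapsack problem; a minimal sketch with hypothetical option names, costs, and modeled risk reductions follows:

```python
def select_options(options, budget):
    """0/1 knapsack over candidate reliability improvements: maximize
    total modeled risk reduction subject to a cost budget (integer costs).
    `options` is a list of (name, cost, benefit) tuples."""
    best = {0: (0.0, [])}  # spent -> (best benefit, options chosen)
    for name, cost, benefit in options:
        # Iterate over a snapshot so each option is used at most once.
        for spent, (value, chosen) in sorted(best.items(), reverse=True):
            s = spent + cost
            if s <= budget and (s not in best or best[s][0] < value + benefit):
                best[s] = (value + benefit, chosen + [name])
    return max(best.values())

# Hypothetical options: (name, cost in $k, modeled risk reduction).
options = [("redundant sensor", 40, 0.30),
           ("extra vibration test", 25, 0.20),
           ("parts screening", 35, 0.25),
           ("design review", 10, 0.08)]
value, chosen = select_options(options, budget=70)
print(round(value, 2), sorted(chosen))
```

Visualizing the full cost/benefit frontier of `best`, rather than just the single optimum, is the kind of option-space view the abstract advocates.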

  3. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  4. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
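For context, the first order reliability method that the article improves upon is exact for a linear limit state with independent normal variables; a minimal sketch with illustrative means and standard deviations:

```python
import math

def form_beta(mu_r, sd_r, mu_s, sd_s):
    """FORM reliability index for the linear limit state g = R - S with
    independent normal resistance R and load S; exact in this case."""
    return (mu_r - mu_s) / math.hypot(sd_r, sd_s)

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = form_beta(mu_r=300.0, sd_r=30.0, mu_s=200.0, sd_s=40.0)
pf = phi(-beta)  # failure probability Pf = Phi(-beta)
print(round(beta, 2), f"{pf:.2e}")
```

When g is strongly nonlinear, this linearization at the most probable point biases Pf, which is exactly the gap the second order method (with its Hessian information) is meant to close.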

  5. [Reliability and Validity of the Behavioral Check List for Preschool Children to Measure Attention Deficit Hyperactivity Behaviors].

    PubMed

    Tsuno, Kanami; Yoshimasu, Kouichi; Hayashi, Takashi; Tatsuta, Nozomi; Ito, Yuki; Kamijima, Michihiro; Nakai, Kunihiko

    2018-01-01

Nowadays, attention deficit hyperactivity (ADH) problems are commonly observed among school-age children. However, very few questionnaires address ADH behaviors among preschool children. The aim of this study was to investigate the reliability and validity of the 25-item Behavioral Check List (BCL), which was developed from interviews of parents of children diagnosed with attention-deficit/hyperactivity disorder (ADHD) and measures ADH behaviors at preschool age. We recruited 22 teachers from 10 nurseries/kindergartens in Miyagi Prefecture, Japan. A total of 138 preschool children were assessed using the BCL. To investigate inter-rater reliability, two teachers from each facility assessed seven to twenty children in their class, and intraclass correlation coefficients (ICCs) were calculated. The teachers additionally answered questions in the 1½-5 Caregiver-Teacher Report Form (C-TRF) to investigate the criterion validity of the BCL. To investigate structural validity, exploratory factor analysis with promax rotation and confirmatory factor analysis were performed. The internal consistency reliability of the BCL was good (α = 0.92), and correlation analyses also confirmed its excellent criterion validity. Although exploratory factor analysis for the BCL yielded a five-factor model whose factor structure differed from that of the original, the results were similar to the original six factors. The ICCs of the BCL were 0.38-0.99, and inter-rater reliability was not sufficiently high in some facilities. However, it may be improved by giving raters adequate instructions when using the BCL. The present study showed acceptable levels of reliability and validity of the BCL among Japanese preschool children.
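Internal consistency as reported above (Cronbach's alpha) is straightforward to compute from item-score columns; the toy ratings below are invented, not BCL data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
    using sample (n-1) variances throughout."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Toy example: 3 items scored for 5 children (illustrative numbers).
items = [[2, 3, 4, 4, 5],
         [1, 3, 3, 4, 5],
         [2, 2, 4, 5, 5]]
a = cronbach_alpha(items)
print(round(a, 3))
```

Values above about 0.9, like the BCL's reported 0.92, indicate that the items covary strongly enough to be summed into a single scale score.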

  6. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.

  7. 76 FR 73608 - Reliability Technical Conference, North American Electric Reliability Corporation, Public Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... or municipal authority play in forming your bulk power system reliability plans? b. Do you support..., North American Electric Reliability Corporation (NERC) Nick Akins, CEO of American Electric Power (AEP..., EL11-62-000] Reliability Technical Conference, North American Electric Reliability Corporation, Public...

  8. Soviet test yields

    NASA Astrophysics Data System (ADS)

    Vergino, Eileen S.

Soviet seismologists have published descriptions of 96 nuclear explosions conducted from 1961 through 1972 at the Semipalatinsk test site, in Kazakhstan, central Asia [Bocharov et al., 1989]. With the exception of releasing news about some of their peaceful nuclear explosions (PNEs), the Soviets have never before published such a body of information. To estimate the seismic yield of a nuclear explosion it is necessary to obtain a calibrated magnitude-yield relationship based on events with known yields and a consistent set of seismic magnitudes. U.S. estimation of Soviet test yields has been done by applying relationships derived from U.S. experience at the Nevada Test Site (NTS) to the Soviet sites, with some correction for differences in attenuation and near-source coupling of seismic waves.
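A magnitude-yield relation of the form mb = A + B·log10(Y) can be inverted to estimate yield; the coefficients and the site-bias term below are assumptions for demonstration, not calibrated values for Semipalatinsk or NTS:

```python
import math

# Illustrative magnitude-yield coefficients (assumed, not calibrated).
A, B = 4.45, 0.75

def yield_from_mb(mb):
    """Invert mb = A + B*log10(Y) to estimate yield Y in kilotons."""
    return 10 ** ((mb - A) / B)

def mb_from_yield(y_kt, bias=0.0):
    """Forward relation; `bias` models a site correction (attenuation,
    near-source coupling) between different test sites."""
    return A + B * math.log10(y_kt) + bias

print(round(yield_from_mb(6.0), 1))  # estimated yield for mb = 6.0, in kt
```

The newly published Semipalatinsk yields matter precisely because they let the A, B, and bias terms be fit to known events rather than transferred from NTS.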

  9. Mapping quantitative trait loci with additive effects and additive x additive epistatic interactions for biomass yield, grain yield, and straw yield using a doubled haploid population of wheat (Triticum aestivum L.).

    PubMed

    Li, Z K; Jiang, X L; Peng, T; Shi, C L; Han, S X; Tian, B; Zhu, Z L; Tian, J C

    2014-02-28

    Biomass yield is one of the most important traits for wheat (Triticum aestivum L.)-breeding programs. Increasing the yield of the aerial parts of wheat varieties will be an integral component of future wheat improvement; however, little is known regarding the genetic control of aerial part yield. A doubled haploid population, comprising 168 lines derived from a cross between two winter wheat cultivars, 'Huapei 3' (HP3) and 'Yumai 57' (YM57), was investigated. Quantitative trait loci (QTL) for total biomass yield, grain yield, and straw yield were determined for additive effects and additive x additive epistatic interactions using the QTLNetwork 2.0 software based on the mixed-linear model. Thirteen QTL were determined to have significant additive effects for the three yield traits, of which six also exhibited epistatic effects. Eleven significant additive x additive interactions were detected, of which seven occurred between QTL showing epistatic effects only, two occurred between QTL showing epistatic effects and additive effects, and two occurred between QTL with additive effects. These QTL explained 1.20 to 10.87% of the total phenotypic variation. The QTL with an allele originating from YM57 on chromosome 4B and another QTL contributed by HP3 alleles on chromosome 4D were simultaneously detected on the same or adjacent chromosome intervals for the three traits in two environments. Most of the repeatedly detected QTL across environments were not significant (P > 0.05). These results have implications for selection strategies in wheat biomass yield and for increasing the yield of the aerial part of wheat.

  10. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  11. Reliable Freestanding Position-Based Routing in Highway Scenarios

    PubMed Central

    Galaviz-Mosqueda, Gabriel A.; Aquino-Santos, Raúl; Villarreal-Reyes, Salvador; Rivera-Rodríguez, Raúl; Villaseñor-González, Luis; Edwards, Arthur

    2012-01-01

Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering free-space propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model. PMID:23202159

  12. Reliable freestanding position-based routing in highway scenarios.

    PubMed

    Galaviz-Mosqueda, Gabriel A; Aquino-Santos, Raúl; Villarreal-Reyes, Salvador; Rivera-Rodríguez, Raúl; Villaseñor-González, Luis; Edwards, Arthur

    2012-10-24

Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering free-space propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model.

  13. Reliability in perceptual analysis of voice quality.

    PubMed

    Bele, Irene Velsvik

    2005-12-01

This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate normal and supranormal voices. Both substantive and methodologic aspects were considered. The study includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 listeners, comprising 7 experienced speech-language therapists and 3 speech-language therapy students, evaluated the voices on 15 vocal characteristics using visual analogue (VA) scales. Two sets of voice signals were investigated: text reading (2 loudness levels) and sustained vowel (3 levels). The results indicated high interrater reliability for most perceptual characteristics. Connected speech was evaluated more reliably, especially at the normal loudness level, and both types of voice signals were evaluated reliably, although the reliability for connected speech was somewhat higher than for vowels. Experienced listeners tended to be more consistent in their ratings than the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped into 4 factors reflecting perceptual dimensions.

  14. Impact of heterozygosity and heterogeneity on cotton lint yield stability: II. Lint yield components

    USDA-ARS?s Scientific Manuscript database

    In order to determine which yield components may contribute to yield stability, an 18-environment field study was undertaken to observe the mean, standard deviation (SD), and coefficient of variation (CV) for cotton lint yield components in population types that differed for lint yield stability. Th...

  15. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology for accomplishing this, and techniques for deriving the expected product-level reliability of commercial memory products, are provided. Competing-mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level to several scaled memory products to assess performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied, and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta) = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated, and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is…
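A single-mechanism Arrhenius sketch (simpler than the multiple-failure-mechanism model the guide derives) shows how accelerated-test failures translate into a use-condition FIT rate; the activation energy, temperatures, and failure counts are illustrative:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between stress and use temperatures."""
    tu, ts = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / tu - 1.0 / ts))

def fit_rate(failures, devices, hours, af):
    """Point-estimate failure rate in FIT (failures per 1e9 device-hours),
    de-rated from accelerated stress to use conditions."""
    return failures / (devices * hours * af) * 1e9

af = arrhenius_af(ea_ev=0.7, t_use_c=55.0, t_stress_c=125.0)
fit = fit_rate(2, 800, 1000, af)
print(round(af, 1), round(fit, 1))
```

A multiple-mechanism treatment would fit a separate activation energy per mechanism and sum the de-rated rates, rather than assuming one Ea as this sketch does.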

  16. Predicting Great Lakes fish yields: tools and constraints

    USGS Publications Warehouse

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  17. Absolute quantum yield measurement of powder samples.

    PubMed

    Moreno, Luis A

    2012-05-12

    Measurement of fluorescence quantum yield has become an important tool in the development, evaluation, quality control and research of illumination, AV equipment, organic EL materials, films, filters and fluorescent probes for the bio-industry. Quantum yield is calculated as the ratio of the number of photons emitted to the number of photons absorbed by a material. The higher the quantum yield, the better the efficiency of the fluorescent material. For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system. Measurement of quantum yield in powder samples is performed following these steps: 1. Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations. 2. Measurement of integrating sphere correction factors. The purpose of this step is to take into account the reflectivity characteristics of the integrating sphere used for the measurements. 3. Reference and sample measurement using direct excitation and indirect excitation. 4. Quantum yield calculation using direct and indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which is the normal measurement setup. However, because we use an integrating sphere, a portion of the photons emitted through sample fluorescence is reflected by the integrating sphere and re-excites the sample, so we must also take indirect excitation into account. 
This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating indirect quantum yield and correcting the direct
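
    The direct/indirect bookkeeping described above can be sketched numerically. The photon counts and the re-excitation fraction below are invented for illustration, and the correction is a generic integrating-sphere form, not the F-7000's exact algorithm:

```python
def quantum_yield(photons_absorbed, photons_emitted):
    # Quantum yield: photons emitted per photon absorbed.
    return photons_emitted / photons_absorbed

def reexcitation_corrected(qy_direct, qy_indirect, reflected_fraction):
    # Remove the contribution of sphere-reflected excitation light that
    # re-excites the sample. 'reflected_fraction' (a hypothetical input
    # here) is the fraction of absorbed excitation that arrived
    # indirectly via the sphere walls.
    return (qy_direct - reflected_fraction * qy_indirect) / (1.0 - reflected_fraction)

qy_d = quantum_yield(1000.0, 450.0)  # sample facing the excitation beam
qy_i = quantum_yield(1000.0, 500.0)  # sample facing the emission port
print(qy_d)
print(reexcitation_corrected(qy_d, qy_i, 0.2))
```

The corrected value is lower than the direct ratio whenever indirect excitation inflates the apparent emission, which is the effect the procedure is designed to remove.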

  18. NREL to Host Ninth Annual PV Reliability Workshop | News | NREL

    Science.gov Websites

    Researchers gather to share research leading to more durable and reliable PV modules, thus reducing the cost of solar energy, and present their results during a poster session at the 2017 PV Reliability Workshop.

  19. Apollo experience report: Reliability and quality assurance

    NASA Technical Reports Server (NTRS)

    Sperber, K. P.

    1973-01-01

    The reliability of the Apollo spacecraft resulted from the application of proven reliability and quality techniques and from sound management, engineering, and manufacturing practices. Continual assessment of these techniques and practices was made during the program, and, when deficiencies were detected, adjustments were made and the deficiencies were effectively corrected. The most significant practices, deficiencies, adjustments, and experiences during the Apollo Program are described in this report. These experiences can be helpful in establishing an effective base on which to structure an efficient reliability and quality assurance effort for future space-flight programs.

  20. Effects of fission yield data in the calculation of antineutrino spectra for 235U(n, fission) at thermal and fast neutron energies

    DOE PAGES

    Sonzogni, A. A.; McCutchan, E. A.; Johnson, T. D.; ...

    2016-04-01

    Fission yields form an integral part of the prediction of antineutrino spectra generated by nuclear reactors, but little attention has been paid to the quality and reliability of the data used in current calculations. Following a critical review of the thermal and fast ENDF/B-VII.1 235U fission yields, deficiencies are identified and improved yields are obtained, based on corrections of erroneous yields, consistency between decay and fission yield data, and updated isomeric ratios. These corrected yields are used to calculate antineutrino spectra using the summation method. An anomalous value for the thermal fission yield of 86Ge generates an excess of antineutrinos at 5–7 MeV, a feature which is no longer present when the corrected yields are used. Thermal spectra calculated with two distinct fission yield libraries (corrected ENDF/B and JEFF) differ by up to 6% in the 0–7 MeV energy window, allowing for a basic estimate of the uncertainty involved in the fission yield component of summation calculations. Lastly, the fast neutron antineutrino spectrum is calculated, which at the moment can only be obtained with the summation method and may be relevant for short baseline reactor experiments using highly enriched uranium fuel.
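
    The summation method referred to above combines, nuclide by nuclide, a cumulative fission yield with that nuclide's antineutrino spectrum. A schematic sketch (the yields and the two-bin "spectra" below are invented numbers, not evaluated ENDF/B or JEFF data):

```python
# Hypothetical cumulative fission yields (fraction per fission).
yields = {"A": 0.02, "B": 0.05}

# Hypothetical per-nuclide antineutrino spectra on two energy bins.
spectra = {
    "A": [1.0, 0.5],
    "B": [0.4, 0.1],
}

def summation_spectrum(yields, spectra, nbins=2):
    # Total spectrum = sum over fission products of
    # (cumulative yield) x (per-nuclide spectrum).
    total = [0.0] * nbins
    for nuclide, y in yields.items():
        for i, s in enumerate(spectra[nuclide]):
            total[i] += y * s
    return total

print(summation_spectrum(yields, spectra))
```

A single anomalous yield (as for 86Ge in the abstract) propagates directly into the bins where that nuclide's spectrum is large, which is why correcting individual yields reshapes the 5–7 MeV region.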

  1. Raising yield potential in wheat: increasing photosynthesis capacity and efficiency

    USDA-ARS?s Scientific Manuscript database

    Increasing wheat yields to help to ensure food security is a major challenge. Meeting this challenge requires a quantum improvement in the yield potential of wheat. Past increases in yield potential have largely resulted from improvements in harvest index not through increased biomass. Further large...

  2. Getting to Zero Yield: The Evolution of the U.S. Position on the CTBT

    NASA Astrophysics Data System (ADS)

    Zimmerman, Peter D.

    1998-03-01

    In 1994 the United States favored a Comprehensive Test Ban Treaty (CTBT) which permitted tiny "hydronuclear" experiments with a nuclear energy release of four pounds or less. Other nuclear powers supported yield limits as high as large fractions of a kiloton, while most non-nuclear nations participating in the discussions at the United Nations Conference on Disarmament wanted to prohibit all nuclear explosions -- some even favoring an end to computer simulations. On the other hand, China wished an exception to permit high yield "peaceful" nuclear explosions. For the United States to adopt a new position favoring a "true zero" several pieces had to fall into place: 1) The President had to be assured that the U.S. could preserve the safety and reliability of the enduring stockpile without yield testing; 2) the U.S. needed to be sure that the marginal utility of zero-yield experiments was at least as great for this country as for any other; 3) that tests with any nuclear yield might have more marginal utility for nuclear proliferators than for the United States, thus marginally eroding this country's position; 4) the United States required a treaty which would permit maintenance of the capacity to return to testing should a national emergency requiring a nuclear test arise; and 5) all of the five nuclear weapons states had to realize that only a true-zero CTBT would have the desired political effects. This paper will outline the physics near zero yield and show why President Clinton was persuaded by arguments from many viewpoints to endorse a true test ban in August, 1996 and to sign the CTBT in September, 1997.

  3. Reliability and commercialization of oxidized VCSEL

    NASA Astrophysics Data System (ADS)

    Li, Alice; Pan, Jin-Shan; Lai, Horng-Ching; Lee, Bor-Lin; Wu, Jack; Lin, Yung-Sen; Huo, Tai-Chan; Wu, Calvin; Huang, Kai-Feng

    2003-06-01

    The reliability of oxidized VCSELs is similar to that of implanted VCSELs. This paper presents our reliability data for oxidized VCSEL devices, along with a comparison against implanted VCSELs. The MTTF of the oxidized VCSEL is 2.73 × 10^6 hrs at 55°C and 6 mA, with a failure rate of ~1 FIT for the first 2 years of operation. The reliability data for oxidized VCSELs, including activation energy, MTTF (mean time to failure), failure-rate prediction, and 85°C/85% humidity test results, are presented below. Commercialization of the oxidized VCSEL is demonstrated in terms of VCSEL structure, manufacturing facility, and packaging. A cost-effective approach is key to its success in applications such as datacom.
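
    The MTTF, FIT, and temperature-acceleration quantities quoted in reports like this are related by standard formulas, sketched below. The activation energy and temperatures are illustrative, not the paper's values, and a constant-rate FIT derived from MTTF need not equal an early-life failure rate estimated from a Weibull fit:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def fit_rate(mttf_hours):
    # FIT = failures per 1e9 device-hours, assuming a constant failure rate.
    return 1e9 / mttf_hours

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    # Arrhenius acceleration factor between a stress temperature and a
    # use temperature, for activation energy ea_ev (eV).
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

print(fit_rate(2.73e6))            # constant-rate FIT for MTTF = 2.73e6 h
print(arrhenius_af(0.7, 55, 125))  # hypothetical Ea = 0.7 eV, 55°C vs 125°C stress
```

Life-test hours at the stress temperature are multiplied by the acceleration factor to estimate equivalent hours at the use temperature.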

  4. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    PubMed

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. 
A physician leader who is interested in catalyzing performance improvement
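
    The reliability adjustment described here is essentially empirical-Bayes shrinkage toward the group average. A minimal sketch, using a simple signal-to-total variance ratio as the reliability weight (all numbers invented):

```python
def reliability(n, var_between, var_within):
    # Reliability of a physician's observed mean based on n cases:
    # between-physician (signal) variance over total variance of the mean.
    return var_between / (var_between + var_within / n)

def shrunken_estimate(observed, group_mean, rel):
    # Weight the observed rate by its reliability; pull the remainder
    # toward the provider-group average.
    return rel * observed + (1.0 - rel) * group_mean

rel_small = reliability(10, 0.04, 0.9)    # small sample -> low reliability
rel_large = reliability(500, 0.04, 0.9)   # large sample -> high reliability
print(shrunken_estimate(0.30, 0.10, rel_small))
print(shrunken_estimate(0.30, 0.10, rel_large))
```

The same outlying rate of 0.30 is pulled strongly toward the group mean when it rests on few cases, and barely moved when it rests on many, which is exactly how adjustment suppresses small-sample noise.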

  5. A Comparison of Reliability and Construct Validity between the Original and Revised Versions of the Rosenberg Self-Esteem Scale

    PubMed Central

    Nahathai, Wongpakaran

    2012-01-01

    Objective The Rosenberg Self-Esteem Scale (RSES) is a widely used instrument that has been tested for reliability and validity in many settings; however, some negatively worded items appear to have caused it to show low reliability in a number of studies. In this study, we revised the negative item that had produced the worst outcome in previous studies in terms of the structure of the scale, then re-analyzed the new version for reliability and construct validity, comparing its fit indices to those of the original version. Methods In total, 851 students from Chiang Mai University (mean age: 19.51±1.7, 57% female) participated in this study. Of these, 664 students completed the Thai version of the original RSES, containing five positively worded and five negatively worded items, while 187 students used the revised version, containing six positively worded and four negatively worded items. Confirmatory factor analysis was applied, using a uni-dimensional model with method effects and a correlated-uniqueness approach. Results The revised version showed the same (good) level of reliability as the original, but yielded a better model fit. The revised RSES demonstrated excellent fit statistics, with χ2=29.19 (df=19, n=187, p=0.063), GFI=0.970, TLI=0.969, NFI=0.964, CFI=0.987, SRMR=0.040 and RMSEA=0.054. Conclusion The revised version of the Thai RSES demonstrated an equivalent level of reliability but better construct validity when compared to the original. PMID:22396685

  6. Comparative reliability of structured versus unstructured interviews in the admission process of a residency program.

    PubMed

    Blouin, Danielle; Day, Andrew G; Pavlov, Andrey

    2011-12-01

    Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. A structured interview did not yield a higher overall reliability than both unstructured interviews. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. 
Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains

  7. Modeling and simulation of reliability of unmanned intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Dixit, Arati M.; Mustapha, Adam; Singh, Kuldip; Aggarwal, K. K.; Gerhart, Grant R.

    2008-04-01

    Unmanned ground vehicles have a large number of scientific, military and commercial applications. A convoy of such vehicles can exhibit collaboration and coordination, and for the movement of such a convoy it is important to predict the reliability of the system. A number of approaches are available in the literature that describe techniques for determining system reliability; graph-theoretic approaches are popular for determining terminal reliability and system reliability. In this paper we propose to exploit fuzzy and neuro-fuzzy approaches for predicting the node and branch reliabilities of the system, while Boolean algebra approaches are used to determine terminal reliability and system reliability. Hence a combination of intelligent approaches (fuzzy and neuro-fuzzy) and Boolean approaches is used to predict the overall system reliability of a convoy of vehicles. The node reliabilities may correspond to the collaboration of vehicles, while branch reliabilities determine the terminal reliabilities between different nodes. An algorithm is proposed for determining the system reliability of a convoy of vehicles, and simulation of the overall system is proposed. Such simulation should help the commander take appropriate action given the predicted reliability in different terrain and environmental conditions. It is hoped that the results of this paper will lead to further techniques for maintaining a reliable convoy of vehicles in a battlefield.
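
    The Boolean step, computing terminal reliability from node and branch reliabilities, can be sketched for a simple series-parallel case. The probabilities below are invented, and the fuzzy/neuro-fuzzy estimation of those inputs is not shown:

```python
def series(*rs):
    # A series path works only if every element on it works.
    p = 1.0
    for r in rs:
        p *= r
    return p

def parallel(*rs):
    # Redundant paths: the connection works if at least one path works.
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Two redundant branches between a source and terminal node; each branch
# is a node reliability in series with a link reliability.
branch1 = series(0.95, 0.90)
branch2 = series(0.95, 0.85)
print(parallel(branch1, branch2))
```

Terminal reliability of the redundant pair exceeds either branch alone, which is why convoy coordination (redundant communication/coordination paths) improves the system-level figure.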

  8. Reliability in individual monitoring service.

    PubMed

    Mod Ali, N

    2011-03-01

    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading the reporting program through the web-based e-SSDL marks a major improvement in the overall reliability of Nuclear Malaysia's IMS. The system is a vital step in providing a user-friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring by the IMS, thus enhancing the status of the radiation protection framework of the country.

  9. Using normalized difference vegetation index (NDVI) to estimate sugarcane yield and yield components

    USDA-ARS?s Scientific Manuscript database

    Sugarcane (Saccharum spp.) yield and yield components are important traits for growers and scientists to evaluate and select cultivars. Collection of these yield data would be labor intensive and time consuming in the early selection stages of sugarcane breeding cultivar development programs with a ...

  10. QTL mapping of root traits in phosphorus-deficient soils reveals important genomic regions for improving NDVI and grain yield in barley.

    PubMed

    Gong, Xue; McDonald, Glenn

    2017-09-01

    Major QTLs for root rhizosheath size are not correlated with grain yield or yield response to phosphorus, but important QTLs were found to improve phosphorus efficiency. Root traits are important for phosphorus (P) acquisition, but they are often difficult to characterize and their breeding value is seldom assessed under field conditions. This has cast doubt on using seedling-based criteria of root traits to select and breed for P efficiency. Eight root traits were assessed under controlled conditions in a barley doubled-haploid population in soils differing in P levels. The population was also phenotyped for grain yield, normalized difference vegetation index (NDVI), grain P uptake and P utilization efficiency at maturity (PutE_GY) under field conditions. Several quantitative trait loci (QTLs) from the root screening and the field trials were coincident. QTLs for root rhizosheath size and root diameter explained the highest phenotypic variation in comparison to QTLs for other root traits. Shared QTLs were found between root diameter and grain yield, and between total root length and PutE_GY. A common major QTL for rhizosheath size and NDVI was mapped to the HvMATE gene marker on chromosome 4H. Collocations between major QTLs for NDVI and grain yield were detected on chromosomes 6H and 7H. When results from BIP and MET analyses were combined, the QTLs detected for grain yield were also those found for NDVI. QTLs qGY5H, qGY6H and qGY7Hb were robust QTLs for improving P efficiency. Selection of multiple loci may be needed to optimize breeding outcomes due to the QTL × environment interaction. We suggest that rhizosheath size alone is not a reliable trait for predicting P efficiency or grain yield.

  11. Evolution Projects Yield Results

    ERIC Educational Resources Information Center

    Sparks, Sarah D.

    2010-01-01

    When a federal court in 2005 rejected an attempt by the Dover, Pennsylvania, school board to introduce intelligent design as an alternative to evolution to explain the development of life on Earth, it sparked a renaissance in involvement among scientists in K-12 science instruction. Now, some of those teaching programs, studies, and research…

  12. Reliability Correction for Functional Connectivity: Theory and Implementation

    PubMed Central

    Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng

    2016-01-01

    Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
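
    The attenuation correction applied in this study follows Spearman's classic formula: an observed correlation is divided by the square root of the product of the two measurements' reliabilities. A sketch with invented values:

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    # Spearman correction for attenuation: estimate the correlation that
    # would be observed if both regional signals were perfectly reliable.
    return r_observed / math.sqrt(rel_x * rel_y)

# A correlation involving a low-reliability region (e.g. medial temporal
# lobe in the study) is attenuated; correction raises the estimate.
print(disattenuate(0.30, 0.45, 0.80))
```

This is why the medial temporal lobe's contribution to the default network is underestimated before correction: dividing by a small reliability term recovers a larger true correlation.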

  13. How Big Was It? Getting at Yield

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Walter, W. R.; Ford, S. R.

    2013-12-01

    One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not necessarily a trivial thing. For instance, recorded observations of seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which varies widely, and needs to be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties, and depth. Some formulas are based on assumptions of the explosion having a standard depth-of-burial and observed amplitudes can vary if the actual test is either significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will do this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, material and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimation.
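
    The final step, going from a propagation-corrected source term to a yield, typically uses an empirical magnitude-yield relation of the form mb = a + b·log10(Y). The constants below are illustrative placeholders, not the authors' calibration, and assume a standard depth of burial:

```python
import math

# Hypothetical magnitude-yield calibration: mb = A + B * log10(yield_kt).
A, B = 4.45, 0.75

def mb_from_yield(yield_kt):
    # Predicted body-wave magnitude for a yield in kilotons.
    return A + B * math.log10(yield_kt)

def yield_from_mb(mb):
    # Invert the relation to estimate yield (kt) from magnitude.
    return 10 ** ((mb - A) / B)

print(mb_from_yield(10.0))
print(yield_from_mb(mb_from_yield(10.0)))  # round-trips to ~10 kt
```

Because yield enters logarithmically, a modest magnitude bias from an overburied or underburied shot translates into a large multiplicative error in the yield estimate, which is why the abstract stresses realistic uncertainties.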

  14. Comparison of heavy metal loads in stormwater runoff from major and minor urban roads using pollutant yield rating curves.

    PubMed

    Davis, Brett; Birch, Gavin

    2010-08-01

    Trace metal export by stormwater runoff from a major road and local street in urban Sydney, Australia, is compared using pollutant yield rating curves derived from intensive sampling data. The event loads of copper, lead and zinc are well approximated by logarithmic relationships with respect to total event discharge owing to the reliable appearance of a first flush in pollutant mass loading from urban roads. Comparisons of the yield rating curves for these three metals show that copper and zinc export rates from the local street are comparable with that of the major road, while lead export from the local street is much higher, despite a 45-fold difference in traffic volume. The yield rating curve approach allows problematic environmental data to be presented in a simple yet meaningful manner with less information loss.
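
    A yield rating curve of the logarithmic form described above can be fitted by ordinary least squares on log-transformed discharge. A sketch with fabricated event data (not the Sydney measurements):

```python
import math

# Fabricated events: total event discharge (m^3) and metal load (g).
discharge = [50.0, 120.0, 300.0, 800.0, 2000.0]
load = [5.1, 7.9, 11.2, 14.8, 18.0]

def fit_log_rating_curve(q, m):
    # Least-squares fit of m = a + b * ln(q).
    x = [math.log(v) for v in q]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(m) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, m)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

a, b = fit_log_rating_curve(discharge, load)
print(a, b)
print(a + b * math.log(500.0))  # predicted load for a 500 m^3 event
```

The fitted curve lets events of any size be compared on a common basis, which is how the road and street export rates are contrasted in the study.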

  15. Anomalous effects in the aluminum oxide sputtering yield

    NASA Astrophysics Data System (ADS)

    Schelfhout, R.; Strijckmans, K.; Depla, D.

    2018-04-01

    The sputtering yield of aluminum oxide during reactive magnetron sputtering has been quantified by a new and fast method. The method is based on meticulous determination of the reactive gas consumption during reactive DC magnetron sputtering and has been deployed to determine the sputtering yield of aluminum oxide. The accuracy of the proposed method is demonstrated by comparing its results to the common weight-loss method, excluding secondary effects such as redeposition. Both methods exhibit a decrease in sputtering yield with increasing discharge current; this feature of the aluminum oxide sputtering yield is described for the first time. It resembles the discrepancy between published high sputtering yield values determined by low-current ion beams and the low deposition rate in the poisoned mode during reactive magnetron sputtering. Moreover, the usefulness of the new method arises from its time-resolved capabilities: the evolution of the alumina sputtering yield can now be measured with a resolution of seconds. This reveals the complex dynamical behavior of the sputtering yield. A plausible explanation of the observed anomalies is the balance between retention and out-diffusion of implanted gas atoms; other possible causes are also discussed.

  16. Hawaii Electric System Reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loose, Verne William; Silva Monroy, Cesar Augusto

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of the performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  17. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
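
    The trade-off described, modular sparing and coding versus massive replication, can be illustrated with the standard triple-modular-redundancy (TMR) reliability formula. This is a generic sketch with a perfect voter assumed, not the paper's state-space models:

```python
def tmr_reliability(r):
    # A TMR system survives if at least 2 of 3 identical modules work
    # (perfect majority voter assumed): R = 3r^2 - 2r^3.
    return 3 * r**2 - 2 * r**3

# Replication only pays off when module reliability exceeds 0.5.
for r in (0.99, 0.90, 0.60, 0.40):
    print(r, tmr_reliability(r))
```

For highly reliable modules TMR gives a large gain, but as module reliability drops toward 0.5 the gain vanishes and then reverses, consistent with the finding that replication can be outperformed by cheaper sparing and coding schemes.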

  18. Pocket Handbook on Reliability

    DTIC Science & Technology

    1975-09-01

    exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, O.C. curves, Bayesian analysis. An introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. LEWIS NERI, CHIEF. Includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future

  19. Manual muscle testing and hand-held dynamometry in people with inflammatory myopathy: An intra- and interrater reliability and validity study

    PubMed Central

    Baschung Pfister, Pierrette; Sterkele, Iris; Maurer, Britta; de Bie, Rob A.; Knols, Ruud H.

    2018-01-01

    Manual muscle testing (MMT) and hand-held dynamometry (HHD) are commonly used in people with inflammatory myopathy (IM), but their clinimetric properties have not yet been sufficiently studied. To evaluate the reliability and validity of MMT and HHD, maximum isometric strength was measured in eight muscle groups across three measurement events. To evaluate the reliability of HHD, intra-class correlation coefficients (ICC), standard errors of measurement (SEM) and smallest detectable changes (SDC) were calculated. To measure the reliability of MMT, linear Cohen's kappa was computed for single muscle groups and ICC for the total score. Additionally, correlations between MMT8 and HHD were evaluated with Spearman correlation coefficients. Fifty people with myositis (56±14 years, 76% female) were included in the study. Intra- and interrater reliability of HHD yielded excellent ICCs (0.75–0.97) for all muscle groups, except for interrater reliability of ankle extension (0.61). The corresponding SEMs% ranged from 8 to 28% and the SDCs% from 23 to 65%. The MMT8 total score revealed excellent intra- and interrater reliability (ICC>0.9). Intrarater reliability of single muscle groups was substantial for shoulder and hip abduction, elbow and neck flexion, and hip extension (0.64–0.69); moderate for wrist (0.53) and knee extension (0.49); and fair for ankle extension (0.35). Interrater reliability was moderate for neck flexion (0.54) and hip abduction (0.44); fair for shoulder abduction, elbow flexion, wrist and ankle extension (0.20–0.33); and slight for knee extension (0.08). Correlations between the two tests were low for wrist, knee, ankle, and hip extension; moderate for elbow flexion, neck flexion and hip abduction; and good for shoulder abduction. In conclusion, the MMT8 total score is a reliable assessment of general muscle weakness in people with myositis but not of single muscle groups. In contrast, our results confirm that HHD can be recommended to evaluate
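
    The SEM and SDC values reported alongside the ICCs are conventionally derived from the sample standard deviation and the reliability coefficient. A sketch with invented numbers, not the study's data:

```python
import math

def sem(sd, icc):
    # Standard error of measurement: the noise in a single measurement,
    # given the sample SD and the reliability (ICC).
    return sd * math.sqrt(1.0 - icc)

def sdc(sem_value, z=1.96):
    # Smallest detectable change for a test-retest difference at the
    # 95% confidence level: z * sqrt(2) * SEM.
    return z * math.sqrt(2.0) * sem_value

s = sem(25.0, 0.90)  # hypothetical: strength SD of 25 N, ICC = 0.90
print(s)
print(sdc(s))
```

Dividing SEM and SDC by the sample mean gives the SEM% and SDC% figures quoted in the abstract; a change smaller than the SDC cannot be distinguished from measurement error.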

  20. CERTS: Consortium for Electric Reliability Technology Solutions - Research Highlights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph

    2003-07-30

    Historically, the U.S. electric power industry was vertically integrated, and utilities were responsible for system planning, operations, and reliability management. As the nation moves to a competitive market structure, these functions have been disaggregated, and no single entity is responsible for reliability management. As a result, new tools, technologies, systems, and management processes are needed to manage the reliability of the electricity grid. However, a number of simultaneous trends prevent electricity market participants from pursuing development of these reliability tools: utilities are preoccupied with restructuring their businesses, research funding has declined, and the formation of Independent System Operators (ISOs) and Regional Transmission Organizations (RTOs) to operate the grid means that control of transmission assets is separate from ownership of these assets; at the same time, business uncertainty and changing regulatory policies have created a climate in which needed investment for transmission infrastructure and tools for reliability management has dried up. To address the resulting emerging gaps in reliability R&D, CERTS has undertaken much-needed public interest research on reliability technologies for the electricity grid. CERTS' vision is to: (1) Transform the electricity grid into an intelligent network that can sense and respond automatically to changing flows of power and emerging problems; (2) Enhance reliability management through market mechanisms, including transparency of real-time information on the status of the grid; (3) Empower customers to manage their energy use and reliability needs in response to real-time market price signals; and (4) Seamlessly integrate distributed technologies--including those for generation, storage, controls, and communications--to support the reliability needs of both the grid and individual customers.

  1. Reliability and validity of three pain provocation tests used for the diagnosis of chronic proximal hamstring tendinopathy.

    PubMed

    Cacchio, Angelo; Borra, Fabrizio; Severini, Gabriele; Foglia, Andrea; Musarra, Frank; Taddio, Nicola; De Paulis, Fosco

    2012-09-01

    The clinical assessment of chronic proximal hamstring tendinopathy (PHT) in athletes is a challenge to sports medicine. To be able to compare the results of research and treatments, the methods used to diagnose and evaluate PHT must be clearly defined and reproducible. To assess the reliability and validity of three pain provocation tests used for the diagnosis of PHT. Ninety-two athletes with (N=46) and without (N=46) PHT were examined by one physician and two physiotherapists, who were trained in the examination techniques before the study. The examiners were blinded to the symptoms and identity of the athletes. The three pain provocation tests examined were the Puranen-Orava, bent-knee stretch and modified bent-knee stretch tests. Intraclass correlation coefficients (ICCs) based on the repeated measures analysis of variance were used to analyse the intraexaminer and interexaminer reliability, while sensitivity, specificity, predictive values and likelihood ratios were used to determine the validity of the three tests. The ICC values in all three tests revealed a high correlation (range 0.82 to 0.88) for the interexaminer reliability and a high-to-very high correlation (range 0.87 to 0.93) for the intraexaminer reliability. All three tests displayed a moderate-to-high validity, with the highest degree of validity being yielded by the modified bent-knee stretch test. All three pain provocation tests proved to be of potential value in assessing chronic PHT in athletes. However, we recommend that they be used in conjunction with other objective measures, such as MRI.
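
    The validity measures named above (sensitivity, specificity, predictive values, likelihood ratios) all derive from a 2x2 table of test outcome versus reference diagnosis. A minimal sketch with hypothetical counts (the 40/5/6/41 split below is illustrative, not data from this study):

    ```python
    def diagnostic_validity(tp, fp, fn, tn):
        """Validity measures from a 2x2 table of test result vs. reference diagnosis."""
        sens = tp / (tp + fn)  # sensitivity: true positives among the diseased
        spec = tn / (tn + fp)  # specificity: true negatives among the healthy
        return {
            "sensitivity": sens,
            "specificity": spec,
            "ppv": tp / (tp + fp),        # positive predictive value
            "npv": tn / (tn + fn),        # negative predictive value
            "lr_pos": sens / (1 - spec),  # positive likelihood ratio
            "lr_neg": (1 - sens) / spec,  # negative likelihood ratio
        }

    # Hypothetical counts for one test on 46 athletes with PHT and 46 without
    stats = diagnostic_validity(tp=40, fp=5, fn=6, tn=41)
    print(round(stats["sensitivity"], 2), round(stats["lr_pos"], 1))  # → 0.87 8.0
    ```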

  2. Evaluation of the Condom Barriers Scale for Young Black MSM: Reliability and Validity of Three Sub-Scales

    PubMed Central

    Crosby, Richard; Sanders, Stephanie A.; Graham, Cynthia A.; Milhausen, Robin; Yarber, William L.; Mena, Leandro

    2016-01-01

    Background Reliable and valid scale measures of barriers to condom use are not available for young Black MSM (YBMSM). The purpose of this study was to evaluate the Condom Barriers Scales for application with YBMSM. Methods A clinic-based sample of 600 YBMSM completed a computer-assisted self-interview. The primary measure was a 14-item abbreviated version of the Condom Barriers Scale. Reliability and criterion validity were assessed. Results All three sub-scales were reliable: partner-related barriers (Cronbach’s alpha=.73), sensation-related barriers (alpha=.70), and motivation-related barriers (alpha=.81). A complete absence of barriers was common: 47.0% (partner-related), 30.7% (sensation-related), and 46.5% (motivation-related). Dichotomized sub-scales were significantly associated with reporting any condomless insertive anal sex (all=P < .001) and any condomless receptive anal sex (all=P < .001). The sub-scales were significantly associated with these measures of condomless sex preserved at a continuous level (all=P <.001, except for sensation barriers associated with condomless receptive anal sex =.03). Further, the sub-scales were significantly associated with reporting any condom use problems (all =P <.001) and a measure of condomless oral sex (all =P <.001, except for partner-related barriers =.31). Finally, the sensation-related barriers sub-scale was significantly associated with testing positive for Chlamydia and/or gonorrhea (P=.049). Conclusions The three identified sub-scales yielded adequate reliability and strong evidence of validity, thereby suggesting the utility of these brief measures for use in observational and experimental research with YBMSM. PMID:28081044
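
    The sub-scale reliabilities reported above are Cronbach's alpha values. A minimal pure-Python sketch of the computation, on made-up item scores (the 3-item, 4-respondent data below is illustrative only):

    ```python
    from statistics import pvariance

    def cronbach_alpha(items):
        """Cronbach's alpha; `items` holds one list of respondent scores per scale item."""
        k = len(items)
        # Sum of per-item score variances and variance of respondents' total scores
        item_var = sum(pvariance(scores) for scores in items)
        totals = [sum(resp) for resp in zip(*items)]
        return k / (k - 1) * (1 - item_var / pvariance(totals))

    # Hypothetical scores for a 3-item sub-scale answered by 4 respondents
    items = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]]
    print(round(cronbach_alpha(items), 2))  # → 0.98
    ```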

  3. Characterizing bias correction uncertainty in wheat yield predictions

    NASA Astrophysics Data System (ADS)

    Ortiz, Andrea Monica; Jones, Julie; Freckleton, Robert; Scaife, Adam

    2017-04-01

    Farming systems are under increased pressure due to current and future climate change, variability and extremes. Research on the impacts of climate change on crop production typically relies on the output of complex Global and Regional Climate Models, which are used as input to crop impact models. Yield predictions from these top-down approaches can have high uncertainty for several reasons, including diverse model construction and parameterization, future emissions scenarios, and inherent or response uncertainty. These uncertainties propagate down each step of the 'cascade of uncertainty' that flows from climate input to impact predictions, leading to yield predictions that may be too complex for their intended use in practical adaptation options. In addition to uncertainty from impact models, uncertainty can also stem from the intermediate steps that are used in impact studies to adjust climate model simulations to become more realistic when compared to observations, or to correct the spatial or temporal resolution of climate simulations, which are often not directly applicable as input into impact models. These important steps of bias correction or calibration also add uncertainty to final yield predictions, given the various approaches that exist to correct climate model simulations. In order to address how much uncertainty the choice of bias correction method can add to yield predictions, we use several evaluation runs from Regional Climate Models from the Coordinated Regional Downscaling Experiment over Europe (EURO-CORDEX) at different resolutions together with different bias correction methods (linear and variance scaling, power transformation, quantile-quantile mapping) as input to a statistical crop model for wheat, a staple European food crop. The objective of our work is to compare the resulting simulation-driven hindcasted wheat yields to climate observation-driven wheat yield hindcasts from the UK and Germany in order to determine ranges of yield

  4. Improving Reliability of a Residency Interview Process

    PubMed Central

    Serres, Michelle L.; Gundrum, Todd E.

    2013-01-01

    Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, our largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while confirmatory phase 3 was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact from content specificity. PMID:24159209

  5. Process yield improvements with process control terminal for varian serial ion implanters

    NASA Astrophysics Data System (ADS)

    Higashi, Harry; Soni, Ameeta; Martinez, Larry; Week, Ken

    Implant processes in a modern wafer production fab are extremely complex. There can be several types of misprocessing, i.e. wrong dose or species, double implants and missed implants. Process Control Terminals (PCT) for Varian 350Ds installed at Intel fabs were found to substantially reduce the number of misprocessing steps. This paper describes those misprocessing steps and their subsequent reduction with use of PCTs. Reliable and simple process control with serial process ion implanters has been in increasing demand. A well designed process control terminal greatly increases device yield by monitoring all pertinent implanter functions and enabling process engineering personnel to set up process recipes for simple and accurate system operation. By programming user-selectable interlocks, implant errors are reduced and those that occur are logged for further analysis and prevention. A process control terminal should also be compatible with office personal computers for greater flexibility in system use and data analysis. The impact of a capable process control terminal is increased productivity and, ultimately, higher device yield.

  6. Structural reliability assessment of the Oman India Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Sharif, A.M.; Preston, R.

    1996-12-31

    Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.

  7. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters for yield estimation in the wine and table grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or using four images per cluster from different orientations. The best results (R(2) between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.

  8. Recent patterns of crop yield growth and stagnation.

    PubMed

    Ray, Deepak K; Ramankutty, Navin; Mueller, Nathaniel D; West, Paul C; Foley, Jonathan A

    2012-01-01

    In the coming decades, continued population growth, rising meat and dairy consumption and expanding biofuel use will dramatically increase the pressure on global agriculture. Even as we face these future burdens, there have been scattered reports of yield stagnation in the world's major cereal crops, including maize, rice and wheat. Here we study data from ∼2.5 million census observations across the globe extending over the period 1961-2008. We examined the trends in crop yields for four key global crops: maize, rice, wheat and soybeans. Although yields continue to increase in many areas, we find that across 24-39% of maize-, rice-, wheat- and soybean-growing areas, yields either never improve, stagnate or collapse. This result underscores the challenge of meeting increasing global agricultural demands. New investments in underperforming regions, as well as strategies to continue increasing yields in the high-performing areas, are required.

  9. [Reliability and reproducibility of the Fitzpatrick phototype scale for skin sensitivity to ultraviolet light].

    PubMed

    Sánchez, Guillermo; Nova, John; Arias, Nilsa; Peña, Bibiana

    2008-12-01

    The Fitzpatrick phototype scale has been used to determine skin sensitivity to ultraviolet light. The reliability of this scale in estimating sensitivity permits risk evaluation of skin cancer based on phototype. Reliability and changes in intra- and inter-observer concordance were determined for the Fitzpatrick phototype scale after the assessment methods for establishing the phototype were standardized. An analytical study of intra- and inter-observer concordance was performed. The Fitzpatrick phototype scale was standardized using focus group methodology. To determine intra- and inter-observer agreement, the weighted kappa statistic was applied. The standardization effect was measured using the equal kappa contrast hypothesis and Wald test for dependent measurements. The phototype scale was applied to 155 patients over 15 years of age who were assessed four times by two independent observers. The sample was drawn from patients of the Centro Dermatológico Federico Lleras Acosta. During the pre-standardization phase, the baseline and six-week inter-observer weighted kappa values were 0.31 and 0.40, respectively. The intra-observer kappa values for observers A and B were 0.47 and 0.51, respectively. After the standardization process, the baseline and six-week inter-observer weighted kappa values were 0.77 and 0.82, respectively. Intra-observer kappa coefficients for observers A and B were 0.78 and 0.82. Statistically significant differences were found between coefficients before and after standardization (p<0.001) in all comparisons. Following a standardization exercise, the Fitzpatrick phototype scale yielded reliable, reproducible and consistent results.
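
    The weighted kappa used above penalizes disagreements by their distance on the ordinal phototype scale. A minimal pure-Python sketch of linear-weighted Cohen's kappa, on made-up ratings (the five-patient example below is illustrative, not study data):

    ```python
    from collections import Counter

    def linear_weighted_kappa(r1, r2, categories):
        """Linear-weighted Cohen's kappa for two raters on an ordinal scale."""
        n, k = len(r1), len(categories)
        idx = {c: i for i, c in enumerate(categories)}
        weight = lambda a, b: abs(idx[a] - idx[b]) / (k - 1)  # linear disagreement weight
        # Observed vs. chance-expected (from rater marginals) weighted disagreement
        observed = sum(weight(a, b) for a, b in zip(r1, r2)) / n
        m1, m2 = Counter(r1), Counter(r2)
        expected = sum(weight(a, b) * m1[a] * m2[b]
                       for a in categories for b in categories) / n ** 2
        return 1 - observed / expected

    # Hypothetical phototype ratings (I-IV) of five patients by two observers
    r1 = ["I", "II", "III", "II", "IV"]
    r2 = ["I", "II", "II", "II", "IV"]
    print(round(linear_weighted_kappa(r1, r2, ["I", "II", "III", "IV"]), 2))  # → 0.81
    ```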

  10. Estimating variability in grain legume yields across Europe and the Americas

    NASA Astrophysics Data System (ADS)

    Cernay, Charles; Ben-Ari, Tamara; Pelzer, Elise; Meynard, Jean-Marc; Makowski, David

    2015-06-01

    Grain legume production in Europe has recently come under scrutiny. Although legume crops are often promoted to provide environmental services, European farmers tend to turn to non-legume crops. It is assumed that high variability in legume yields explains this aversion, but so far this hypothesis has not been tested. Here, we estimate the variability of major grain legume and non-legume yields in Europe and the Americas from yield time series over 1961-2013. Results show that grain legume yields are significantly more variable than non-legume yields in Europe. These differences are smaller in the Americas. Our results are robust to the choice of statistical method. In all regions, crops with high yield variability are allocated to less than 1% of cultivated areas. Although the expansion of grain legumes in Europe may be hindered by high yield variability, some species display risk levels compatible with the development of specialized supply chains.
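
    A common summary of yield variability in time-series comparisons like the one above is the coefficient of variation (SD divided by mean), often computed after detrending. A minimal sketch on made-up yield series (the crop names and numbers below are illustrative only):

    ```python
    from statistics import mean, pstdev

    def yield_cv(series):
        """Coefficient of variation of a yield time series (higher = more variable)."""
        return pstdev(series) / mean(series)

    # Hypothetical yields (t/ha) over six seasons: a legume vs. a cereal
    pea = [2.1, 3.4, 1.8, 3.0, 2.2, 3.5]
    wheat = [6.8, 7.1, 6.5, 7.0, 6.9, 7.2]
    print(yield_cv(pea) > yield_cv(wheat))  # → True
    ```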

  11. Interactive effects of pests increase seed yield.

    PubMed

    Gagic, Vesna; Riggi, Laura Ga; Ekbom, Barbara; Malsher, Gerard; Rusch, Adrien; Bommarco, Riccardo

    2016-04-01

    Loss in seed yield and therefore decrease in plant fitness due to simultaneous attacks by multiple herbivores is not necessarily additive, as demonstrated in evolutionary studies on wild plants. However, it is not clear how this transfers to crop plants that grow in very different conditions compared to wild plants. Nevertheless, loss in crop seed yield caused by any single pest is most often studied in isolation although crop plants are attacked by many pests that can cause substantial yield losses. This is especially important for crops able to compensate and even overcompensate for the damage. We investigated the interactive impacts on crop yield of four insect pests attacking different plant parts at different times during the cropping season. In 15 oilseed rape fields in Sweden, we estimated the damage caused by seed and stem weevils, pollen beetles, and pod midges. Pest pressure varied drastically among fields with very low correlation among pests, allowing us to explore interactive impacts on yield from attacks by multiple species. The plant damage caused by each pest species individually had, as expected, either no, or a negative impact on seed yield and the strongest negative effect was caused by pollen beetles. However, seed yield increased when plant damage caused by both seed and stem weevils was high, presumably due to the joint plant compensatory reaction to insect attack leading to overcompensation. Hence, attacks by several pests can change the impact on yield of individual pest species. Economic thresholds based on single species, on which pest management decisions currently rely, may therefore result in economically suboptimal choices being made and unnecessary excessive use of insecticides.

  12. Escherichia coli sampling reliability at a frequently closed Chicago beach: monitoring and management implications

    USGS Publications Warehouse

    Whitman, Richard L.; Nevers, Meredith B.

    2004-01-01

    Monitoring beaches for recreational water quality is becoming more common, but few sampling designs or policy approaches have evaluated the efficacy of monitoring programs. The authors intensively sampled water for E. coli (N=1770) at 63rd Street Beach, Chicago for 6 months in 2000 in order to (1) characterize spatial-temporal trends, (2) determine between- and within-transect variation, and (3) estimate sample size requirements and determine sampling reliability. E. coli counts were highly variable within and between sampling sites but spatially and diurnally autocorrelated. Variation in counts decreased with water depth and time of day. Required number of samples was high for 70% precision around the critical closure level (i.e., 6 within- or 24 between-transect replicates). Since spatial replication may be cost prohibitive, composite sampling is an alternative once sources of error have been well defined. The results suggest that beach monitoring programs may be requiring too few samples to fulfill desired management objectives. As the recreational water quality national database is developed, it is important that sampling strategies are empirically derived from a thorough understanding of the sources of variation and the reliability of collected data. Greater monitoring efficacy will yield better policy decisions, risk assessments, programmatic goals, and future usefulness of the information.

  13. On-orbit spacecraft reliability

    NASA Technical Reports Server (NTRS)

    Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

    1978-01-01

    Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.

  14. Hawaii electric system reliability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  15. Reliability of temporal summation and diffuse noxious inhibitory control

    PubMed Central

    Cathcart, Stuart; Winefield, Anthony H; Rolan, Paul; Lushington, Kurt

    2009-01-01

    BACKGROUND: The test-retest reliability of temporal summation (TS) and diffuse noxious inhibitory control (DNIC) has not been reported to date. Establishing such reliability would support the possibility of future experimental studies examining factors affecting TS and DNIC. Similarly, the use of manual algometry to induce TS, or an occlusion cuff to induce DNIC of TS to mechanical stimuli, has not been reported to date. Such devices may offer a simpler method than current techniques for inducing TS and DNIC, affording assessment at more anatomical locations and in more varied research settings. METHOD: The present study assessed the test-retest reliability of TS and DNIC using the above techniques. Sex differences on these measures were also investigated. RESULTS: Repeated measures ANOVA indicated successful induction of TS and DNIC, with no significant differences across test-retest occasions. Sex effects were not significant for any measure or interaction. Intraclass correlations indicated high test-retest reliability for all measures; however, there was large interindividual variation between test and retest measurements. CONCLUSION: The present results indicate acceptable within-session test-retest reliability of TS and DNIC. The results support the possibility of future experimental studies examining factors affecting TS and DNIC. PMID:20011713

  16. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation involve uncertainties, and they can neither reflect the reliability status of the RPS dynamically nor support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital RPS (safety-critical), by which the relationship between the reliability and response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed with the proposed method. The results show that the proposed method can estimate the RPS reliability effectively and provide support for maintenance and troubleshooting of the digital RPS.

  17. Anomalous DD and TT yields relative to the DT yield in inertial-confinement-fusion implosions

    NASA Astrophysics Data System (ADS)

    Casey, Daniel T.

    2011-10-01

    Measurements of the D(d,p)T (DD), T(t,2n)4He (TT) and D(t,n)4He (DT) reactions have been conducted using deuterium-tritium gas-filled inertial confinement fusion (ICF) implosions. In these experiments, which were carried out at the OMEGA laser facility, absolute spectral measurements of the DD protons and TT neutrons were conducted and compared to neutron-time-of-flight measured DT-neutron yields. From these measurements, it is concluded that the DD yield is anomalously low and the TT yield is anomalously high relative to the DT yield, an effect that is enhanced with increasing ion temperature. These results can be explained by an enrichment of tritium in the core of an ICF implosion, which may be present in ignition experiments planned on the National Ignition Facility. In addition, the spectral measurements of the TT-neutron spectrum were conducted for the first time at reactant central-mass energies in the range of 15-30 keV. The results from these measurements indicate that the TT reaction proceeds primarily through the direct three-body reaction channel, producing a continuous TT-neutron spectrum in the range 0 - 9.5 MeV. This work was conducted in collaboration with J. A. Frenje, M. Gatu Johnson, M. J.-E. Manuel, H. G. Rinderknecht, N. Sinenian, F. H. Seguin, C. K. Li, R. D. Petrasso, P. B. Radha, J. A. Delettrez, V. Yu Glebov, D. D. Meyerhofer, T. C. Sangster, D. P. McNabb, P. A. Amendt, R. N. Boyd, J. R. Rygg, H. W. Herrmann, Y. H. Kim, G. P. Grim and A. D. Bacher. This work was supported in part by the U.S. Department of Energy (Grant No. DE-FG03-03SF22691), LLE (subcontract Grant No. 412160-001G), LLNL (subcontract Grant No. B504974).

  18. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  19. Test-retest reliability of memory task functional magnetic resonance imaging in Alzheimer disease clinical trials.

    PubMed

    Atri, Alireza; O'Brien, Jacqueline L; Sreenivasan, Aishwarya; Rastegar, Sarah; Salisbury, Sibyl; DeLuca, Amy N; O'Keefe, Kelly M; LaViolette, Peter S; Rentz, Dorene M; Locascio, Joseph J; Sperling, Reisa A

    2011-05-01

    To examine the feasibility and test-retest reliability of encoding-task functional magnetic resonance imaging (fMRI) in mild Alzheimer disease (AD). Randomized, double-blind, placebo-controlled study. Memory clinical trials unit. We studied 12 patients with mild AD (mean [SEM] Mini-Mental State Examination score, 24.0 [0.7]; mean Clinical Dementia Rating score, 1.0) who had been taking donepezil hydrochloride for more than 6 months from the placebo arm of a larger 24-week study (n = 24, 4 scans on weeks 0, 6, 12, and 24, respectively). Placebo and 3 face-name, paired-associate encoding, block-design blood oxygenation level-dependent fMRI scans in 12 weeks. We performed whole-brain t maps (P < .001, 5 contiguous voxels) and hippocampal regions-of-interest analyses of extent (percentage of active voxels) and magnitude (percentage of signal change) for novel-greater-than-repeated face-name contrasts. We also calculated intraclass correlation coefficients and power estimates for hippocampal regions of interest. Task tolerability and data yield were high (95 of 96 scans yielded favorable-quality data). Whole-brain maps were stable. Right and left hippocampal regions-of-interest intraclass correlation coefficients were 0.59 to 0.87 and 0.67 to 0.74, respectively. To detect 25.0% to 50.0% changes in week-0 to week-12 hippocampal activity using left-right extent or right magnitude with 80.0% power (2-sided α = .05) requires 14 to 51 patients. Using left magnitude requires 125 patients because of relatively small signal to variance ratios. Encoding-task fMRI was successfully implemented in a single-site, 24-week, AD randomized controlled trial. Week 0 to 12 whole-brain t maps were stable, and test-retest reliability of hippocampal fMRI measures ranged from moderate to substantial. Right hippocampal magnitude may be the most promising of these candidate measures in a leveraged context. These initial estimates of test-retest reliability and power justify evaluation of

  20. Beyond reliability to profitability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, T.H.; Mitchell, J.S.

    1996-07-01

    Reliability concerns have controlled much of power generation design and operations. Emerging from a strictly regulated environment, profitability is becoming a much more important concept for today's power generation executives. This paper discusses the conceptual advance: view power plant maintenance as a profit center, go beyond reliability, and embrace profitability. Profit Centered Maintenance begins with the premise that financial considerations, namely profitability, drive most aspects of modern process and manufacturing operations. Profit Centered Maintenance is a continuous process of reliability and administrative improvement and optimization. For power generation executives with troublesome maintenance programs, Profit Centered Maintenance can be the blueprint to increased profitability. It requires a culture change to make decisions based on value, to reengineer the administration of maintenance, and to enable the people performing and administering maintenance to make the most of available maintenance information technology. The key steps are to optimize the physical function of maintenance and to resolve recurring maintenance problems so that the need for maintenance can be reduced. Profit Centered Maintenance is more than just an attitude; it is a path to profitability, whether that results in increased profits or increased market share.

  1. Modulus and yield stress of drawn LDPE

    NASA Astrophysics Data System (ADS)

    Thavarungkul, Nandh

    Modulus and yield stress were investigated in drawn low-density polyethylene (LDPE) film. Uniaxially drawn polymeric films usually show high values of modulus and yield stress; however, studies have normally only been conducted to identify the structural features that determine modulus. In this study, small-angle x-ray scattering (SAXS), thermal shrinkage, birefringence, differential scanning calorimetry (DSC), and dynamic mechanical thermal analysis (DMTA) were used to examine, directly and indirectly, the structural features that determine both modulus and yield stress, which are often closely related in undrawn materials. Shish-kebab structures are proposed to account for the mechanical properties in drawn LDPE. The validity of this molecular/morphological model was tested using relationships between static mechanical data and structural and physical parameters. In addition, dynamic mechanical results are also in line with static data in supporting the model. In the machine direction (MD), "shish" and taut tie molecules (TTM) anchored in the crystalline phase account for the modulus (E), whereas crystal lamellae, with contributions from "shish" and TTM, determine yield stress. In the transverse direction (TD), the crystalline phase plays an important role in both modulus and yield stress. Modulus is determined by crystal lamellae functioning as platelet reinforcing elements in the amorphous matrix, with an additional contribution from TTM, and yield stress is determined by the crystal lamellae's resistance to deformation.

  2. Maximized exoEarth candidate yields for starshades

    NASA Astrophysics Data System (ADS)

    Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.

    2016-10-01

    The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate, in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or η ≳ 0.3.

  3. Probabilistic Structural Analysis and Reliability Using NESSUS With Implemented Material Strength Degradation Model

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI), along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor, and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab, established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis, encompassing the effects of high temperature and high-cycle fatigue, yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in distribution type except for a change in

  4. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges in verifying a reliable design versus a trusted design?

  5. Refinement and evaluation of the Massachusetts firm-yield estimator model version 2.0

    USGS Publications Warehouse

    Levin, Sara B.; Archfield, Stacey A.; Massey, Andrew J.

    2011-01-01

    to assess the sensitivity of firm-yield estimates to errors in daily-streamflow input data. Results of the Monte Carlo simulations indicate that underestimation of the lowest stream inflows can cause firm yields to be underestimated by an average of 1 to 10 percent. Errors in the stage-storage relation can arise when the point density of bathymetric survey measurements is too low. Existing bathymetric surfaces were resampled using hypothetical transects of varying patterns and point densities in order to quantify the uncertainty in stage-storage relations. Reservoir-volume calculations and resulting firm yields were accurate to within 5 percent when point densities were greater than 20 points per acre of reservoir surface. Methods were also developed for incorporating summer water-demand-reduction scenarios into the firm-yield model and for relaxing the no-fail reliability criterion. Although the original firm-yield model allowed monthly reservoir releases to be specified, there have been no previous studies examining the feasibility of controlled releases for downstream flows from Massachusetts reservoirs. Two controlled-release scenarios were tested, with and without a summer water-demand-reduction scenario, for a scenario with a no-fail criterion and a scenario that allows for a 1-percent failure rate over the entire simulation period. Based on these scenarios, about one-third of the reservoir systems were able to support the flow-release scenarios at their 2000–2004 usage rates. Reservoirs with higher storage ratios (reservoir storage capacity to mean annual streamflow) and lower demand ratios (mean annual water demand to annual firm yield) were capable of higher downstream release rates. For the purposes of this research, all reservoir systems were assumed to have structures that enable controlled releases, although this assumption may not be true for many of the reservoirs studied.
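    The firm-yield concept these studies rely on (the largest constant draft a reservoir can supply without failing over the inflow record) can be illustrated with a mass-balance simulation wrapped in a bisection search. This is a simplified sketch, not the Massachusetts firm-yield estimator itself; the empty initial storage, the spill handling, and the units are all assumptions:

```python
def survives(inflows, capacity, draft, start_storage=0.0):
    """Monthly mass balance; True if storage never goes negative."""
    s = start_storage
    for q in inflows:
        s = min(capacity, s + q - draft)  # spill anything above capacity
        if s < 0:
            return False
    return True

def firm_yield(inflows, capacity, tol=1e-6):
    """Largest constant draft that survives the whole record, by bisection."""
    lo, hi = 0.0, max(inflows) + capacity
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if survives(inflows, capacity, mid):
            lo = mid
        else:
            hi = mid
    return lo
```

    With a constant inflow and an initially empty reservoir, the firm yield equals the inflow rate; real records are limited by the worst drought sequence instead.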

  6. Comparison of specific-yield estimates for calculating evapotranspiration from diurnal groundwater-level fluctuations

    NASA Astrophysics Data System (ADS)

    Gribovszki, Zoltán

    2018-05-01

    Methods that use diurnal groundwater-level fluctuations are commonly used in shallow water-table environments to estimate evapotranspiration (ET) and recharge. The key element needed to obtain reliable estimates is the specific yield (Sy), a soil-water storage parameter that depends on, among other factors, unsaturated soil-moisture and water-table fluxes. Soil-moisture profile measurement down to the water table, along with water-table-depth measurements, can provide a good opportunity to calculate Sy values even on a sub-daily scale. These values were compared with Sy estimates derived by traditional techniques, and it was found that slug-test-based Sy values gave the most similar results in a sandy soil environment. Therefore, slug-test methods, which are relatively cheap and require little time, were most suited to estimating Sy for use with diurnal fluctuations. The reason for this is that the timeframe of the slug-test measurement is very similar to the dynamics of the diurnal signal. The dynamic character of Sy was also analyzed on a sub-daily scale (depending mostly on the speed of drainage from the soil profile), and a remarkable difference was found in Sy with respect to the rate of change of the water table. When comparing constant and sub-daily (dynamic) Sy values for ET estimation, the sub-daily Sy application yielded higher correlation, but only a slightly smaller deviation from the control ET method, compared with the use of a constant Sy.
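    The usual way diurnal water-table fluctuations are turned into an ET estimate is the White (1932) method, in which Sy enters as a direct multiplier; a minimal sketch under an assumed sign convention (r is the recovery rate inferred from the pre-dawn rise, ds the net water-table decline over the day):

```python
def white_et(sy, recovery_rate, net_decline):
    """Daily ET (mm/day) via the White method: ET = Sy * (24 * r + ds),
    with r the night-time recovery rate (mm/h) and ds the net daily
    water-table decline (mm); both are taken as positive here."""
    return sy * (24.0 * recovery_rate + net_decline)

# e.g. Sy = 0.10, r = 0.5 mm/h, net decline of 2 mm over the day
et = white_et(0.10, 0.5, 2.0)
```

    Because Sy multiplies the whole expression, any error in Sy propagates proportionally into ET, which is why the choice of Sy estimation technique matters so much in this abstract.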

  7. Trade-off between reservoir yield and evaporation losses as a function of lake morphology in semi-arid Brazil.

    PubMed

    Campos, José N B; Lima, Iran E; Studart, Ticiana M C; Nascimento, Luiz S V

    2016-05-31

    This study investigates the relationships between yield and evaporation as a function of lake morphology in semi-arid Brazil. First, a new methodology was proposed to classify the morphology of 40 reservoirs in the Ceará State, with storage capacities ranging from approximately 5 to 4,500 hm³. Then, Monte Carlo simulations were conducted to study the effect of reservoir morphology (including real and simplified conical forms) on the water storage process at different reliability levels. The reservoirs were categorized as convex (60.0%), slightly convex (27.5%) or linear (12.5%). When the conical approximation was used instead of the real lake form, a trade-off occurred between reservoir yield and evaporation losses, with different trends for the convex, slightly convex and linear reservoirs. Using the conical approximation, the water yield prediction errors reached approximately 5% of the mean annual inflow, which is negligible for large reservoirs. However, for smaller reservoirs, this error became important. Therefore, this paper presents a new procedure for correcting the yield-evaporation relationships that were obtained by assuming a conical approximation rather than the real reservoir morphology. The combination of this correction with the Regulation Triangle Diagram is useful for rapidly and objectively predicting reservoir yield and evaporation losses in semi-arid environments.
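    A conical approximation of the kind evaluated above collapses the stage-area-storage relation to a single shape parameter, which is what makes the yield-evaporation trade-off easy to compute; this sketch uses a hypothetical parameterization A(h) = α·h², not the paper's formulation:

```python
def cone_surface_area(volume, alpha):
    """Invert V(h) = alpha * h**3 / 3 for stage h, then return the
    free-water surface area A(h) = alpha * h**2."""
    h = (3.0 * volume / alpha) ** (1.0 / 3.0)
    return alpha * h * h

def evaporation_volume(volume, alpha, evap_depth):
    """Evaporation loss over a period = surface area x evaporated depth."""
    return cone_surface_area(volume, alpha) * evap_depth
```

    Because surface area grows as V^(2/3), two reservoirs with equal storage but different shape parameters lose different volumes to evaporation, which is the source of the trade-off the paper corrects for.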

  8. Climate change and maize yield in Iowa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hong; Twine, Tracy E.; Girvetz, Evan

    Climate is changing across the world, including the major maize-growing state of Iowa in the USA. To maintain crop yields, farmers will need a suite of adaptation strategies, and choice of strategy will depend on how the local to regional climate is expected to change. Here we predict how maize yield might change through the 21st century as compared with late 20th century yields across Iowa, USA, a region representing ideal climate and soils for maize production that contributes substantially to the global maize economy. To account for climate model uncertainty, we drive a dynamic ecosystem model with output from six climate models and two future climate forcing scenarios. Despite a wide range in the predicted amount of warming and change to summer precipitation, all simulations predict a decrease in maize yields from late 20th century to middle and late 21st century ranging from 15% to 50%. Linear regression of all models predicts a 6% state-averaged yield decrease for every 1°C increase in warm season average air temperature. When the influence of moisture stress on crop growth is removed from the model, yield decreases either remain the same or are reduced, depending on predicted changes in warm season precipitation. Lastly, our results suggest that even if maize were to receive all the water it needed, under the strongest climate forcing scenario yields will decline by 10-20% by the end of the 21st century.

  9. Climate change and maize yield in Iowa

    DOE PAGES

    Xu, Hong; Twine, Tracy E.; Girvetz, Evan

    2016-05-24

    Climate is changing across the world, including the major maize-growing state of Iowa in the USA. To maintain crop yields, farmers will need a suite of adaptation strategies, and choice of strategy will depend on how the local to regional climate is expected to change. Here we predict how maize yield might change through the 21st century as compared with late 20th century yields across Iowa, USA, a region representing ideal climate and soils for maize production that contributes substantially to the global maize economy. To account for climate model uncertainty, we drive a dynamic ecosystem model with output from six climate models and two future climate forcing scenarios. Despite a wide range in the predicted amount of warming and change to summer precipitation, all simulations predict a decrease in maize yields from late 20th century to middle and late 21st century ranging from 15% to 50%. Linear regression of all models predicts a 6% state-averaged yield decrease for every 1°C increase in warm season average air temperature. When the influence of moisture stress on crop growth is removed from the model, yield decreases either remain the same or are reduced, depending on predicted changes in warm season precipitation. Lastly, our results suggest that even if maize were to receive all the water it needed, under the strongest climate forcing scenario yields will decline by 10-20% by the end of the 21st century.
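    The regression result above, a 6% state-averaged yield decrease per 1°C of warm-season warming, can be applied as a back-of-envelope projection; the baseline yield below is hypothetical:

```python
def projected_yield(baseline_mg_ha, warming_c, sensitivity=0.06):
    """Scale a baseline yield down by `sensitivity` (fraction per deg C of
    warm-season warming), per the linear regression across the six models."""
    return baseline_mg_ha * (1.0 - sensitivity * warming_c)

# hypothetical 10 Mg/ha baseline under 3 deg C of warming
y = projected_yield(10.0, 3.0)
```

    A linear sensitivity like this is only a local approximation; the simulations themselves span a 15% to 50% decline depending on model and scenario.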

  10. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from replicated redundant hardware, and the modeling of factors that can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
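    A concrete instance of the replicated-hardware modeling discussed here is triple modular redundancy (TMR): under an assumed constant failure rate λ per unit and a perfect majority voter, the system survives while at least two of three replicas work. A minimal sketch of the textbook formula, not of the article's own models:

```python
import math

def r_simplex(lam, t):
    """Reliability of one unit with constant failure rate lam."""
    return math.exp(-lam * t)

def r_tmr(lam, t):
    """TMR with a perfect voter survives while >= 2 of 3 units work:
    R = 3*R1**2 - 2*R1**3."""
    r1 = r_simplex(lam, t)
    return 3.0 * r1 ** 2 - 2.0 * r1 ** 3
```

    TMR outperforms a single unit only while R1 > 0.5 (i.e., t < ln 2 / λ); beyond that point the redundancy becomes a liability, which is one reason mission length matters in these models.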

  11. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  12. Qualitative and quantitative assessment of DNA quality of frozen beef based on DNA yield, gel electrophoresis and PCR amplification and their correlations to beef quality.

    PubMed

    Zhao, Jing; Zhang, Ting; Liu, Yongfeng; Wang, Xingyu; Zhang, Lan; Ku, Ting; Quek, Siew Young

    2018-09-15

    Freezing is a practical method for meat preservation, but the quality of frozen meat can deteriorate with storage time. This research investigated the effect of frozen storage time (up to 66 months) on changes in DNA yield, purity and integrity in beef, and further analyzed the correlation between beef quality (moisture content, protein content, TVB-N value and pH value) and DNA quality in an attempt to establish a reliable, high-throughput method for meat quality control. Results showed that frozen storage time influenced the yield and integrity of DNA significantly (p < 0.05). The DNA yield decreased as frozen storage time increased due to DNA degradation. The half-life (t1/2 = ln 2/0.015) was calculated as 46 months. The DNA quality degraded dramatically with increased storage time based on gel electrophoresis results. Polymerase chain reaction (PCR) products from both mitochondrial DNA (mtDNA) and nuclear DNA (nDNA) were observed in all frozen beef samples. Using real-time PCR for quantitative assessment of DNA and meat quality revealed that correlations could be established successfully with mathematical models to evaluate frozen beef quality. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Automated lung volumetry from routine thoracic CT scans: how reliable is the result?

    PubMed

    Haas, Matthias; Hamm, Bernd; Niehues, Stefan M

    2014-05-01

    Today, lung volumes can be easily calculated from chest computed tomography (CT) scans. Modern postprocessing workstations allow automated volume measurement of the data sets acquired. However, there are challenges in the use of lung volume as an indicator of pulmonary disease when it is obtained from routine CT. Intra-individual variation and methodologic aspects have to be considered. Our goal was to assess the reliability of volumetric measurements in routine CT lung scans. Forty adult cancer patients whose lungs were unaffected by the disease underwent routine chest CT scans at 3-month intervals, resulting in a total of 302 chest CT scans. Lung volume was calculated by automatic volumetry software. On average, 7.2 CT scans were successfully evaluable per patient (range 2-15). Intra-individual changes were assessed. In the set of patients investigated, lung volume was approximately normally distributed, with a mean of 5283 cm³ (standard deviation = 947 cm³, skewness = -0.34, and kurtosis = 0.16). Between different scans of the same patient, the median intra-individual standard deviation in lung volume was 853 cm³ (16% of the mean lung volume). Automatic lung segmentation of routine chest CT scans allows a technically stable estimation of lung volume. However, substantial intra-individual variations have to be considered. A median intra-individual deviation of 16% in lung volume between different routine scans was found. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  14. Measurements of branching fraction ratios and CP asymmetries in suppressed B⁻ → D(→K⁺π⁻)K⁻ and B⁻ → D(→K⁺π⁻)π⁻ decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaltonen, T.; Brucken, E.; Devoto, F.

    2011-11-01

    We report the first reconstruction in hadron collisions of the suppressed decays B⁻ → D(→K⁺π⁻)K⁻ and B⁻ → D(→K⁺π⁻)π⁻, sensitive to the Cabibbo-Kobayashi-Maskawa phase γ, using data from 7 fb⁻¹ of integrated luminosity collected by the CDF II detector at the Tevatron collider. We reconstruct a signal for the B⁻ → D(→K⁺π⁻)K⁻ suppressed mode with a significance of 3.2 standard deviations, and measure the ratios of the suppressed to favored branching fractions R(K) = [22.0 ± 8.6(stat) ± 2.6(syst)] × 10⁻³, R⁺(K) = [42.6 ± 13.7(stat) ± 2.8(syst)] × 10⁻³, R⁻(K) = [3.8 ± 10.3(stat) ± 2.7(syst)] × 10⁻³, as well as the direct CP-violating asymmetry A(K) = -0.82 ± 0.44(stat) ± 0.09(syst) of this mode. Corresponding quantities for the B⁻ → D(→K⁺π⁻)π⁻ decay are also reported.

  15. A methodology for producing reliable software, volume 1

    NASA Technical Reports Server (NTRS)

    Stucki, L. G.; Moranda, P. B.; Foshee, G.; Kirchoff, M.; Omre, R.

    1976-01-01

    An investigation into the areas having an impact on producing reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of this investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.

  16. High yield fabrication of fluorescent nanodiamonds

    PubMed Central

    Boudou, Jean-Paul; Curmi, Patrick; Jelezko, Fedor; Wrachtrup, Joerg; Aubert, Pascal; Sennour, Mohamed; Balasubramanian, Gopalakrischnan; Reuter, Rolf; Thorel, Alain; Gaffet, Eric

    2009-01-01

    A new fabrication method to produce homogeneously fluorescent nanodiamonds in high yield is described. The powder obtained by high-energy ball milling of fluorescent high-pressure, high-temperature diamond microcrystals was converted into a pure, concentrated aqueous colloidal dispersion of highly crystalline ultrasmall nanoparticles with a mean size less than or equal to 10 nm. The overall fabrication yield of colloidal quasi-spherical nanodiamonds was several orders of magnitude higher than those previously reported starting from microdiamonds. The results open up avenues for the industrial, cost-effective production of fluorescent nanodiamonds with well-controlled properties. PMID:19451687

  17. Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples

    ERIC Educational Resources Information Center

    Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

    2005-01-01

    This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…

  18. The interrater reliability of DSM III in children.

    PubMed

    Werry, J S; Methven, R J; Fitzpatrick, J; Dixon, H

    1983-09-01

    A total of 195 admissions to a child psychiatric inpatient unit were diagnosed independently by two to four clinicians on the basis of case presentations at the first ward round after admission. The DSM-III as a whole and its major categories were of high or acceptable reliability, though a few were clearly unreliable. The results are generally consistent with other studies. Unlike other studies, the subcategories were examined and found to vary widely in reliability, both across the system as a whole and within parent major categories, throwing considerable doubt upon their utility. The results indicate the need both for improved diagnostic data-gathering techniques in child psychiatry and for more, better-designed studies of reliability and, most importantly, of validity.

  19. Evaluating Proposed Investments in Power System Reliability and Resilience: Preliminary Results from Interviews with Public Utility Commission Staff

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaCommare, Kristina; Larsen, Peter; Eto, Joseph

    Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system, in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).

  20. Blackleg (Leptosphaeria maculans) Severity and Yield Loss in Canola in Alberta, Canada

    PubMed Central

    Hwang, Sheau-Fang; Strelkov, Stephen E.; Peng, Gary; Ahmed, Hafiz; Zhou, Qixing; Turnbull, George

    2016-01-01

    Blackleg, caused by Leptosphaeria maculans, is an important disease of oilseed rape (Brassica napus L.) in Canada and throughout the world. Severe epidemics of blackleg can result in significant yield losses. Understanding disease-yield relationships is a prerequisite for measuring the agronomic efficacy and economic benefits of control methods. Field experiments were conducted in 2013, 2014, and 2015 to determine the relationship between blackleg disease severity and yield in a susceptible cultivar and in moderately resistant to resistant canola hybrids. Disease severity was lower, and seed yield was 120%–128% greater, in the moderately resistant to resistant hybrids compared with the susceptible cultivar. Regression analysis showed that pod number and seed yield declined linearly as blackleg severity increased. Seed yield per plant decreased by 1.8 g for each unit increase in disease severity, corresponding to a decline in yield of 17.2% for each unit increase in disease severity. Pyraclostrobin fungicide reduced disease severity in all site-years and increased yield. These results show that the reduction of blackleg in canola crops substantially improves yields. PMID:27447676
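    The severity-yield relationship reported above is a linear regression; a minimal ordinary least-squares sketch, with hypothetical data points constructed to match the reported slope of about -1.8 g of seed yield per unit of disease severity:

```python
def ols_fit(x, y):
    """Ordinary least-squares fit y = a + b * x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# hypothetical (severity, seed yield per plant in g) observations
severity = [0.0, 1.0, 2.0, 3.0, 4.0]
seed_yield = [10.4, 8.6, 6.8, 5.0, 3.2]   # exactly linear, slope -1.8
intercept, slope = ols_fit(severity, seed_yield)
```

    With real field data the points scatter around the line, and the slope estimate carries a confidence interval rather than being exact as in this constructed example.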

  1. Mind the Roots: Phenotyping Below-Ground Crop Diversity and Its Influence on Final Yield

    NASA Astrophysics Data System (ADS)

    Nieters, C.; Guadagno, C. R.; Lemli, S.; Hosseini, A.; Ewers, B. E.

    2017-12-01

    Changes in global climate patterns and water regimes are having profound impacts on worldwide crop production. An ever-growing population, paired with increasing temperatures and unpredictable periods of severe drought, calls for accurate modeling of future crop yield. Although novel approaches are being developed in high-throughput, above-ground image phenotyping, the below-ground plant system is still poorly phenotyped. Measurements of plant root morphology and hydraulics are needed to inform mathematical models that reliably estimate yields of crops grown in sub-optimal conditions. We used Brassica rapa to inform our model, as it is a globally cultivated crop with several functionally diverse cultivars. Specifically, we used 7 different accessions from oilseed (R500 and Yellow Sarson), leafy types (Pac choi and Chinese cabbage), a vegetable turnip, and two Wisconsin Fast Plants (Imb211 and Fast Plant self-compatible), which have shorter life cycles and potentially large differences in allocation to roots. Bi-weekly, we harvested above- and below-ground biomass to compare the varieties in terms of carbon allocation throughout their life cycles. Using WinRhizo software, we analyzed root system length and surface area to compare and contrast root morphology among cultivars. Our results confirm that root structural characteristics are crucial to explaining plant water use and carbon allocation. The root:shoot ratio reveals a significant (p < 0.01) difference among crop accessions. To validate the procedure across different varieties and life stages, we also compared surface-area results from the image-based technology to dry biomass, finding a strong linear relationship (R² = 0.85). To assess the influence of diverse above-ground morphology on the root system, we also measured above-ground anatomical and physiological traits such as gas exchange, chlorophyll content, and chlorophyll a fluorescence. A thorough analysis of the root system will clarify carbon dynamics and hydraulics at

  2. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
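    The simplest concrete example of the Bayesian machinery such a course covers is the conjugate Beta-Binomial update for a success probability (e.g., demand-based reliability); the prior and data below are hypothetical:

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate update: Beta(a, b) prior -> Beta(a + s, b + f) posterior."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# uniform Beta(1, 1) prior, then 9 successful demands and 1 failure
a_post, b_post = beta_binomial_update(1.0, 1.0, 9, 1)
reliability_estimate = beta_mean(a_post, b_post)
```

    The same update applied sequentially, observation by observation, gives the same posterior as applying it once to the pooled counts, which is what makes conjugate models attractive for the growth-model settings mentioned above.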

  3. High-yield maize with large net energy yield and small global warming intensity

    PubMed Central

    Grassini, Patricio; Cassman, Kenneth G.

    2012-01-01

    Addressing concerns about future food supply and climate change requires management practices that maximize productivity per unit of arable land while reducing negative environmental impact. On-farm data were evaluated to assess energy balance and greenhouse gas (GHG) emissions of irrigated maize in Nebraska that received large nitrogen (N) fertilizer (183 kg of N⋅ha−1) and irrigation water inputs (272 mm or 2,720 m3 ha−1). Although energy inputs (30 GJ⋅ha−1) were larger than those reported for US maize systems in previous studies, irrigated maize in central Nebraska achieved higher grain and net energy yields (13.2 Mg⋅ha−1 and 159 GJ⋅ha−1, respectively) and lower GHG-emission intensity (231 kg of CO2e⋅Mg−1 of grain). Greater input-use efficiencies, especially for N fertilizer, were responsible for better performance of these irrigated systems, compared with much lower-yielding, mostly rainfed maize systems in previous studies. Large variation in energy inputs and GHG emissions across irrigated fields in the present study resulted from differences in applied irrigation water amount and imbalances between applied N inputs and crop N demand, indicating potential to further improve environmental performance through better management of these inputs. Observed variation in N-use efficiency, at any level of applied N inputs, suggests that an N-balance approach may be more appropriate for estimating soil N2O emissions than the Intergovernmental Panel on Climate Change approach based on a fixed proportion of applied N. Negative correlation between GHG-emission intensity and net energy yield supports the proposition that achieving high yields, large positive energy balance, and low GHG emissions in intensive cropping systems are not conflicting goals. PMID:22232684

  4. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG) is described: a program that produces reliability models from block diagrams for ASSIST, the input interface for the reliability evaluation tool SURE. The motivation for RMG is given, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed example traces.
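    RMG's own algorithms appear only in the report's appendices; as a generic sketch of how block-diagram reliability is evaluated (not RMG's actual method), series blocks multiply component reliabilities, while parallel (redundant) blocks multiply failure probabilities:

    ```python
    from math import prod

    def series(reliabilities):
        # A series arrangement works only if every block works.
        return prod(reliabilities)

    def parallel(reliabilities):
        # A parallel (redundant) arrangement fails only if every block fails.
        return 1.0 - prod(1.0 - r for r in reliabilities)

    # Two redundant sensors (0.9 each) feeding one processor (0.95).
    r = series([parallel([0.9, 0.9]), 0.95])
    print(round(r, 4))  # 0.9405
    ```

    Nested calls compose naturally, mirroring how a block diagram nests series and parallel groups.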

  5. The effect of the stability threshold on time to stabilization and its reliability following a single leg drop jump landing.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2016-02-08

    We aimed to provide insight into how threshold selection affects time to stabilization (TTS) and its reliability, to support the selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single-leg drop jump landings. The TTS was calculated from four processed signals: the raw ground reaction force (GRF) signal (RAW), a moving root-mean-square window (RMS), a sequential average (SA), or an unbounded third-order polynomial fit (TOP). For each trial and processing method, a wide range of thresholds was applied. Per threshold, the reliability of the TTS was assessed through intra-class correlation coefficients (ICCs) for the vertical (V), anteroposterior (AP) and mediolateral (ML) directions of force. Low thresholds resulted in a sharp increase in TTS values and in the percentage of trials in which the TTS exceeded the trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICCs were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) across the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V and 'moderate' (0.6-0.8) for AP and ML. The ICCs for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold for assessing TTS in elite youth soccer players following a single-leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.
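    One common TTS definition, which may differ in detail from the processing used in this study, treats the TTS as the first time after which the processed signal never again exceeds the threshold. A minimal sketch with a synthetic decaying signal (all values invented):

    ```python
    import math

    def time_to_stabilization(signal, threshold, dt):
        """First time after which |signal| never exceeds the threshold.

        Returns None when the signal is still exceeding at trial end.
        Assumed TTS definition for illustration; studies differ in the
        exact criterion and in how the signal is pre-processed.
        """
        last_exceed = -1
        for i, x in enumerate(signal):
            if abs(x) > threshold:
                last_exceed = i
        if last_exceed == len(signal) - 1:
            return None  # never stabilized within the trial
        return (last_exceed + 1) * dt

    # Decaying oscillation sampled at 1000 Hz.
    sig = [math.exp(-3 * t / 1000) * math.cos(0.05 * t) for t in range(1000)]
    print(time_to_stabilization(sig, threshold=0.1, dt=0.001))
    ```

    The abstract's observation follows directly from this structure: lowering the threshold pushes the last exceedance later, inflating the TTS and eventually causing it to exceed the trial duration.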

  6. Transit Reliability Information Program : PATCO-WMATA Propulsion System Reliability/Productivity Analysis

    DOT National Transportation Integrated Search

    1984-10-01

    The Transit Reliability Information Program (TRIP) is a government-initiated program to assist the transit industry in satisfying its need for transit reliability information. TRIP provides this assistance through the operation of a national data ban...

  7. Measuring teacher self-report on classroom practices: Construct validity and reliability of the Classroom Strategies Scale-Teacher Form.

    PubMed

    Reddy, Linda A; Dudek, Christopher M; Fabiano, Gregory A; Peters, Stephanie

    2015-12-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. Information is provided about the construct validity, internal consistency, test-retest reliability, and freedom from item bias of the scales. Given previous investigations with the CSS Observer Form, it was hypothesized that internal consistency would be adequate and that confirmatory factor analyses (CFA) of CSS-T data from 293 classrooms would offer empirical support for the CSS-T's Total, Composite, and subscales and yield a factor structure similar to that of the CSS Observer Form. Goodness-of-fit indices (χ2/df, Root Mean Square Error of Approximation, Goodness of Fit Index, and Adjusted Goodness of Fit Index) suggested satisfactory fit of the proposed CFA models, whereas the Comparative Fit Index did not. Internal consistency estimates of .93 and .94 were obtained for the Instructional Strategies and Behavioral Strategies Total scales, respectively. Adequate test-retest reliability was found for the instructional and behavioral total scales (r = .79 and r = .84; percent agreement 93% and 93%). The CSS-T evidences freedom from item bias on important teacher demographics (age, educational degree, and years of teaching experience). Implications of the results are discussed. (c) 2015 APA, all rights reserved.

  8. Estimation of reliability of predictions and model applicability domain evaluation in the analysis of acute toxicity (LD50).

    PubMed

    Sazonovas, A; Japertas, P; Didziapetris, R

    2010-01-01

    This study presents a new type of acute toxicity (LD50) prediction that enables automated assessment of the reliability of predictions (synonymous with assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). The analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces, a combination called the Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends both on the compound's similarity to the training set (which accounts for similar trends in LD50 variations within multiple bootstraps) and on the consistency of experimental results with the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with Root Mean Square Error (RMSE) in all validation sets, thus providing a quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and for prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.

  9. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies the traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool-set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps manage the inherent complexity of the design assessment process and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool-set structure. ADTS research and development to date has focused on a language for specifying system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS, which will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  10. Lower Bounds to the Reliabilities of Factor Score Estimators.

    PubMed

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  11. Controlled-rate freezer cryopreservation of highly concentrated peripheral blood mononuclear cells results in higher cell yields and superior autologous T-cell stimulation for dendritic cell-based immunotherapy.

    PubMed

    Buhl, Timo; Legler, Tobias J; Rosenberger, Albert; Schardt, Anke; Schön, Michael P; Haenssle, Holger A

    2012-11-01

    Availability of large quantities of functionally effective dendritic cells (DC) represents one of the major challenges for immunotherapeutic trials against infectious or malignant diseases. Low numbers or insufficient T-cell activation of DC may result in premature termination of treatment and unsatisfying immune responses in clinical trials. Based on the notion that cryopreservation of monocytes is superior to cryopreservation of immature or mature DC in terms of resulting DC quantity and immuno-stimulatory capacity, we aimed to establish an optimized protocol for the cryopreservation of highly concentrated peripheral blood mononuclear cells (PBMC) for DC-based immunotherapy. Cryopreserved cell preparations were analyzed regarding quantitative recovery, viability, phenotype, and functional properties. In contrast to standard isopropyl alcohol (IPA) freezing, PBMC cryopreservation in an automated controlled-rate freezer (CRF) with subsequent thawing and differentiation resulted in significantly higher cell yields of immature and mature DC. Immature DC yields and total protein content after using CRF were comparable with results obtained with freshly prepared PBMC and exceeded results of standard IPA freezing by approximately 50 %. While differentiation markers, allogeneic T-cell stimulation, viability, and cytokine profiles were similar to DC from standard freezing procedures, DC generated from CRF-cryopreserved PBMC induced a significantly higher antigen-specific IFN-γ release from autologous effector T cells. In summary, automated controlled-rate freezing of highly concentrated PBMC represents an improved method for increasing DC yields and autologous T-cell stimulation.

  12. The Clinical Research Tool: a high-performance microdialysis-based system for reliably measuring interstitial fluid glucose concentration.

    PubMed

    Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido

    2009-05-01

    A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use, to offer high analytical accuracy and reliability. It was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. In vitro characterization with buffered glucose solutions (c(glucose) = 0–26 × 10⁻³ mol liter⁻¹) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min, with periodic flow rate variation amounting to 24 ± 7%. The first 120 h of in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Using 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. 2009 Diabetes Technology Society.
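    The MARE figure used above can be illustrated with a minimal sketch, assuming MARE is the mean of |measured − reference| / reference over paired readings; the paper's exact calibration and pairing procedure is not reproduced, and the readings below are invented.

    ```python
    def mare(measured, reference):
        """Mean absolute relative error, in percent.

        Assumed definition for illustration: mean of
        |measured - reference| / reference over paired readings.
        """
        errors = [abs(m - r) / r for m, r in zip(measured, reference)]
        return 100.0 * sum(errors) / len(errors)

    # Illustrative paired sensor vs. blood glucose readings (mmol/l).
    sensor    = [5.2, 6.9, 10.1, 4.4]
    reference = [5.0, 7.0, 10.0, 5.0]
    print(round(mare(sensor, reference), 2))  # 4.61
    ```

    Note that dividing by the reference value makes large relative errors at low glucose concentrations dominate, which is one reason error grid analyses are reported alongside MARE.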

  13. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description both of the uncertainty in the model's parameters and of the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction; this evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
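    A heavily simplified sketch of the interval-output idea, not the paper's optimization-based IPM formulation: fit a central least-squares model, then take the tightest constant-width band around it that contains every observation. The data are invented.

    ```python
    # Minimal interval-predictor sketch.  The paper's IPMs are far more
    # general (hyper-rectangular parameter sets, minimal-spread
    # optimization, outlier elimination); this shows only the idea of
    # predicting an interval rather than a point.

    def fit_interval_predictor(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        intercept = my - slope * mx
        # Smallest half-width so the band covers all observations.
        half_width = max(abs(y - (slope * x + intercept))
                         for x, y in zip(xs, ys))
        def predict(x):
            center = slope * x + intercept
            return center - half_width, center + half_width
        return predict

    predict = fit_interval_predictor([0, 1, 2, 3], [0.1, 0.9, 2.2, 2.8])
    lo, hi = predict(1.5)
    print(lo, hi)
    ```

    By construction every training observation falls inside its predicted interval; the paper's contribution is bounding the probability that *future* observations do too.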

  14. Inter-Rater Reliability and Intra-Rater Reliability of Assessing the 2-Minute Push-Up Test.

    PubMed

    Fielitz, Lynn; Coelho, Jeffrey; Horne, Thomas; Brechue, William

    2016-02-01

    The purpose of this study was to assess the inter-rater and intra-rater reliability of the 2-minute, 90° push-up test as utilized in the Army Physical Fitness Test. Analysis of rater assessment reliability included both total score agreement and agreement across individual push-up repetitions. The study utilized 8 raters who assessed 15 different videotaped push-up performances over 4 iterations separated by a minimum of 1 week. The 15 push-up participants were videotaped during the semiannual Army Physical Fitness Test. Each rater viewed the 15 push-up performances in random order and verbally responded with a "yes" or "no" to each push-up repetition. The data were analyzed using the Pearson product-moment correlation as well as kappa, modified kappa, and the intra-class correlation coefficient ICC(3,1). An attribute agreement analysis was conducted to determine the percentage of inter-rater and intra-rater agreement across individual push-ups. The results indicated that raters varied a great deal in assessing push-ups. Over the 4 trials of 15 participants, the overall scores of the raters varied between 3.0 and 35.7 push-ups. Post hoc comparisons found a significant increase in the grand mean of push-ups from trials 1-3 to trial 4 (p < 0.05), and there was a significant difference among raters over the 4 trials (p < 0.05). Inter-rater reliability coefficients were between 0.10 and 0.97, and intra-rater coefficients were between 0.48 and 0.99. Intra-rater agreement for individual push-up repetitions ranged from 41.8% to 84.8%. The results indicated that the raters failed to assess the same push-up repetition with the same score (below 70% agreement) and failed to agree between raters (29%). Interestingly, as previously mentioned, scores on trial 4 increased significantly, which might have been caused by rater drift or that the raters did not maintain
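    The chance-corrected agreement statistics used here can be illustrated with Cohen's kappa on yes/no repetition judgements; the two raters' judgements below are invented and unrelated to the study's data.

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' binary (1 = counted) judgements."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal "yes" proportion.
        pa = sum(rater_a) / n
        pb = sum(rater_b) / n
        expected = pa * pb + (1 - pa) * (1 - pb)
        return (observed - expected) / (1 - expected)

    # Illustrative repetition-by-repetition judgements for one performance.
    a = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
    b = [1, 1, 0, 0, 1, 1, 1, 1, 0, 1]
    print(round(cohens_kappa(a, b), 3))  # 0.524
    ```

    Here raw agreement is 80%, yet kappa is only about 0.52 because both raters say "yes" often, so much of that agreement is expected by chance; this is why the study reports kappa and ICC alongside percent agreement.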

  15. Reliability of the Melbourne assessment of unilateral upper limb function.

    PubMed

    Randall, M; Carlin, J B; Chondros, P; Reddihough, D

    2001-11-01

    This study examines the reliability of the Melbourne Assessment of Unilateral Upper Limb Function: a quantitative test of quality of movement in children with neurological impairment. The assessment was administered to 20 children aged from 5 to 16 years (mean age 9 years 10 months, SD 2 years 10 months) who had various types and degrees of cerebral palsy (CP). The performances of the 20 children during assessment were videotaped for subsequent scoring by 15 occupational therapists. Scores were analyzed for internal consistency of test items, inter- and intrarater reliability of scorings of the same videotapes, and test-retest reliability using repeat videotaping. Results revealed very high internal consistency of test items (alpha=0.96), moderate to high agreement both within and between raters for all test items (intraclass correlations of at least 0.7) apart from item 16 (hand to mouth and down), and high interrater reliability (0.95) and intrarater reliability (0.97) for total test scores. Test-retest results revealed moderate to high intrarater reliability for item totals (mean of 0.83 and 0.79) for each rater and high reliability for test totals (0.98 and 0.97). These findings indicate that the Melbourne Assessment of Unilateral Upper Limb Function is a reliable tool for measuring the quality of unilateral upper-limb movement in children with CP.

  16. Yield performance and stability of CMS-based triticale hybrids.

    PubMed

    Mühleisen, Jonathan; Piepho, Hans-Peter; Maurer, Hans Peter; Reif, Jochen Christoph

    2015-02-01

    CMS-based triticale hybrids showed only marginal midparent heterosis for grain yield and lower dynamic yield stability compared with inbred lines. Hybrids of triticale (×Triticosecale Wittmack) are expected to possess outstanding yield performance and increased dynamic yield stability. The objectives of the present study were to (1) examine the optimum choice of biometrical model for comparing the yield stability of hybrids versus lines, (2) investigate whether hybrids exhibit more pronounced grain yield performance and yield stability, and (3) study optimal strategies for predicting the yield stability of hybrids. Thirteen female and seven male parental lines, their 91 factorial hybrids, and 30 commercial lines were evaluated for grain yield in up to 20 environments. Hybrids were produced using a cytoplasmic male sterility (CMS)-inducing cytoplasm that originated from Triticum timopheevii Zhuk. We found that the choice of biometrical model can cause contrasting results and concluded that a group-by-environment interaction term should be added to the model when estimating the stability variance of hybrids and lines. Midparent heterosis for grain yield was on average 3%, with a range from -15.0 to 11.5%. No hybrid outperformed the best inbred line. Hybrids had, on average, lower dynamic yield stability than the inbred lines. Grain yield performance of hybrids could be predicted from midparent values and general combining ability (GCA)-predicted values. In contrast, the stability variance of hybrids could be predicted only from GCA-predicted values. We speculate that negative effects of the CMS cytoplasm used might be the reason for the low performance and yield stability of the hybrids. A detailed study of the drawbacks of the currently existing CMS system in triticale is therefore urgently required, including the search for potential alternative hybridization systems.

  17. Reliability of reference distances used in photogrammetry.

    PubMed

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

    The aim of this study was to determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 ± 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three for the frontal view were established. Standardized photographs were taken, and all parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing the direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated-measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values differed statistically from the direct values, except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects; the indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained for the indirect values of the N-Sn parameter in female subjects and the Sn-Me parameter in male subjects according to the Sa-Sba reference distance. For frontal measurements, the indirect values differed statistically from the direct values in both sexes, except for one parameter in male subjects: the indirect values were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, the Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.

  18. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

    This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  19. Gene expression information improves reliability of receptor status in breast cancer patients

    PubMed Central

    Kenn, Michael; Schlangen, Karin; Castillo-Tong, Dan Cacsire; Singer, Christian F.; Cibena, Michael; Koelbl, Heinz; Schreiner, Wolfgang

    2017-01-01

    Immunohistochemical (IHC) determination of receptor status in breast cancer patients is frequently inaccurate. Since it directs the choice of systemic therapy, it is essential to increase its reliability. We increase the validity of IHC receptor expression by additionally considering gene expression (GE) measurements. Crisp therapeutic decisions are based on IHC estimates, even when these are only borderline reliable. We further improve decision quality through a responsibility function defining a critical domain for gene expression. A refined normalization is devised to file any newly diagnosed patient into existing databases. Our approach renders receptor estimates more reliable by identifying patients with questionable receptor status. It is also more efficient, since the rate of conclusive samples is increased. We have curated and evaluated gene expression data, together with clinical information, from 2880 breast cancer patients. Combining IHC with gene expression information yields a method both more reliable and more efficient than common practice to date. Several types of possibly suboptimal treatment allocations, based on IHC receptor status alone, are enumerated. A ‘therapy allocation check’ identifies patients possibly misclassified: estrogen, 8% false negative and 6% false positive; progesterone, 14% false negative and 11% false positive; HER2, 2% false negative and 50% false positive. Possible implications are discussed. We propose an ‘expression look-up-plot’, offering significant potential to improve the quality of precision medicine. Methods are developed and exemplified here for breast cancer patients, but they may readily be transferred to diagnostic data relevant for therapeutic decisions in other fields of oncology. PMID:29100391

  20. Inventing the future of reliability: FERC's recent orders and the consolidation of reliability authority

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skees, J. Daniel

    2010-06-15

    The Energy Policy Act of 2005 established mandatory reliability standard enforcement under a system in which the Federal Energy Regulatory Commission and the Electric Reliability Organization would have their own spheres of responsibility and authority. Recent orders, however, reflect the Commission's frustration with the reliability standard drafting process and suggest that the Electric Reliability Organization's discretion is likely to receive less deference in the future. (author)

  1. Multi-scale modeling to relate Be surface temperatures, concentrations and molecular sputtering yields

    NASA Astrophysics Data System (ADS)

    Lasa, Ane; Safi, Elnaz; Nordlund, Kai

    2015-11-01

    Recent experiments and molecular dynamics (MD) simulations show erosion rates of Be exposed to deuterium (D) plasma varying with surface temperature and the correlated D concentration. Little is understood about how these three parameters are related for Be surfaces, although this is essential for reliable prediction of impurity transport and plasma-facing material lifetime in current (JET) and future (ITER) devices. A multi-scale exercise is presented here to relate Be surface temperatures, concentrations, and sputtering yields. The kinetic Monte Carlo (KMC) code MMonCa is used to estimate equilibrium D concentrations in Be at different temperatures. Mixed Be-D surfaces corresponding to the KMC profiles are then generated in MD to calculate Be-D molecular erosion yields due to D irradiation. With this new database implemented in the 3D MC impurity transport code ERO, modeling scenarios studying wall erosion, such as RF-induced enhanced limiter erosion or the main-wall surface temperature scans run at JET, can be revisited with higher confidence. Work supported by U.S. DOE under Contract DE-AC05-00OR22725.

  2. Comparing reliabilities of strip and conventional patch testing.

    PubMed

    Dickel, Heinrich; Geier, Johannes; Kreft, Burkhard; Pfützner, Wolfgang; Kuss, Oliver

    2017-06-01

    The standardized protocol for performing the strip patch test has proven to be valid, but evidence on its reliability is still missing. To estimate the parallel-test reliability of the strip patch test as compared with the conventional patch test, 132 subjects were enrolled in this multicentre, prospective, randomized, investigator-blinded reliability study. Simultaneous duplicate strip and conventional patch tests were performed with the Finn Chambers® on Scanpor® tape test system and the patch test preparations nickel sulfate 5% pet., potassium dichromate 0.5% pet., and lanolin alcohol 30% pet. Reliability was estimated with Cohen's kappa coefficient. Parallel-test reliability values of the three standard patch test preparations turned out to be acceptable, with slight advantages for the strip patch test. The differences in reliability were 9% (95% CI: -8% to 26%) for nickel sulfate and 23% (95% CI: -16% to 63%) for potassium dichromate, both favouring the strip patch test. The standardized strip patch test method for the detection of allergic contact sensitization in patients with suspected allergic contact dermatitis is reliable. Its application in routine clinical practice can be recommended, especially when the conventional patch test result is presumably false negative. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic across industries. A man-machine-environment system is a complex system composed of human factors, machinery and equipment, and the environment. The reliability of each individual factor must be analyzed in order to transition gradually to the study of three-factor reliability. Meanwhile, the dynamic relationships among man, machine, and environment should be considered in order to establish an effective fuzzy evaluation mechanism that truly and effectively analyzes the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, and theories of human error, environmental impact, and machinery failure, the reliabilities of the human factor, the machinery and equipment, and the environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain a weighted result, which indicated that the reliability value of this chemical production system was 86.29. Against the given evaluation domain, this shows that the reliability of the integrated man-machine-environment system is in good status; effective measures for further improvement were proposed according to the fuzzy calculation results.
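    The final weighting step can be illustrated with a minimal sketch of a weighted comprehensive evaluation; the factor scores and weights below are invented, and the paper's fuzzy membership functions and evaluation domain are not reproduced.

    ```python
    # Weighted combination of per-factor reliability scores (0-100).
    # Scores and weights are illustrative, not the paper's values.

    def weighted_evaluation(scores, weights):
        """Weighted mean of factor scores; weights are normalized."""
        total = sum(weights)
        return sum(s * w for s, w in zip(scores, weights)) / total

    system_reliability = weighted_evaluation(
        scores=[82.0, 90.0, 85.0],    # human, machine, environment
        weights=[0.4, 0.35, 0.25],    # assumed relative importance
    )
    print(round(system_reliability, 2))  # 85.55
    ```

    A full fuzzy comprehensive evaluation would replace each crisp score with a membership vector over rating grades and combine those vectors before defuzzifying to a single value such as the 86.29 reported here.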

  4. Reliability systems for implantable cardiac defibrillator batteries

    NASA Astrophysics Data System (ADS)

    Takeuchi, Esther S.

    The reliability of the power sources used in implantable cardiac defibrillators is critical due to the life-saving nature of the device. Achieving a high reliability power source depends on several systems functioning together. Appropriate cell design is the first step in assuring a reliable product. Qualification of critical components and of the cells using those components is done prior to their designation as implantable grade. Product consistency is assured by control of manufacturing practices and verified by sampling plans using both accelerated and real-time testing. Results to date show that lithium/silver vanadium oxide cells used for implantable cardiac defibrillators have a calculated maximum random failure rate of 0.005% per test month.

  5. Reliability analysis of airship remote sensing system

    NASA Astrophysics Data System (ADS)

    Qin, Jun

    1998-08-01

    The Airship Remote Sensing System (ARSS), used to obtain dynamic or real-time images in remote sensing of catastrophes and the environment, is a complex mixed system whose sensor platform is a remotely controlled airship. The achievement of a remote sensing mission depends on a series of factors, so it is very important to analyze the reliability of the ARSS. First, the system model was simplified from a multi-state system to a two-state system on the basis of the results of failure mode and effects analysis (FMEA) and failure mode, effects and criticality analysis (FMECA). A fault tree was created after analyzing all factors and their interrelations. This fault tree includes four branches: the engine subsystem, the remote control subsystem, the airship structure subsystem, and the flight-meteorology and climate subsystem. By way of fault tree analysis and classification of basic events, the weak links were discovered. Test runs showed no difference from the theoretical analysis. In accordance with the above conclusions, a plan for reliability growth and reliability maintenance was proposed. System reliability was raised from 89 percent to 92 percent through the redesign of the man-machine interactive interface and the addition of a secondary backup group and secondary remote control equipment.
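The arithmetic behind such an improvement can be sketched with a series system built from the four fault-tree branches. All subsystem reliabilities below are hypothetical, chosen only to show how adding a redundant remote-control unit lifts the system figure by a few points:

```python
# Series-system reliability from subsystem reliabilities, and the effect of
# adding a redundant (parallel) unit. All numbers are hypothetical.
from functools import reduce

def series(rels):
    """All subsystems must work: R = product of subsystem reliabilities."""
    return reduce(lambda a, b: a * b, rels)

def parallel(r, n=2):
    """n redundant units; the branch fails only if all n fail."""
    return 1 - (1 - r) ** n

subsystems = {"engine": 0.97, "remote_control": 0.95,
              "structure": 0.99, "weather": 0.98}
r_base = series(subsystems.values())

# Back up the weakest branch (remote control) with a second unit.
subsystems["remote_control"] = parallel(0.95)
r_improved = series(subsystems.values())
print(round(r_base, 3), round(r_improved, 3))  # prints 0.894 0.939
```

The pattern mirrors the abstract's result: a series system is only as strong as its weakest branch, so redundancy in that branch gives the largest system-level gain.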

  6. Influence of yield surface curvature on the macroscopic yielding and ductile failure of isotropic porous plastic materials

    NASA Astrophysics Data System (ADS)

    Dæhli, Lars Edvard Bryhni; Morin, David; Børvik, Tore; Hopperstad, Odd Sture

    2017-10-01

    Numerical unit cell models of an approximate representative volume element for a porous ductile solid are utilized to investigate differences in the mechanical response between a quadratic and a non-quadratic matrix yield surface. A Hershey equivalent stress measure with two distinct values of the yield surface exponent is employed as the matrix description. Results from the unit cell calculations are further used to calibrate a heuristic extension of the Gurson model which incorporates effects of the third deviatoric stress invariant. An assessment of the porous plasticity model reveals its ability to describe the unit cell response to some extent, although it underestimates the effect of the Lode parameter at the lower triaxiality ratios imposed in this study. Ductile failure predictions by means of finite element simulations using a unit cell model that resembles an imperfection band are then conducted to examine how the non-quadratic matrix yield surface influences the failure strain as compared to the quadratic matrix yield surface. Further, strain localization predictions based on bifurcation analyses and imperfection band analyses are undertaken using the calibrated porous plasticity model. These simulations are then compared to the unit cell calculations in order to elucidate the differences between the various modelling strategies. The current study reveals that strain localization analyses using an imperfection band model and a spatially discretized unit cell are in reasonable agreement, while the bifurcation analyses predict higher strain levels at localization. Imperfection band analyses are finally used to calculate failure loci for the quadratic and the non-quadratic matrix yield surface under a wide range of loading conditions. The underlying matrix yield surface is demonstrated to have a pronounced influence on the onset of strain localization.
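The Hershey (Hosford) equivalent stress referred to above can be written in principal stress space as σ_eq = [(|σ1−σ2|^a + |σ2−σ3|^a + |σ3−σ1|^a)/2]^(1/a), where the exponent a sets the yield surface curvature: a = 2 recovers the quadratic von Mises surface, while larger exponents (a ≈ 6 for BCC, a ≈ 8 for FCC metals are common choices) flatten it between the vertices. A small numerical sketch:

```python
# Hershey/Hosford equivalent stress in principal stress space.
# a = 2 reduces to von Mises; larger exponents flatten the yield surface.

def hershey(s1, s2, s3, a):
    return (0.5 * (abs(s1 - s2) ** a
                   + abs(s2 - s3) ** a
                   + abs(s3 - s1) ** a)) ** (1.0 / a)

# Uniaxial tension returns the applied stress for any exponent.
print(round(hershey(100.0, 0.0, 0.0, 2), 6))   # prints 100.0
print(round(hershey(100.0, 0.0, 0.0, 8), 6))   # prints 100.0

# For a multiaxial state the two exponents give different measures,
# which is exactly the curvature effect the study probes:
print(round(hershey(100.0, 50.0, 0.0, 2), 1))  # 86.6 (von Mises value)
print(round(hershey(100.0, 50.0, 0.0, 8), 1))  # 91.8
```

Because both exponents coincide in uniaxial tension but diverge for multiaxial states, the exponent changes the predicted response only where stress states are genuinely multiaxial, i.e., inside the voided unit cell.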

  7. Nitrate radical oxidation of γ-terpinene: hydroxy nitrate, total organic nitrate, and secondary organic aerosol yields

    NASA Astrophysics Data System (ADS)

    Slade, Jonathan H.; de Perre, Chloé; Lee, Linda; Shepson, Paul B.

    2017-07-01

    Polyolefinic monoterpenes represent a potentially important but understudied source of organic nitrates (ONs) and secondary organic aerosol (SOA) following oxidation due to their high reactivity and propensity for multi-stage chemistry. Recent modeling work suggests that the oxidation of polyolefinic γ-terpinene can be the dominant source of nighttime ON in a mixed forest environment. However, the ON yields, aerosol partitioning behavior, and SOA yields from γ-terpinene oxidation by the nitrate radical (NO3), an important nighttime oxidant, have not been determined experimentally. In this work, we present a comprehensive experimental investigation of the total (gas + particle) ON, hydroxy nitrate, and SOA yields following γ-terpinene oxidation by NO3. Under dry conditions, the hydroxy nitrate yield = 4(+1/-3) %, total ON yield = 14(+3/-2) %, and SOA yield ≤ 10 % under atmospherically relevant particle mass loadings, similar to those for α-pinene + NO3. Using a chemical box model, we show that the measured concentrations of NO2 and γ-terpinene hydroxy nitrates can be reliably simulated from α-pinene + NO3 chemistry. This suggests that NO3 addition to either of the two internal double bonds of γ-terpinene primarily decomposes forming a relatively volatile keto-aldehyde, reconciling the small SOA yield observed here and for other internal olefinic terpenes. Based on aerosol partitioning analysis and identification of speciated particle-phase ON applying high-resolution liquid chromatography-mass spectrometry, we estimate that a significant fraction of the particle-phase ON has the hydroxy nitrate moiety. This work greatly contributes to our understanding of ON and SOA formation from polyolefin monoterpene oxidation, which could be important in the northern continental US and the Midwest, where polyolefinic monoterpene emissions are greatest.

  8. Multi-hop routing mechanism for reliable sensor computing.

    PubMed

    Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min

    2009-01-01

    Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for large numbers of sensors, and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the most reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead and save energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.

  9. Plateletpheresis efficiency and mathematical correction of software-derived platelet yield prediction: A linear regression and ROC modeling approach.

    PubMed

    Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David

    2017-10-01

    Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software-derived values differed significantly from actual PLT yields, 4.72 × 10^11 vs. 6.12 × 10^11, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10^11 to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. The Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
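The reported correction and decision rule are straightforward to apply. A sketch using the regression equation and ROC cut-off quoted in the abstract (the function names are ours):

```python
# Apply the abstract's regression correction to the device-predicted platelet
# yield (in units of 10^11 platelets) and its ROC cut-off for a double product.

def corrected_yield(device_prediction):
    """Actual PLT yield = 0.221 + 1.254 x device-predicted yield."""
    return 0.221 + 1.254 * device_prediction

def predicts_double_product(device_prediction, cutoff=4.65):
    """ROC-derived cut-off applied to the uncorrected device prediction."""
    return device_prediction >= cutoff

# The device's mean prediction in the study (4.72 x 10^11) corrects to a
# value close to the observed mean actual yield (6.12 x 10^11).
print(round(corrected_yield(4.72), 2))  # prints 6.14
print(predicts_double_product(4.72))    # prints True
```

That the corrected mean prediction (≈6.14) lands next to the observed mean actual yield (6.12) is exactly the consistency the regression was fitted to achieve.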

  10. Xenon Sputter Yield Measurements for Ion Thruster Materials

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Gardner, Michael M.; Johnson, Mark L.; Wilbur, Paul J.

    2003-01-01

    In this paper, we describe a technique that was used to measure total and differential sputter yields of materials important to high specific impulse ion thrusters. The heart of the technique is a quartz crystal monitor that is swept at constant radial distance from a small target region where a high current density xenon ion beam is aimed. Differential sputtering yields were generally measured over a full 180 deg arc in a plane that included the beam centerline and the normal vector to the target surface. Sputter yield results are presented for a xenon ion energy range from 0.5 to 10 keV and an angle of incidence range from 0 deg to 70 deg from the target surface normal direction for targets consisting of molybdenum, titanium, solid (Poco) graphite, and flexible graphite (grafoil). Total sputter yields are calculated using a simple integration procedure and comparisons are made to sputter yields obtained from the literature. In general, the agreement between the available data is good. As expected for heavy xenon ions, the differential and total sputter yields are found to be strong functions of angle of incidence. Significant under- and over-cosine behavior is observed at low- and high-ion energies, respectively. In addition, strong differences in differential yield behavior are observed between low-Z targets (C and Ti) and high-Z targets (Mo). Curve fits to the differential sputter yield data are provided. They should prove useful to analysts interested in predicting the erosion profiles of ion thruster components and determining where the erosion products re-deposit.
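For azimuthally symmetric emission, the "simple integration procedure" for recovering a total yield from differential yields y(θ) (atoms/ion/steradian) is Y = 2π ∫₀^{π/2} y(θ) sin θ dθ. A sketch that checks the procedure against an ideal cosine emitter, y(θ) = (Y/π) cos θ, for which the integral must return the assumed total yield (numerical scheme and values are ours, not the paper's):

```python
# Total sputter yield from a differential yield y(theta) in atoms/ion/sr,
# assuming azimuthal symmetry: Y = 2*pi * integral of y(theta)*sin(theta).
# Checked against an ideal cosine emitter, y = (Y/pi)*cos(theta).
import math

def total_yield(y, n=2000):
    """Trapezoidal integration of 2*pi*y(t)*sin(t) over [0, pi/2]."""
    h = (math.pi / 2) / n
    f = lambda t: 2 * math.pi * y(t) * math.sin(t)
    return h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, n))
                + 0.5 * f(math.pi / 2))

Y_assumed = 1.3  # atoms/ion, arbitrary
cosine_emitter = lambda t: (Y_assumed / math.pi) * math.cos(t)
print(round(total_yield(cosine_emitter), 4))  # prints 1.3
```

The under- and over-cosine behavior noted in the abstract shows up in this framework as measured y(θ) falling below or above the cosine curve, shifting the integrated total accordingly.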

  11. High yield neutron generators using the DD reaction

    NASA Astrophysics Data System (ADS)

    Vainionpaa, J. H.; Harris, J. L.; Piestrup, M. A.; Gary, C. K.; Williams, D. L.; Apodaca, M. D.; Cremer, J. T.; Ji, Qing; Ludewigt, B. A.; Jones, G.

    2013-04-01

    A product line of high-yield neutron generators has been developed at Adelphi Technology Inc. The generators use the D-D fusion reaction and are driven by an ion beam supplied by a microwave ion source. Yields of up to 5 × 10^9 n/s have been achieved, which are comparable to those obtained using the more efficient D-T reaction. The microwave-driven plasma uses electron cyclotron resonance (ECR) to produce a high plasma density for high current and a high atomic ion species fraction. These generators have an actively pumped vacuum system that allows operation at reduced pressure in the target chamber, increasing the overall system reliability. Since no radioactive tritium is used, the generators can be easily serviced, and components can be easily replaced, providing an essentially unlimited lifetime. The fast-neutron source size can be adjusted by selecting the aperture and target geometries according to customer specifications. Pulsed and continuous operation has been demonstrated, with minimum pulse lengths of 50 μs achieved. Since the generators are easily serviceable, they offer a long-lifetime neutron generator for laboratories and commercial systems requiring continuous operation. Several of the generators have been enclosed in radiation shielding/moderator structures designed to customer specifications. The generators have proven useful for prompt gamma neutron activation analysis (PGNAA), neutron activation analysis (NAA) and fast neutron radiography, making them excellent fast, epithermal and thermal neutron sources for laboratories and industrial applications that require neutrons with safe operation, a small footprint, low cost and a small regulatory burden.

  12. Clinical instruments: reliability and validity critical appraisal.

    PubMed

    Brink, Yolandi; Louw, Quinette A

    2012-12-01

    RATIONALE, AIM AND OBJECTIVES: There is a lack of health care practitioners using objective clinical tools with sound psychometric properties. There is also a need for researchers to improve their reporting of the validity and reliability results of these clinical tools. Therefore, to promote the use of valid and reliable tools or tests for clinical evaluation, this paper reports on the development of a critical appraisal tool to assess the psychometric properties of objective clinical tools. A five-step process was followed to develop the new critical appraisal tool: (1) preliminary conceptual decisions; (2) defining key concepts; (3) item generation; (4) assessment of face validity; and (5) formulation of the final tool. The new critical appraisal tool consists of 13 items, of which five items relate to both validity and reliability studies, four items to validity studies only and four items to reliability studies. The 13 items could be scored as 'yes', 'no' or 'not applicable'. This critical appraisal tool will aid both the health care practitioner to critically appraise the relevant literature and researchers to improve the quality of reporting of the validity and reliability of objective clinical tools. © 2011 Blackwell Publishing Ltd.

  13. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  14. Reliability of trauma management videos on YouTube and their compliance with ATLS® (9th edition) guideline.

    PubMed

    Şaşmaz, M I; Akça, A H

    2017-06-01

    In this study, the reliability of trauma management scenario videos (in English) on YouTube and their compliance with Advanced Trauma Life Support (ATLS®) guidelines were investigated. The search was conducted on February 15, 2016 by using the terms "assessment of trauma" and "management of trauma". All videos that were uploaded between January 2011 and June 2016 were viewed by two experienced emergency physicians. The data regarding the date of upload, the type of the uploader, the duration of the video and view counts were recorded. The videos were categorized according to the video source and scores. The search yielded 880 videos, 813 of which were excluded by the researchers. The distribution of videos by year was found to be balanced. The scores of videos uploaded by an institution were determined to be higher compared to other groups (p = 0.003). The findings of this study show that the majority of trauma management videos on YouTube are not reliable or compliant with ATLS guidelines and can therefore not be recommended for educational purposes. These data may only be used in public education after the necessary adjustments are made.

  15. Evaluating the reliability, validity, acceptability, and practicality of SMS text messaging as a tool to collect research data: results from the Feeding Your Baby project

    PubMed Central

    Donnan, Peter T; Symon, Andrew G; Kellett, Gillian; Monteith-Hodge, Ewa; Rauchhaus, Petra; Wyatt, Jeremy C

    2012-01-01

    Objective To test the reliability, validity, acceptability, and practicality of short message service (SMS) messaging for collection of research data. Materials and methods The studies were carried out in a cohort of recently delivered women in Tayside, Scotland, UK, who were asked about their current infant feeding method and future feeding plans. Reliability was assessed by comparison of their responses to two SMS messages sent 1 day apart. Validity was assessed by comparison of their responses to text questions and the same question administered by phone 1 day later, by comparison with the same data collected from other sources, and by correlation with other related measures. Acceptability was evaluated using quantitative and qualitative questions, and practicality by analysis of a researcher log. Results Reliability of the factual SMS message gave perfect agreement. Reliabilities for the numerical question were reasonable, with κ between 0.76 (95% CI 0.56 to 0.96) and 0.80 (95% CI 0.59 to 1.00). Validity for data compared with that collected by phone within 24 h (κ = 0.92 (95% CI 0.84 to 1.00)) and with health visitor data (κ = 0.85 (95% CI 0.73 to 0.97)) was excellent. Correlation validity between the text responses and other related demographic and clinical measures was as expected. Participants found the method a convenient and acceptable way of providing data. For researchers, SMS text messaging provided an easy and functional method of gathering a large volume of data. Conclusion In this sample and for these questions, SMS was a reliable and valid method for capturing research data. PMID:22539081

  16. [Reliability for detection of developmental problems using the semaphore from the Child Development Evaluation test: Is a yellow result different from a red result?]

    PubMed

    Rizzoli-Córdoba, Antonio; Ortega-Ríosvelasco, Fernando; Villasís-Keever, Miguel Ángel; Pizarro-Castellanos, Mariel; Buenrostro-Márquez, Guillermo; Aceves-Villagrán, Daniel; O'Shea-Cuevas, Gabriel; Muñoz-Hernández, Onofre

    The Child Development Evaluation (CDE) is a screening tool designed and validated in Mexico for detecting developmental problems. The result is expressed through a semaphore. In the CDE test, both yellow and red results are considered positive, although a different intervention is proposed for each. The aim of this work was to evaluate the reliability of the CDE test to discriminate between children with a yellow versus a red result based on the developmental domain quotient (DDQ) obtained through the Battelle Developmental Inventory, 2nd edition (in Spanish) (BDI-2). The information for this study was obtained from the validation study. Children with a normal (green) result on the CDE were excluded. Two different cut-off points of the DDQ (BDI-2) were used: < 90 to include the low average range, and < 80 per domain to define developmental delay. Results were analyzed based on the correlation between the CDE test and each domain from the BDI-2 and by subgroups of age. With a cut-off DDQ < 90, 86.8% of tests with a yellow result (CDE) indicated at least one affected domain and 50% indicated three or more, compared with 93.8% and 78.8%, respectively, for a red result. There were differences in every domain (P < 0.001) in the percentage of children with DDQ < 80 between yellow and red results (CDE): cognitive, 36.1% vs. 61.9%; communication, 27.8% vs. 50.4%; motor, 18.1% vs. 39.9%; personal-social, 20.1% vs. 28.9%; and adaptive, 6.9% vs. 20.4%. The yellow/red semaphore result allows identifying different magnitudes of delay in developmental domains or subdomains, supporting the recommendation of different interventions for each one. Copyright © 2014 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.

  17. Tactile Acuity Charts: A Reliable Measure of Spatial Acuity

    PubMed Central

    Bruns, Patrick; Camargo, Carlos J.; Campanella, Humberto; Esteve, Jaume; Dinse, Hubert R.; Röder, Brigitte

    2014-01-01

    For assessing tactile spatial resolution it has recently been recommended to use tactile acuity charts which follow the design principles of the Snellen letter charts for visual acuity and involve active touch. However, it is currently unknown whether acuity thresholds obtained with this newly developed psychophysical procedure are in accordance with established measures of tactile acuity that involve passive contact with fixed duration and control of contact force. Here we directly compared tactile acuity thresholds obtained with the acuity charts to traditional two-point and grating orientation thresholds in a group of young healthy adults. For this purpose, two types of charts, using either Braille-like dot patterns or embossed Landolt rings with different orientations, were adapted from previous studies. Measurements with the two types of charts were equivalent, but generally more reliable with the dot pattern chart. A comparison with the two-point and grating orientation task data showed that the test-retest reliability of the acuity chart measurements after one week was superior to that of the passive methods. Individual thresholds obtained with the acuity charts agreed reasonably with the grating orientation threshold, but less so with the two-point threshold that yielded relatively distinct acuity estimates compared to the other methods. This potentially considerable amount of mismatch between different measures of tactile acuity suggests that tactile spatial resolution is a complex entity that should ideally be measured with different methods in parallel. The simple test procedure and high reliability of the acuity charts makes them a promising complement and alternative to the traditional two-point and grating orientation thresholds. PMID:24504346

  18. Infant polysomnography: reliability and validity of infant arousal assessment.

    PubMed

    Crowell, David H; Kulp, Thomas D; Kapuniai, Linda E; Hunt, Carl E; Brooks, Lee J; Weese-Mayer, Debra E; Silvestri, Jean; Ward, Sally Davidson; Corwin, Michael; Tinsley, Larry; Peucker, Mark

    2002-10-01

    Infant arousal scoring based on the Atlas Task Force definition of transient EEG arousal was evaluated to determine (1) whether transient arousals can be identified and assessed reliably in infants and (2) whether arousal and no-arousal epochs scored previously by trained raters can be validated reliably by independent sleep experts. Phase I inter- and intrarater reliability scoring was based on two datasets of sleep epochs selected randomly from nocturnal polysomnograms of healthy full-term and preterm infants, idiopathic apparent life-threatening event cases, and siblings of Sudden Infant Death Syndrome infants of 35 to 64 weeks postconceptional age. After training, test set 1 reliability was assessed and discrepancies identified. After retraining, test set 2 was scored by the same raters to determine interrater reliability. Later, three raters from the trained group rescored test set 2 to assess inter- and intrarater reliabilities. Interrater and intrarater reliability kappas, with 95% confidence intervals, ranged from substantial to almost perfect levels of agreement. Interrater reliabilities for spontaneous arousals were initially moderate and then substantial. During the validation phase, 315 previously scored epochs were presented to four sleep experts to rate as containing arousal or no-arousal events. Interrater expert agreements were diverse and considered noninterpretable. Concordance in the sleep experts' agreements, based on identification of the previously sampled arousal and no-arousal epochs, was used as a secondary evaluative technique. Results showed agreement by two or more experts on 86% of the Collaborative Home Infant Monitoring Evaluation Study arousal-scored events. Conversely, only 1% of the Collaborative Home Infant Monitoring Evaluation Study-scored no-arousal epochs were rated as an arousal. In summary, this study presents an empirically tested model with procedures and criteria for attaining improved reliability in transient EEG arousal

  19. Validity and Reliability of Turkish Version of Gilliam Autism Rating Scale-2: Results of Preliminary Study

    ERIC Educational Resources Information Center

    Diken, Ibrahim H.; Diken, Ozlem; Gilliam, James E.; Ardic, Avsar; Sweeney, Dwight

    2012-01-01

    The purpose of this preliminary study was to explore the validity and reliability of Turkish Version of the Gilliam Autism Rating Scale-2 (TV-GARS-2). Participants included 436 children diagnosed with autism (331 male and 105 female, mean of ages was 8.01 with SD = 3.77). Data were also collected from individuals diagnosed with intellectual…

  20. Reliability and Validity of Ambulatory Cognitive Assessments

    PubMed Central

    Sliwinski, Martin J.; Mogle, Jacqueline A.; Hyun, Jinshil; Munoz, Elizabeth; Smyth, Joshua M.; Lipton, Richard B.

    2017-01-01

    Mobile technologies are increasingly used to measure cognitive function outside of traditional clinic and laboratory settings. Although ambulatory assessments of cognitive function conducted in people’s natural environments offer potential advantages over traditional assessment approaches, the psychometrics of cognitive assessment procedures have been understudied. We evaluated the reliability and construct validity of ambulatory assessments of working memory and perceptual speed administered via smartphones as part of an ecological momentary assessment (EMA) protocol in a diverse adult sample (N=219). Results indicated excellent between-person reliability (≥.97) for average scores, and evidence of reliable within-person variability across measurement occasions (.41–.53). The ambulatory tasks also exhibited construct validity, as evidenced by their loadings on working memory and perceptual speed factors defined by the in-lab assessments. Our findings demonstrate that averaging across brief cognitive assessments made in uncontrolled naturalistic settings provides measurements that are comparable in reliability to assessments made in controlled laboratory environments. PMID:27084835

  1. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation requires a large number of SPICE simulations and circuit SPICE simulation accounts for the largest share of the time spent. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model over the design variables and process variables. The model is constructed by running SPICE simulation to obtain a certain number of sample points and then training the mixture surrogate model on these points with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation even more. The method is suitable for high-dimensional process variables and multi-performance applications.
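The importance-sampling baseline the abstract starts from can be sketched on a toy one-dimensional "cell": failure when a standard-normal process variable exceeds 4σ. Sampling from a proposal shifted into the failure region and reweighting by the density ratio recovers the tiny failure rate with a modest sample count (pure-stdlib sketch of the general technique; a real flow would replace the threshold check with a SPICE pass/fail evaluation):

```python
# Importance sampling for a rare failure event: x ~ N(0,1), fail if x > 4.
# Draw from the shifted proposal N(4,1) and reweight each failing sample by
# the density ratio p(x)/q(x) = exp(t^2/2 - t*x) for shift t.
import math
import random

def failure_rate_is(threshold=4.0, n=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)        # proposal centred on failure
        if x > threshold:                    # toy "cell fails" indicator
            total += math.exp(threshold**2 / 2 - threshold * x)
    return total / n

estimate = failure_rate_is()
exact = 0.5 * math.erfc(4 / math.sqrt(2))    # true Gaussian tail, ~3.17e-5
print(f"{estimate:.3e}  (exact {exact:.3e})")
```

Plain Monte Carlo would need millions of samples to see even a handful of such failures, which is why surrogate models that avoid repeated SPICE calls pay off during optimization.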

  2. Learned helplessness in the rat: improvements in validity and reliability.

    PubMed

    Vollmayr, B; Henn, F A

    2001-08-01

    Major depression has a high prevalence and a high mortality. Despite many years of research, little is known about the pathophysiologic events leading to depression or about the causative molecular mechanisms of antidepressant treatment leading to remission and prevention of relapse. Animal models of depression are urgently needed to investigate new hypotheses. The learned helplessness paradigm initially described by Overmier and Seligman [J. Comp. Physiol. Psychol. 63 (1967) 28] is the most widely studied animal model of depression. Animals are exposed to inescapable shock and subsequently tested for a deficit in acquiring an avoidance task. Despite its excellent validity concerning the construct of etiology, symptomatology and prediction of treatment response [Clin. Neurosci. 1 (1993) 152; Trends Pharmacol. Sci. 12 (1991) 131], there has been little use of the model for the investigation of recent theories on the pathogenesis of depression. This may be due to reported difficulties in the reliability of the paradigm [Animal Learn. Behav. 4 (1976) 401; Pharmacol. Biochem. Behav. 36 (1990) 739]. The aim of the current study was therefore to improve the parameters for inescapable shock and learned helplessness testing so as to minimize artifacts and random error and yield a reliable fraction of helpless animals after shock exposure. The protocol uses a mild current which induces helplessness in only some of the animals, thereby modeling the hypothesis of variable predisposition for depression in different subjects [Psychopharmacol. Bull. 21 (1985) 443; Neurosci. Res. 38 (2000) 193]. This allows us to use animals which are not helpless after inescapable shock as a stressed control, but the sensitivity, specificity and variability of the test results have to be reassessed.

  3. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  4. Characterizing reliability in a product/process design-assurance program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerscher, W.J. III; Booker, J.M.; Bement, T.R.

    1997-10-01

    Over the years many advancing techniques in the area of reliability engineering have surfaced in the military sphere of influence, and one of these techniques is Reliability Growth Testing (RGT). Private industry has reviewed RGT as part of the solution to its reliability concerns, but many practical considerations have slowed its implementation. Its objective is to demonstrate the reliability requirement of a new product with a specified confidence. This paper speaks directly to that objective but discusses a somewhat different approach to achieving it. Rather than conducting testing as a continuum and developing statistical confidence bands around the results, this Bayesian updating approach starts with a reliability estimate characterized by large uncertainty and then proceeds to reduce the uncertainty by folding in fresh information in a Bayesian framework.

  5. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  6. [Study of the relationship between human quality and reliability].

    PubMed

    Long, S; Wang, C; Wang, Li; Yuan, J; Liu, H; Jiao, X

    1997-02-01

    To clarify the relationship between human quality and reliability, 1925 experiments in 20 subjects were carried out to study the relationship between disposition character, digital memory, graphic memory, multi-reaction time and education level and simulated aircraft operation. Meanwhile, effects of task difficulty and environmental factors on human reliability were also studied. The results showed that human quality can be predicted and evaluated through experimental methods. The better the human quality, the higher the human reliability.

  7. Validity and reliability of naturalistic driving scene categorization judgments from crowdsourcing.

    PubMed

    Cabrall, Christopher D D; Lu, Zhenji; Kyriakidis, Miltos; Manca, Laura; Dijksterhuis, Chris; Happee, Riender; de Winter, Joost

    2018-05-01

    the small scale Job, and in part for the larger scale Job, it should be noted that the reliability reported here may not be directly comparable. Nonetheless, both sets of results are indicative of high levels of rating reliability. Overall, our results provide compelling evidence that CrowdFlower, via the use of GTQs, is able to yield more accurate and consistent crowdsourced categorizations of naturalistic driving scene contents than when used without such a control mechanism. Annotations obtained in such short periods of time present a potentially powerful resource in driving research and driving automation development. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Earthquake prediction; new studies yield promising results

    USGS Publications Warehouse

    Robinson, R.

    1974-01-01

    On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

  9. Personality traits in companion dogs-Results from the VIDOPET.

    PubMed

    Turcsán, Borbála; Wallis, Lisa; Virányi, Zsófia; Range, Friederike; Müller, Corsin A; Huber, Ludwig; Riemer, Stefanie

    2018-01-01

    Individual behavioural differences in pet dogs are of great interest from a basic and applied research perspective. Most existing dog personality tests have specific (practical) goals in mind and so focused only on a limited aspect of dogs' personality, such as identifying problematic (aggressive or fearful) behaviours, assessing suitability as working dogs, or improving the results of adoption. Here we aimed to create a comprehensive test of personality in pet dogs that goes beyond traditional practical evaluations by exposing pet dogs to a range of situations they might encounter in everyday life. The Vienna Dog Personality Test (VIDOPET) consists of 15 subtests and was performed on 217 pet dogs. A two-step data reduction procedure (principal component analysis on each subtest followed by an exploratory factor analysis on the subtest components) yielded five factors: Sociability-obedience, Activity-independence, Novelty seeking, Problem orientation, and Frustration tolerance. A comprehensive evaluation of reliability and validity measures demonstrated excellent inter- and intra-observer reliability and adequate internal consistency of all factors. Moreover the test showed good temporal consistency when re-testing a subsample of dogs after an average of 3.8 years-a considerably longer test-retest interval than assessed for any other dog personality test, to our knowledge. The construct validity of the test was investigated by analysing the correlations between the results of video coding and video rating methods and the owners' assessment via a dog personality questionnaire. The results demonstrated good convergent as well as discriminant validity. To conclude, the VIDOPET is not only a highly reliable and valid tool for measuring dog personality, but also the first test to show consistent behavioural traits related to problem solving ability and frustration tolerance in pet dogs.

  10. Personality traits in companion dogs—Results from the VIDOPET

    PubMed Central

    Wallis, Lisa; Virányi, Zsófia; Range, Friederike; Müller, Corsin A.; Huber, Ludwig; Riemer, Stefanie

    2018-01-01

    Individual behavioural differences in pet dogs are of great interest from a basic and applied research perspective. Most existing dog personality tests have specific (practical) goals in mind and so focused only on a limited aspect of dogs’ personality, such as identifying problematic (aggressive or fearful) behaviours, assessing suitability as working dogs, or improving the results of adoption. Here we aimed to create a comprehensive test of personality in pet dogs that goes beyond traditional practical evaluations by exposing pet dogs to a range of situations they might encounter in everyday life. The Vienna Dog Personality Test (VIDOPET) consists of 15 subtests and was performed on 217 pet dogs. A two-step data reduction procedure (principal component analysis on each subtest followed by an exploratory factor analysis on the subtest components) yielded five factors: Sociability-obedience, Activity-independence, Novelty seeking, Problem orientation, and Frustration tolerance. A comprehensive evaluation of reliability and validity measures demonstrated excellent inter- and intra-observer reliability and adequate internal consistency of all factors. Moreover the test showed good temporal consistency when re-testing a subsample of dogs after an average of 3.8 years—a considerably longer test-retest interval than assessed for any other dog personality test, to our knowledge. The construct validity of the test was investigated by analysing the correlations between the results of video coding and video rating methods and the owners’ assessment via a dog personality questionnaire. The results demonstrated good convergent as well as discriminant validity. To conclude, the VIDOPET is not only a highly reliable and valid tool for measuring dog personality, but also the first test to show consistent behavioural traits related to problem solving ability and frustration tolerance in pet dogs. PMID:29634747

  11. Effects of Management Practices on Meloidogyne incognita and Snap Bean Yield.

    PubMed

    Smittle, D A; Johnson, A W

    1982-01-01

    Phenamiphos applied at 6.7 kg ai/ha through a solid set or a center pivot irrigation system with 28 mm of water effectively controlled root-knot nematodes, Meloidogyne incognita, and resulted in greater snap bean growth and yields irrespective of growing season, tillage method, or cover crop system. The percentage yield increases attributed to this method of M. incognita control over nontreated controls were 45% in the spring crop, and 90% and 409% in the fall crops following winter rye and fallow, respectively. Root galling was not affected by tillage systems or cover crop, but disk tillage resulted in over 50% reduction in bean yield compared with yields from the subsoil-bed tillage system.

  12. Calculation of K-shell fluorescence yields for low-Z elements

    NASA Astrophysics Data System (ADS)

    Nekkab, M.; Kahoul, A.; Deghfel, B.; Aylikci, N. Küp; Aylikçi, V.

    2015-03-01

    The analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis and medical research, and so accurate fluorescence yields (ωK) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ωK) of elements in the range 11≤Z≤30. The experimental data are interpolated by fitting the well-known analytical function (ωK/(1−ωK))^(1/q) (where q = 3, 3.5 and 4) versus Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedure followed here and theoretical and other semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.
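    The semi-empirical procedure described above fits the transform (ωK/(1−ωK))^(1/q) as a smooth function of Z and then inverts it to recover ωK. A minimal sketch of that workflow follows, using a handful of approximate literature yields for illustration (the values and the quadratic fit are assumptions of this sketch, not the paper's fitted coefficients):

    ```python
    import numpy as np

    # Approximate experimental K-shell fluorescence yields (omega_K) for a
    # few low-Z elements (illustrative values near standard tabulations).
    Z = np.array([13, 14, 20, 26, 29, 30])
    wK = np.array([0.036, 0.047, 0.163, 0.340, 0.440, 0.474])

    q = 3.5
    # Linearizing transform used by the semi-empirical fit.
    y = (wK / (1.0 - wK)) ** (1.0 / q)

    # Fit a low-order polynomial in Z to the transformed yields.
    coef = np.polyfit(Z, y, 2)

    def omega_K(z, coef=coef, q=q):
        """Invert the transform: omega = s**q / (1 + s**q), s = poly(z)."""
        s = np.polyval(coef, z)
        return s ** q / (1.0 + s ** q)
    ```

    The inversion guarantees that the interpolated ωK stays in (0, 1) for any Z where the fitted polynomial is positive, which is one reason this transform is preferred over fitting ωK directly.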

  13. Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis.

    PubMed

    Caruso, J C

    2001-06-01

    The unreliability of difference scores is a well-documented phenomenon in the social sciences and has led researchers and practitioners to interpret differences cautiously, if at all. In the case of the Kaufman Adult and Adolescent Intelligence Test (KAIT), the unreliability of the difference between the Fluid IQ and the Crystallized IQ is due to the high correlation between the two scales. The consequences of the lack of precision with which differences are identified are wide confidence intervals and low-powered significance tests (i.e., large differences are required to be declared statistically significant). Reliable component analysis (RCA) was performed on the subtests of the KAIT in order to address these problems. RCA is a new data reduction technique that results in uncorrelated component scores with maximum proportions of reliable variance. Results indicate that the scores defined by RCA have discriminant and convergent validity (with respect to the equally weighted scores) and that differences between the scores, derived from a single testing session, were more reliable than differences derived from equal weighting for each age group (11-14 years, 15-34 years, 35-85+ years). This reliability advantage results in narrower confidence intervals around difference scores and smaller differences required for statistical significance.

  14. Space solar array reliability: A study and recommendations

    NASA Astrophysics Data System (ADS)

    Brandhorst, Henry W., Jr.; Rodiek, Julie A.

    2008-12-01

    Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaim's Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.

  15. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD.

    PubMed

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A; Campos, Michael A; Cahalin, Lawrence P

    2018-01-01

    The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Test-retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test-retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. The TIRE measures of MIP, SMIP and ID have excellent test-retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP.

  16. The German Version of the Gaze Anxiety Rating Scale (GARS): Reliability and Validity

    PubMed Central

    Domes, Gregor; Marx, Lisa; Spenthof, Ines; Heinrichs, Markus

    2016-01-01

    Objective Fear of eye gaze and avoidance of eye contact are core features of social anxiety disorders (SAD). To measure self-reported fear and avoidance of eye gaze, the Gaze Anxiety Rating Scale (GARS) has been developed and validated in recent years in its English version. The main objectives of the present study were to psychometrically evaluate the German translation of the GARS concerning its reliability, factorial structure, and validity. Methods Three samples of participants were enrolled in the study. (1) A non-patient sample (n = 353) completed the GARS and a set of trait questionnaires to assess internal consistency, test-retest reliability, factorial structure, and concurrent and divergent validity. (2) A sample of patients with SAD (n = 33) was compared to a healthy control group (n = 30) regarding their scores on the GARS and the trait measures. Results The German GARS fear and avoidance scales exhibited excellent internal consistency and high stability over 2 and 4 months, as did the original version. The English version’s factorial structure was replicated, yielding two categories of situations: (1) everyday situations and (2) situations involving high evaluative threat. GARS fear and avoidance displayed convergent validity with trait measures of social anxiety and were markedly higher in patients with GSAD than in healthy controls. Fear and avoidance of eye contact in situations involving high levels of evaluative threat related more closely to social anxiety than to gaze anxiety in everyday situations. Conclusions The German version of the GARS has demonstrated reliability and validity similar to the original version, and is thus well suited to capture fear and avoidance of eye contact in different social situations as a valid self-report measure of social anxiety and related disorders in the social domain for use in both clinical practice and research. PMID:26937638

  17. Ambiguities and conflicting results: the limitations of the kappa statistic in establishing the interrater reliability of the Irish nursing minimum data set for mental health: a discussion paper.

    PubMed

    Morris, Roisin; MacNeela, Padraig; Scott, Anne; Treacy, Pearl; Hyde, Abbey; O'Brien, Julian; Lehwaldt, Daniella; Byrne, Anne; Drennan, Jonathan

    2008-04-01

    In a study to establish the interrater reliability of the Irish Nursing Minimum Data Set (I-NMDS) for mental health, difficulties were encountered relating to the choice of reliability test statistic. The objective of this paper is to highlight the difficulties associated with testing interrater reliability for an ordinal scale using a relatively homogeneous sample and the recommended weighted kappa (kw) statistic. One pair of mental health nurses completed the I-NMDS for mental health for a total of 30 clients attending a mental health day centre over a two-week period. Data were analysed using the kw and percentage agreement statistics. A total of 34 of the 38 I-NMDS for mental health variables with lower than acceptable kw reliability scores achieved acceptable levels of reliability according to their percentage agreement scores. The study findings implied that, due to the homogeneity of the sample, low variability within the data resulted in the 'base rate problem' associated with the use of the kw statistic. Conclusions point to the interpretation of kw in tandem with percentage agreement scores. Suggestions that kw scores were low due to chance agreement and that one should strive to use a study sample with known variability are queried.
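    The base-rate problem described above is easy to reproduce: when nearly all ratings fall in one category, percentage agreement can be high while kappa stays low because chance agreement is inflated. A minimal sketch with hypothetical unweighted ratings (not the I-NMDS data, and plain Cohen's kappa rather than the weighted variant):

    ```python
    from collections import Counter

    def percent_agreement(r1, r2):
        """Fraction of items on which the two raters give the same code."""
        return sum(a == b for a, b in zip(r1, r2)) / len(r1)

    def cohens_kappa(r1, r2):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(r1)
        po = percent_agreement(r1, r2)
        c1, c2 = Counter(r1), Counter(r2)
        # Chance agreement from the raters' marginal category frequencies.
        pe = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
        return (po - pe) / (1 - pe)

    # Hypothetical homogeneous sample: 30 clients, almost all rated 0.
    r1 = [0] * 27 + [1, 0, 1]
    r2 = [0] * 27 + [1, 1, 0]
    ```

    Here the raters agree on 28 of 30 items (93% agreement), yet kappa is only about 0.46, which is why the paper argues for reading kw alongside percentage agreement.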

  18. The reliability of the Adelaide in-shoe foot model.

    PubMed

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions are the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which showed less than acceptable repeatability. Based on our results, the Adelaide in-shoe foot model can be used with confidence for 24 commonly measured biomechanical variables during shod walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Day-to-day reliability of gait characteristics in rats.

    PubMed

    Raffalt, Peter C; Nielsen, Louise R; Madsen, Stefan; Munk Højberg, Laurits; Pingel, Jessica; Nielsen, Jens Bo; Wienecke, Jacob; Alkjær, Tine

    2018-04-27

    The purpose of the present study was to determine the day-to-day reliability of stride characteristics in rats during treadmill walking obtained with two-dimensional (2D) motion capture. Kinematics were recorded from 26 adult rats during walking at 8 m/min, 12 m/min and 16 m/min on two separate days. Stride length, stride time, contact time, swing time and hip, knee and ankle joint range of motion were extracted from 15 strides. The relative reliability was assessed using intra-class correlation coefficients (ICC(1,1) and ICC(3,1)). The absolute reliability was determined using measurement error (ME). Across walking speeds, the relative reliability ranged from fair to good (ICCs between 0.4 and 0.75). The ME was below 91 mm for stride lengths, below 55 ms for the temporal stride variables and below 6.4° for the joint angle range of motion. In general, the results indicated an acceptable day-to-day reliability of the gait pattern parameters observed in rats during treadmill walking. The results of the present study may serve as a reference material that can help future intervention studies on rat gait characteristics both with respect to the selection of outcome measures and in the interpretation of the results. Copyright © 2018 Elsevier Ltd. All rights reserved.
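    The two ICC variants reported above come from a standard ANOVA decomposition of a subjects-by-sessions table. A minimal sketch with hypothetical day-to-day measurements (illustrative numbers, not the rat gait data):

    ```python
    import numpy as np

    def icc_oneway(x):
        """ICC(1,1): one-way random effects, from an n-subjects x k-sessions array."""
        n, k = x.shape
        grand = x.mean()
        row = x.mean(axis=1)
        msb = k * ((row - grand) ** 2).sum() / (n - 1)         # between subjects
        msw = ((x - row[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
        return (msb - msw) / (msb + (k - 1) * msw)

    def icc_twoway_consistency(x):
        """ICC(3,1): two-way mixed effects, consistency of single measurements."""
        n, k = x.shape
        grand = x.mean()
        row = x.mean(axis=1)
        col = x.mean(axis=0)
        msr = k * ((row - grand) ** 2).sum() / (n - 1)
        # Residual after removing subject and session effects.
        sse = ((x - row[:, None] - col[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse)

    # Hypothetical measurements for 4 subjects on two test days.
    x = np.array([[10.0, 11.0], [20.0, 19.0], [30.0, 32.0], [40.0, 41.0]])
    ```

    With a strong subject effect and small day-to-day noise, as in this toy table, both coefficients come out close to 1; the fair-to-good values (0.4-0.75) reported in the study reflect much larger within-subject variability.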

  20. Climate change impacts on crop yield: evidence from China.

    PubMed

    Wei, Taoyuan; Cherry, Todd L; Glomrød, Solveig; Zhang, Tianyi

    2014-11-15

    When estimating climate change impact on crop yield, a typical assumption is constant elasticity of yield with respect to a climate variable even though the elasticity may be inconstant. After estimating both constant and inconstant elasticities with respect to temperature and precipitation based on provincial panel data in China 1980-2008, our results show that during that period, the temperature change contributes positively to total yield growth by 1.3% and 0.4% for wheat and rice, respectively, but negatively by 12% for maize. The impacts of precipitation change are marginal. We also compare our estimates with other studies and highlight the implications of the inconstant elasticities for crop yield, harvest and food security. We conclude that climate change impact on crop yield would not be an issue in China if positive impacts of other socio-economic factors continue in the future. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Against the proposition: for the diagnosis of viral infections, commercial assays provide more reliable results than do in-house assays.

    PubMed

    James, Vivienne

    2008-01-01

    There are no differences inherent in the design of commercial or in-house assays and their early development is similar. The same principles apply and it is on the same criteria of accuracy, reproducibility and clinical relevance of results that all assays are judged. However, if there is sufficient uptake of a commercial assay, its strengths and any flaws soon become apparent and it will only be the best commercial assays that remain in the market. For the in-house assays it is through comparability studies and external quality assessment (EQA) schemes that the best can be demonstrated, albeit this information is only accessible initially to the EQA provider and the laboratories using the assays. The EQA results described here support my supposition that, for the diagnosis of viral infections, commercial assays do not provide more reliable results than do in-house assays.

  2. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  3. The Balanced Inventory of Desirable Responding (BIDR): A Reliability Generalization Study

    ERIC Educational Resources Information Center

    Li, Andrew; Bagger, Jessica

    2007-01-01

    The Balanced Inventory of Desirable Responding (BIDR) is one of the most widely used social desirability scales. The authors conducted a reliability generalization study to examine the typical reliability coefficients of BIDR scores and explored factors that explained the variability of reliability estimates across studies. The results indicated…

  4. Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.

    PubMed

    Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David

    2015-08-01

    Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to DSM-5 field-trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved.

  5. Reliability and the adaptive utility of discrimination among alarm callers.

    PubMed

    Blumstein, Daniel T; Verneyre, Laure; Daniel, Janice C

    2004-09-07

    Unlike individually distinctive contact calls, or calls that aid in the recognition of young by their parents, the function or functions of individually distinctive alarm calls is less obvious. We conducted three experiments to study the importance of caller reliability in explaining individual-discriminative abilities in the alarm calls of yellow-bellied marmots (Marmota flaviventris). In our first two experiments, we found that calls from less reliable individuals and calls from individuals calling from a greater simulated distance were more evocative than calls from reliable individuals or nearby callers. These results are consistent with the hypothesis that marmots assess the reliability of callers to help them decide how much time to allocate to independent vigilance. The third experiment demonstrated that the number of callers influenced responsiveness, probably because situations where more than a single caller calls, are those when there is certain to be a predator present. Taken together, the results from all three experiments demonstrate the importance of reliability in explaining individual discrimination abilities in yellow-bellied marmots. Marmots' assessment of reliability acts by influencing the time allocated to individual assessment and thus the time not allocated to other activities.

  6. Reliability and the adaptive utility of discrimination among alarm callers.

    PubMed Central

    Blumstein, Daniel T.; Verneyre, Laure; Daniel, Janice C.

    2004-01-01

    Unlike individually distinctive contact calls, or calls that aid in the recognition of young by their parents, the function or functions of individually distinctive alarm calls is less obvious. We conducted three experiments to study the importance of caller reliability in explaining individual-discriminative abilities in the alarm calls of yellow-bellied marmots (Marmota flaviventris). In our first two experiments, we found that calls from less reliable individuals and calls from individuals calling from a greater simulated distance were more evocative than calls from reliable individuals or nearby callers. These results are consistent with the hypothesis that marmots assess the reliability of callers to help them decide how much time to allocate to independent vigilance. The third experiment demonstrated that the number of callers influenced responsiveness, probably because situations where more than a single caller calls, are those when there is certain to be a predator present. Taken together, the results from all three experiments demonstrate the importance of reliability in explaining individual discrimination abilities in yellow-bellied marmots. Marmots' assessment of reliability acts by influencing the time allocated to individual assessment and thus the time not allocated to other activities. PMID:15315902

  7. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  8. Specific Yield--Column drainage and centrifuge moisture content

    USGS Publications Warehouse

    Johnson, A.I.; Prill, R.C.; Morris, D.A.

    1963-01-01

    The specific yield of a rock or soil, with respect to water, is the ratio of (1) the volume of water which, after being saturated, it will yield by gravity to (2) its own volume. Specific retention represents the water retained against gravity drainage. The specific yield and retention when added together are equal to the total interconnected porosity of the rock or soil. Because specific retention is more easily determined than specific yield, most methods for obtaining yield first require the determination of specific retention. Recognizing the great need for developing improved methods of determining the specific yield of water-bearing materials, the U.S. Geological Survey and the California Department of Water Resources initiated a cooperative investigation of this subject. The major objectives of this research are (1) to review pertinent literature on specific yield and related subjects, (2) to increase basic knowledge of specific yield and rate of drainage and to determine the most practical methods of obtaining them, (3) to compare and to attempt to correlate the principal laboratory and field methods now commonly used to obtain specific yield, and (4) to obtain improved estimates of specific yield of water-bearing deposits in California. An open-file report, 'Specific yield of porous media, an annotated bibliography,' by A. I. Johnson, D. A. Morris, and R. C. Prill, was released in 1960 in partial fulfillment of the first objective. This report describes the second phase of the specific-yield study by the U.S. Geological Survey Hydrologic Laboratory at Denver, Colo. Laboratory research on column drainage and centrifuge moisture equivalent, two methods for estimating specific retention of porous media, is summarized. In the column-drainage study, a wide variety of materials was packed into plastic columns of 1- to 8-inch diameter, wetted with Denver tap water, and drained under controlled conditions of temperature and humidity. The effects of cleaning the
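The ratio definitions in the abstract above can be written out directly. The sketch below is illustrative only (not from the report); it encodes the stated relationships that specific yield is drained-water volume over total volume and that specific yield plus specific retention equals the interconnected porosity.

```python
# Illustrative sketch of the specific-yield relationships described above.
# Assumption (stated in the abstract): Sy + Sr = interconnected porosity.

def specific_yield(drained_water_volume: float, total_volume: float) -> float:
    """Ratio of gravity-drained water volume to total sample volume."""
    return drained_water_volume / total_volume

def specific_retention(porosity: float, s_yield: float) -> float:
    """Water retained against gravity drainage, as a fraction of total volume."""
    return porosity - s_yield

# Example: a 1000 cm^3 saturated sample drains 150 cm^3 under gravity
# and has an interconnected porosity of 0.30.
sy = specific_yield(150.0, 1000.0)   # 0.15
sr = specific_retention(0.30, sy)    # 0.15
print(sy, sr)
```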

  9. On-line prediction of yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score using the MARC beef carcass image analysis system.

    PubMed

    Shackelford, S D; Wheeler, T L; Koohmaraie, M

    2003-01-01

    The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.

  10. Fission yield calculation using toy model based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jubaidah, Kurniadi, Rizal

    2015-09-01

    The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the deviations of the left and right curves (σL, σR). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully reproduces the same tendency as experimental results, where the average light fission yield lies near mass number A ≈ 90 and the average heavy fission yield near A ≈ 135.
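The two-Gaussian picture described above lends itself to a very small Monte Carlo sketch. This is not the paper's code; the means and widths below are illustrative assumptions chosen to mirror the light (A ≈ 90) and heavy (A ≈ 135) peaks mentioned in the abstract.

```python
# Minimal Monte Carlo sketch of a double-Gaussian fission-yield distribution.
# MU_L/MU_H/SIGMA_L/SIGMA_H are illustrative values, not fitted parameters.
import random
from collections import Counter

MU_L, SIGMA_L = 90.0, 6.0    # light-fragment peak (assumed)
MU_H, SIGMA_H = 135.0, 6.0   # heavy-fragment peak (assumed)

def sample_yields(n_events: int, seed: int = 1) -> Counter:
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_events):
        # choose the light or heavy Gaussian with equal probability
        if rng.random() < 0.5:
            a = rng.gauss(MU_L, SIGMA_L)
        else:
            a = rng.gauss(MU_H, SIGMA_H)
        counts[round(a)] += 1
    return counts

hist = sample_yields(100_000)
peak = max(hist, key=hist.get)
print(peak)  # the mode falls near one of the two peaks, A ≈ 90 or A ≈ 135
```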

  11. Lumber grade-yields for factory-grade northern red oak sawlogs

    Treesearch

    James G. Schroeder; Leland F. Hanks

    1967-01-01

    A report on results of sawing 556 northern red oak sawlogs at four sawmills in West Virginia and Virginia, and the distribution of grades for the standard factory lumber produced. Tabular data on actual yield and curved grade-yield percentages.

  12. Reliable Characterization for Pyrolysis Bio-Oils Leads to Enhanced Upgrading Methods

    Science.gov Websites

    NREL Science and Technology Highlight: reliable characterization for pyrolysis bio-oils leads to enhanced upgrading methods.

  13. [Contrast of Z-Pinch X-Ray Yield Measure Technique].

    PubMed

    Li, Mo; Wang, Liang-ping; Sheng, Liang; Lu, Yi

    2015-03-01

    Resistive bolometers and scintillation detection systems are the two main Z-pinch X-ray yield measurement techniques, based on different diagnostic principles. Contrasting the results of the two methods can help increase the precision of X-ray yield measurements. Experiments with different load materials and shapes were carried out on the "QiangGuang-I" facility. For Al wire arrays, X-ray yields measured by the two techniques were largely consistent. However, for insulating-coated W wire arrays, X-ray yields from the bolometer changed with load parameters while data from the scintillation detection system hardly changed. Simulation and analysis lead to the following conclusions: (1) The scintillation detection system is much more sensitive to low-energy X-ray photons and its spectral response is wider than that of the resistive bolometer; thus, results from the former are always larger than those from the latter. (2) The responses of the two systems are both flat for Al plasma radiation, so their results are consistent for Al wire array loads. (3) Radiation from planar W wire arrays is mainly composed of sub-keV soft X-rays. X-ray yields measured by the bolometer are considered accurate because the nickel foil absorbs almost all of the soft X-rays. (4) By contrast, for planar W wire arrays, data from the scintillation detection system hardly change with load parameters. A possible explanation is that as the distance between wires increases, the plasma temperature at stagnation decreases and the spectrum moves toward the soft X-ray region. The scintillator is much more sensitive to soft X-rays below 200 eV; thus, although the total X-ray yield decreases with a large-diameter load, the signal from the scintillation detection system is almost unchanged. (5) Both techniques are affected by electron beams produced by the loads.

  14. A System for Integrated Reliability and Safety Analyses

    NASA Technical Reports Server (NTRS)

    Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles

    1999-01-01

    We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.

  15. Microdosimetry of DNA conformations: relation between direct effect of (60)Co gamma rays and topology of DNA geometrical models in the calculation of A-, B- and Z-DNA radiation-induced damage yields.

    PubMed

    Semsarha, Farid; Raisali, Gholamreza; Goliaei, Bahram; Khalafi, Hossein

    2016-05-01

    In order to obtain the energy deposition pattern of ionizing radiation in the nanometric scale of genetic material and to investigate the different sensitivities of the DNA conformations, direct effects of (60)Co gamma rays on the three A, B and Z conformations of DNA have been studied. For this purpose, single-strand breaks (SSB), double-strand breaks (DSB), base damage (BD), hit probabilities and three microdosimetry quantities (imparted energy, mean chord length and lineal energy) in the mentioned DNA conformations have been calculated and compared by using GEometry ANd Tracking 4 (Geant4) toolkit. The results show that A-, B- and Z-DNA conformations have the highest yields of DSB (1.2 Gy(-1) Gbp(-1)), SSB (25.2 Gy(-1) Gbp(-1)) and BD (4.81 Gy(-1) Gbp(-1)), respectively. Based on the investigation of direct effects of radiation, it can be concluded that the DSB yield is largely correlated to the topological characteristics of DNA models, although the SSB yield is not. Moreover, according to the comparative results of the present study, a reliable candidate parameter for describing the relationship between DNA damage yields and geometry of DNA models in the theoretical radiation biology research studies would be the mean chord length (4 V/S) of the models.

  16. Reliability of movement control tests in the lumbar spine

    PubMed Central

    Luomajoki, Hannu; Kool, Jan; de Bruin, Eling D; Airaksinen, Olavi

    2007-01-01

    Background Movement control dysfunction (MCD) reduces active control of movements. Patients with MCD might form an important subgroup among patients with non-specific low back pain. The diagnosis is based on observation of active movements. Although the tests are widely used clinically, only a few studies have examined their reliability. The aim of this study was to determine the inter- and intra-observer reliability of movement control dysfunction tests of the lumbar spine. Methods We videoed a standardized test battery of 10 active movement tests for motor control performed by 27 patients with non-specific low back pain and 13 patients with other diagnoses but without back pain. Four physiotherapists independently rated each test performance as correct or incorrect, blinded to all other patient information and to each other. The study was conducted in a private physiotherapy outpatient practice in Reinach, Switzerland. Kappa coefficients, percentage agreements, and confidence intervals for inter- and intra-rater results were calculated. Results Kappa values for inter-tester reliability ranged between 0.24 and 0.71; six of the ten tests showed substantial reliability (k > 0.6). Intra-tester reliability was between 0.51 and 0.96, and all tests but one showed substantial reliability (k > 0.6). Conclusion Physiotherapists were able to reliably rate most of the tests in this series of motor control tasks as being performed correctly or not, by viewing films of patients with and without back pain performing the tasks. PMID:17850669
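The kappa coefficient reported in the study above corrects observed rater agreement for the agreement expected by chance. The following is a hedged illustration, not the study's analysis code: Cohen's kappa for two raters scoring each test performance as correct (1) or incorrect (0), with made-up ratings.

```python
# Cohen's kappa for two raters over binary correct/incorrect ratings.
# The rating lists below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.52
```

With 8/10 observed agreement and 0.58 chance agreement from the marginals, kappa comes out well below the raw percent agreement, which is exactly why the study reports kappa rather than percentage agreement alone.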

  17. How do cognitively impaired elderly patients define "testament": reliability and validity of the testament definition scale.

    PubMed

    Heinik, J; Werner, P; Lin, R

    1999-01-01

    The testament definition scale (TDS) is a specifically designed six-item scale aimed at measuring the respondent's capacity to define "testament." We assessed the reliability and validity of this new short scale in 31 community-dwelling cognitively impaired elderly patients. Interrater reliability for the six items ranged from .87 to .97. Interrater reliability for the total score was .77. Significant correlations were found between the TDS score and the Mini-Mental State Examination (MMSE) and Cambridge Cognitive Examination scores (r = .71 and .72 respectively, p = .001). Criterion validity yielded significantly different means for subjects with MMSE scores of 24-30 and 0-23: mean 3.9 and 1.6 respectively (t(20) = 4.7, p = .001). Using a cutoff point of 0-2 vs. 3+, 79% of the subjects were correctly classified as severely cognitively impaired, with only 8.3% false positives and a positive predictive value of 94%. Thus, the TDS was found both reliable and valid. This scale, however, is not synonymous with testamentary capacity. The discussion deals with the methodological limitations of this study and highlights the practical as well as the theoretical relevance of the TDS. Future studies are warranted to elucidate the relationships between the TDS and existing legal requirements of testamentary capacity.

  18. Validity and reliability of a video questionnaire to assess physical function in older adults.

    PubMed

    Balachandran, Anoop; N Verduin, Chelsea; Potiaumpai, Melanie; Ni, Meng; Signorile, Joseph F

    2016-08-01

    Self-report questionnaires are widely used to assess physical function in older adults. However, they often lack a clear frame of reference, and hence interpreting and rating task difficulty levels can be problematic for the responder. Consequently, the usefulness of traditional self-report questionnaires for assessing higher-level functioning is limited. Video-based questionnaires can overcome some of these limitations by offering a clear and objective visual reference for the performance level against which the subject is to compare his or her perceived capacity. Hence the purpose of the study was to develop and validate a novel, video-based questionnaire to assess physical function in older adults living independently in the community. A total of 61 community-living adults, 60 years or older, were recruited. To examine validity, 35 of the subjects completed the video questionnaire and two types of physical performance tests: a test of instrumental activities of daily living (IADL) included in the Short Physical Functional Performance battery (PFP-10), and a composite of 3 performance tests (30-s chair stand, single-leg balance, and usual gait speed). To ascertain reliability, two-week test-retest reliability was assessed in the remaining 26 subjects who did not participate in validity testing. The video questionnaire showed a moderate correlation with the IADLs (Spearman rho=0.64, p<0.001; 95% CI (0.4, 0.8)), and a lower correlation with the composite score of physical performance tests (Spearman rho=0.49, p<0.01; 95% CI (0.18, 0.7)). The test-retest assessment yielded an intra-class correlation (ICC) of 0.87 (p<0.001; 95% CI (0.70, 0.94)) and a Cronbach's alpha of 0.89, demonstrating good reliability and internal consistency. Our results show that the video questionnaire developed to evaluate physical function in community-living older adults is a valid and reliable assessment tool; however, further validation is needed for definitive conclusions.
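The Spearman rank correlations reported above are Pearson correlations computed on ranks (with average ranks for ties). A small illustrative sketch, not the study's code, with invented score vectors:

```python
# Spearman rank correlation from scratch: rank both samples (average
# ranks for ties), then take the Pearson correlation of the ranks.
from statistics import mean

def _ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current tie group
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

x = [3, 1, 4, 1, 5, 9, 2, 6]   # invented questionnaire scores
y = [2, 1, 4, 2, 5, 8, 3, 7]   # invented performance-test scores
print(round(spearman_rho(x, y), 3))  # → 0.946
```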

  19. Yield estimation of corn with multispectral data and the potential of using imaging spectrometers

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1997-05-01

    In the frame of the Special Yield Estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the State Minister for Rural Environment, Food and Forestry of Baden-Wuerttemberg, Germany, to test remote sensing data combined with advanced yield formation models for accuracy and timeliness of corn yield estimation. The methodology uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on 4 LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the Special Yield Estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and the Special Yield Estimation shows a relative error of 2.3 percent. The comparison of results for single fields shows that six weeks before harvest the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.

  20. Dimension scaling effects on the yield sensitivity of HEMT digital circuits

    NASA Technical Reports Server (NTRS)

    Sarker, Jogendra C.; Purviance, John E.

    1992-01-01

    In our previous work, using a graphical tool (yield factor histograms), we studied the yield sensitivity of High Electron Mobility Transistor (HEMT) and HEMT circuit performance under variation of process parameters. This work studies the scaling effects of process parameters on the yield sensitivity of HEMT digital circuits. Results from two HEMT circuits are presented.