Dwyer, Tim; Whelan, Daniel B; Khoshbin, Amir; Wasserstein, David; Dold, Andrew; Chahal, Jaskarndip; Nauth, Aaron; Murnaghan, M Lucas; Ogilvie-Harris, Darrell J; Theodoropoulos, John S
2015-04-01
The objective of this study was to establish the intra- and inter-observer reliability of hamstring graft measurement using cylindrical sizing tubes. Hamstring tendons (gracilis and semitendinosus) were harvested from ten cadavers by a single surgeon and whip stitched together to create ten 4-strand hamstring grafts. Ten sports medicine surgeons and fellows sized each graft independently using either hollow cylindrical sizers or block sizers in 0.5-mm increments—the sizing technique used was applied consistently to each graft. Surgeons moved sequentially from graft to graft and measured each hamstring graft twice. Surgeons were asked to state the measured proximal (femoral) and distal (tibial) diameter of each graft, as well as the diameter of the tibial and femoral tunnels that they would drill if performing an anterior cruciate ligament (ACL) reconstruction using that graft. Reliability was established using intra-class correlation coefficients. Overall, both the inter-observer and intra-observer agreement were >0.9, demonstrating excellent reliability. The inter-observer reliability for drill sizes was also excellent (>0.9). Excellent correlation was seen between cylindrical sizing, and drill sizes (>0.9). Sizing of hamstring grafts by multiple surgeons demonstrated excellent intra-observer and intra-observer reliability, potentially validating clinical studies exploring ACL reconstruction outcomes by hamstring graft diameter when standard techniques are used. III.
Reliability of risk-adjusted outcomes for profiling hospital surgical quality.
Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B
2014-05-01
Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promoting method of electrifying areas of developing world that lack power grid infrastructure. Proliferations of these systems require a design procedure that is simple, reliable and exhibit good performance over its life time. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array size (area), performance curves are obtained for optimal design of SAPV system with high amount of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with conventional design using monthly average daily load and insolation.
Grimm, Annegret; Gruber, Bernd; Henle, Klaus
2014-01-01
Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to evaluate the performance of mark-recapture population size estimators under field conditions, which is essential for selecting an appropriate method and obtaining reliable results in ecology and conservation biology, and thus for sound management. PMID:24896260
Plastic packaged microcircuits: Quality, reliability, and cost issues
NASA Astrophysics Data System (ADS)
Pecht, Michael G.; Agarwal, Rakesh; Quearry, Dan
1993-12-01
Plastic encapsulated microcircuits (PEMs) find their main application in commercial and telecommunication electronics. The advantages of PEMs in cost, size, weight, performance, and market lead-time, have attracted 97% of the market share of worldwide microcircuit sales. However, PEMs have always been resisted in US Government and military applications due to the perception that PEM reliability is low. This paper surveys plastic packaging with respect to the issues of reliability, market lead-time, performance, cost, and weight as a means to guide part-selection and system-design.
FY12 End of Year Report for NEPP DDR2 Reliability
NASA Technical Reports Server (NTRS)
Guertin, Steven M.
2013-01-01
This document reports the status of the NASA Electronic Parts and Packaging (NEPP) Double Data Rate 2 (DDR2) Reliability effort for FY2012. The task expanded the focus of evaluating reliability effects targeted for device examination. FY11 work highlighted the need to test many more parts and to examine more operating conditions, in order to provide useful recommendations for NASA users of these devices. This year's efforts focused on development of test capabilities, particularly focusing on those that can be used to determine overall lot quality and identify outlier devices, and test methods that can be employed on components for flight use. Flight acceptance of components potentially includes considerable time for up-screening (though this time may not currently be used for much reliability testing). Manufacturers are much more knowledgeable about the relevant reliability mechanisms for each of their devices. We are not in a position to know what the appropriate reliability tests are for any given device, so although reliability testing could be focused for a given device, we are forced to perform a large campaign of reliability tests to identify devices with degraded reliability. With the available up-screening time for NASA parts, it is possible to run many device performance studies. This includes verification of basic datasheet characteristics. Furthermore, it is possible to perform significant pattern sensitivity studies. By doing these studies we can establish higher reliability of flight components. In order to develop these approaches, it is necessary to develop test capability that can identify reliability outliers. To do this we must test many devices to ensure outliers are in the sample, and we must develop characterization capability to measure many different parameters. For FY12 we increased capability for reliability characterization and sample size. We increased sample size this year by moving from loose devices to dual inline memory modules (DIMMs) with an approximate reduction of 20 to 50 times in terms of per device under test (DUT) cost. By increasing sample size we have improved our ability to characterize devices that may be considered reliability outliers. This report provides an update on the effort to improve DDR2 testing capability. Although focused on DDR2, the methods being used can be extended to DDR and DDR3 with relative ease.
Thaung, Jörgen; Olseke, Kjell; Ahl, Johan; Sjöstrand, Johan
2014-09-01
The purpose of our study was to establish a practical and quick test for assessing reading performance and to statistically analyse interchart and test-retest reliability of a new standardized Swedish reading chart system consisting of three charts constructed according to the principles available in the literature. Twenty-four subjects with healthy eyes, mean age 65 ± 10 years, were tested binocularly and the reading performance evaluated as reading acuity, critical print size and maximum reading speed. The test charts all consist of 12 short text sentences with a print size ranging from 0.9 to -0.2 logMAR in approximate steps of 0.1 logMAR. Two testing sessions, in two different groups (C1 and C2), were under strict control of luminance and lighting environment. Reading performance tests with chart T1, T2 and T3 were used for evaluation of interchart reliability and test data from a second session 1 month or more apart for the test-retest analysis. The testing of reading performance in adult observers with short sentences of continuous text was quick and practical. The agreement between the tests obtained with the three different test charts was high both within the same test session and at retest. This new Swedish variant of a standardized reading system based on short sentences and logarithmic progression of print size provides reliable measurements of reading performance and preliminary norms in an age group around 65 years. The reading test with three independent reading charts can be useful for clinical studies of reading ability before and after treatment. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, blend stage and tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. Bayes success run theorem appeared to be the most appropriate approach among various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at low defect rate, the confidence to detect out-of-specification units would decrease which must be supplemented with an increase in sample size to enhance the confidence in estimation. Based on level of knowledge acquired during PPQ and the level of knowledge further required to comprehend process, sample size for CPV was calculated using Bayesian statistics to accomplish reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique filed environments.
Quality assurance and reliability in the Japanese electronics industry
NASA Astrophysics Data System (ADS)
Pecht, Michael; Boulton, William R.
1995-02-01
Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.
Quality assurance and reliability in the Japanese electronics industry
NASA Technical Reports Server (NTRS)
Pecht, Michael; Boulton, William R.
1995-01-01
Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.
Probabilistic sizing of laminates with uncertainties
NASA Technical Reports Server (NTRS)
Shah, A. R.; Liaw, D. G.; Chamis, C. C.
1993-01-01
A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
Leifker, Feea R.; Patterson, Thomas L.; Bowie, Christopher R.; Mausbach, Brent T.; Harvey, Philip D.
2010-01-01
Performance-based measures of the ability to perform social and everyday living skills are being more widely used to assess functional capacity in people with serious mental illnesses such as schizophrenia and bipolar disorder. Since they are also being used as outcome measures in pharmacological and cognitive remediation studies aimed at cognitive impairments in schizophrenia, understanding their measurement properties and potential sensitivity to change is important. In this study, the test-retest reliability, practice effects, and reliable change indices of two different performance-based functional capacity measures, the UCSD Performance-based skills assessment (UPSA) and Social skills performance assessment (SSPA) were examined over several different retest intervals in two different samples of people with schizophrenia (n’s=238 and 116) and a healthy comparison sample (n=109). These psychometric properties were compared to those of a neuropsychological assessment battery. Test-retest reliabilities of the long form of the UPSA ranged from r=.63 to r=.80 over follow-up periods up to 36 months in people with schizophrenia, while brief UPSA reliabilities ranged from r=.66 to r=.81. Test-retest reliability of the NP performance scores ranged from r=.77 to r=.79. Test-retest reliabilities of the UPSA were lower in healthy controls, while NP performance was slightly more reliable. SSPA test-retest reliability was lower. Practice effect sizes ranged from .05 to .16 for the UPSA and .07 to .19 for the NP assessment in patients, with HC having more practice effects. Reliable change intervals were consistent across NP and both FC measures, indicating equal potential for detection of change. These performance-based measures of functional capacity appear to have similar potential to be sensitive to change compared to NP performance in people with schizophrenia. PMID:20399613
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-04-19
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs. However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.
Chen, Xi; Xu, Yixuan; Liu, Anfeng
2017-01-01
High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%. PMID:28422062
de Witte, Annemarie M H; Hoozemans, Marco J M; Berger, Monique A M; van der Slikke, Rienk M A; van der Woude, Lucas H V; Veeger, Dirkjan H E J
2018-01-01
The aim of this study was to develop and describe a wheelchair mobility performance test in wheelchair basketball and to assess its construct validity and reliability. To mimic mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of wheelchair basketball matches and expert judgement. Forty-six players performed the test to determine its validity and 23 players performed the test twice for reliability. Independent-samples t-tests were used to assess whether the times needed to complete the test were different for classifications, playing standards and sex. Intraclass correlation coefficients (ICC) were calculated to quantify reliability of performance times. Males performed better than females (P < 0.001, effect size [ES] = -1.26) and international men performed better than national men (P < 0.001, ES = -1.62). Performance time of low (≤2.5) and high (≥3.0) classification players was borderline not significant with a moderate ES (P = 0.06, ES = 0.58). The reliability was excellent for overall performance time (ICC = 0.95). These results show that the test can be used as a standardised mobility performance test to validly and reliably assess the capacity in mobility performance of elite wheelchair basketball athletes. Furthermore, the described methodology of development is recommended for use in other sports to develop sport-specific tests.
COTS Ceramic Chip Capacitors: An Evaluation of the Parts and Assurance Methodologies
NASA Technical Reports Server (NTRS)
Brusse, Jay A.; Sampson, Michael J.
2004-01-01
Commercial-Off-The-Shelf (COTS) multilayer ceramic chip capacitors (MLCCs) are continually evolving to reduce physical size and increase volumetric efficiency. Designers of high reliability aerospace and military systems are attracted to these attributes of COTS MLCCs and would like to take advantage of them while maintaining the high standards for long-term reliable operation they are accustomed io when selecting military qualified established reliability (MIL-ER) MLCCs. However, MIL-ER MLCCs are not available in the full range of small chip sizes with high capacitance as found in today's COTS MLCCs. The objectives for this evaluation were to assess the long-term performance of small case size COTS MLCCs and to identify effective, lower-cost product assurance methodologies. Fifteen (15) lots of COTS X7R dielectric MLCCs from four (4) different manufacturers and two (2) MIL-ER BX dielectric MLCCs from two (2) of the same manufacturers were evaluated. Both 0805 and 0402 chip sizes were included. Several voltage ratings were tested ranging from a high of 50 volts to a low of 6.3 volts. The evaluation consisted of a comprehensive screening and qualification test program based upon MIL-PRF-55681 (i.e., voltage conditioning, thermal shock, moisture resistance, 2000-hour life test, etc.). In addition, several lot characterization tests were performed including Destructive Physical Analysis (DPA), Highly Accelerated Life Test (HALT) and Dielectric Voltage Breakdown Strength. The data analysis included a comparison of the 2000-hour life test results (used as a metric for long-term performance) relative to the screening and characterization test results. Results of this analysis indicate that the long-term life performance of COTS MLCCs is variable -- some lots perform well, some lots perform poorly. DPA and HALT were found to be promising lot characterization tests to identify substandard COTS MLCC lots prior to conducting more expensive screening and qualification tests. The results indicate that lot- specific screening and qualification are still recommended for high reliability applications. One significant and concerning observation is that MIL- type voltage conditioning (100 hours at twice rated voltage, 125 C) was not an effective screen in removing infant mortality parts for the particular lots of COTS MLCCs evaluated.
Reliability of magnetic resonance imaging assessment of rotator cuff: the ROW study.
Jain, Nitin B; Collins, Jamie; Newman, Joel S; Katz, Jeffrey N; Losina, Elena; Higgins, Laurence D
2015-03-01
Physiatrists encounter patients with rotator cuff disorders, and imaging is frequently an important component of their diagnostic assessment. However, there is a paucity of literature on the reliability of magnetic resonance imaging (MRI) assessment between shoulder specialists and musculoskeletal radiologists. We assessed inter- and intrarater reliability of MRI characteristics of the rotator cuff. Cross-sectional secondary analyses in a prospective cohort study. Academic tertiary care centers. Subjects with shoulder pain were recruited from orthopedic and physiatry clinics. Two shoulder-fellowship-trained physicians (a physiatrist and a shoulder surgeon) jointly performed a blinded composite MRI review by consensus of 31 subjects with shoulder pain. Subsequently, MRI was reviewed by one fellowship-trained musculoskeletal radiologist. We calculated the Cohen kappa coefficients and percentage agreement among the 2 reviews (composite review of 2 shoulder specialists versus that of the musculoskeletal radiologist). Intrarater reliability was assessed among the shoulder specialists by performing a repeated blinded composite MRI review. In addition to this repeated composite review, only one of the physiatry shoulder specialists performed an additional review. Interrater reliability (shoulder specialists versus musculoskeletal radiologist) was substantial for the presence or absence of tear (kappa 0.90 [95% confidence interval {CI}, 0.72-1.00]), tear thickness (kappa 0.84 [95% CI, 0.70-0.99]), longitudinal size of tear (kappa 0.75 [95% CI, 0.44-1.00]), fatty infiltration (kappa 0.62 [95% CI, 0.45-0.79]), and muscle atrophy (kappa 0.68 [95% CI, 0.50-0.86]). There was only fair interrater reliability of the transverse size of tear (kappa 0.20 [95% CI, 0.00-0.51]). The kappa for intrarater reliability was high for tear thickness (0.88 [95% CI, 0.72-1.00]), longitudinal tear size (0.61 [95% CI, 0.22-0.99]), fatty infiltration (0.89 [95% CI, 0.80,-0.98]), and muscle atrophy (0.87 [95% CI, 0.76-0.98]). Intrarater reliability for the individual shoulder specialist was similar to that of the composite reviews. There was high interrater and intrarater reliability for most findings on shoulder MRI. Analysis of our data supports the reliability of MRI assessment by shoulder specialists for rotator cuff disorders. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Pye, Kenneth; Blott, Simon J
2004-08-11
Particle size is a fundamental property of any sediment, soil or dust deposit which can provide important clues to nature and provenance. For forensic work, the particle size distribution of sometimes very small samples requires precise determination using a rapid and reliable method with a high resolution. The Coulter trade mark LS230 laser granulometer offers rapid and accurate sizing of particles in the range 0.04-2000 microm for a variety of sample types, including soils, unconsolidated sediments, dusts, powders and other particulate materials. Reliable results are possible for sample weights of just 50 mg. Discrimination between samples is performed on the basis of the shape of the particle size curves and statistical measures of the size distributions. In routine forensic work laser granulometry data can rarely be used in isolation and should be considered in combination with results from other techniques to reach an overall conclusion.
An interim report on the MCAT Essay Pilot Project.
Koenig, J A; Mitchell, K J
1988-01-01
Results from four pilot administrations of the Medical College Admission Test essay question are reported. Analyses focused on (a) the performance characteristics of sample groups differentiated by gender, size of hometown, race/ethnicity, and dominant language; (b) the relationships between essay scores and academic/demographic characteristics; and (c) the reliability of one 45-minute versus two 30-minute essays. No differences were found for examinees grouped by gender and size of home community. Mean differences among the racial/ethnic groups were explained largely by reading level differences. Differences in essay performance by language group were large and unexplained by reading level differences. No relationship was found between the essay score and the academic/demographic characteristics. Reliability estimates for two 30-minute essays were higher than for one 45-minute essay; however, the 30-minute period yielded writing of poorer quality. Test-retest reliabilities for the 45-minute topics will remain the focus of future studies as will performance by examinees for whom English is a second language. The impact of the essay on the selection process will also be assessed.
Multisite Reliability of Cognitive BOLD Data
Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.
2010-01-01
Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could off-set these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. It cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction in improving classifier reliability. PMID:23861920
Dalen, Havard; Gundersen, Guri H; Skjetne, Kyrre; Haug, Hilde H; Kleinau, Jens O; Norekval, Tone M; Graven, Torbjorn
2015-08-01
Routine assessment of volume state by ultrasound may improve follow-up of heart failure patients. We aimed to study the feasibility and reliability of focused pocket-size ultrasound examinations of the pleural cavities and the inferior vena cava performed by nurses to assess volume state at an outpatient heart failure clinic. Ultrasound examinations were performed in 62 included heart failure patients by specialized nurses with a pocket-size imaging device (PSID). Patients were then re-examined by a cardiologist with a high-end scanner for reference within 1 h. Specialized nurses were able to obtain and interpret images from both pleural cavities and the inferior vena cava and estimate the volume status in all patients. Time consumption for focused ultrasound examination was median 5 min. In total 26 patients had any kind of pleural effusion (in 39 pleural cavities) by reference. The sensitivity, specificity, positive and negative predictive values were high, all ≥ 92%. The correlations with reference were high for all measurements, all r ≥ 0.79. Coefficients of variation for end-expiratory dimension of inferior vena cava and quantification of pleural effusion were 10.8% and 12.7%, respectively. Specialized nurses were, after a dedicated training protocol, able to obtain reliable recordings of both pleural cavities and the inferior vena cava by PSID and interpret the images in a reliable way. Implementing focused ultrasound examinations to assess volume status by nurses in an outpatient heart failure clinic may improve diagnostics, and thus improve therapy. © The European Society of Cardiology 2014.
The De-Escalating Aggressive Behaviour Scale: development and psychometric testing.
Nau, Johannes; Halfens, Ruud; Needham, Ian; Dassen, Theo
2009-09-01
This paper is a report of a study to develop and test the psychometric properties of a scale measuring nursing students' performance in de-escalation of aggressive behaviour. Successful training should lead not merely to more knowledge and amended attitudes but also to improved performance. However, the quality of de-escalation performance is difficult to assess. Based on a qualitative investigation, seven topics pertaining to de-escalating behaviour were identified and the wording of items tested. The properties of the items and the scale were investigated quantitatively. A total of 1748 performance evaluations by students (rater group 1) from a skills laboratory were used to check distribution and conduct a factor analysis. Likewise, 456 completed evaluations by de-escalation experts (rater group 2) of videotaped performances at pre- and posttest were used to investigate internal consistency, interrater reliability, test-retest reliability, effect size and factor structure. Data were collected in 2007-2008 in German. Factor analysis showed a unidimensional 7-item scale with factor loadings ranging from 0.55 to 0.81 (rater group 1) and 0.48 to 0.88 (rater group 2). Cronbach's alphas of 0.87 and 0.88 indicated good internal consistency irrespective of rater group. A Pearson's r of 0.80 confirmed acceptable test-retest reliability, and interrater reliability Intraclass Correlation 3 ranging from 0.77 to 0.93 also showed acceptable results. The effect size r of 0.53 plus Cohen's d of 1.25 indicates the capacity of the scale to detect changes in performance. Further research is needed to test the English version of the scale and its validity.
The impact of Lean bundles on hospital performance: does size matter?
Al-Hyari, Khalil; Abu Hammour, Sewar; Abu Zaid, Mohammad Khair Saleem; Haffar, Mohamed
2016-10-10
Purpose The purpose of this paper is to study the effect of the implementation of Lean bundles on hospital performance in private hospitals in Jordan and evaluate how much the size of organization can affect the relationship between Lean bundles implementation and hospital performance. Design/methodology/approach The research is considered as quantitative method (descriptive and hypothesis testing). Three statistical techniques were adopted to analyse the data. Structural equation modeling techniques and multi-group analysis were used to examine the research's hypothesis, and to perform the required statistical analysis of the data from the survey. Reliability analysis and confirmatory factor analysis were used to test the construct validity, reliability and measurement loadings that were performed. Findings Lean bundles have been identified as an effective approach that can dramatically improve the organizational performance of private hospitals in Jordan. Main Lean bundles - just in time, human resource management, and total quality management are applicable to large, small and medium hospitals without significant differences in advantages that depend on size. Originality/value According to the researchers' best knowledge, this is the first research that studies the impact of Lean bundles implementation in healthcare sector in Jordan. This research also makes a significant contribution for decision makers in healthcare to increase their awareness of Lean bundles.
MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.
Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne
2014-01-01
When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement through focused, effective physician performance improvement is well advised to consider the value of incorporating reliability adjustments into their performance measurement system.
The assessment of biases in the acoustic discrimination of individuals
Šálek, Martin
2017-01-01
Animal vocalizations contain information about individual identity that could potentially be used for the monitoring of individuals. However, the performance of individual discrimination is subjected to many biases depending on factors such as the amount of identity information, or methods used. These factors need to be taken into account when comparing results of different studies or selecting the most cost-effective solution for a particular species. In this study, we evaluate several biases associated with the discrimination of individuals. On a large sample of little owl male individuals, we assess how discrimination performance changes with methods of call description, an increasing number of individuals, and number of calls per male. Also, we test whether the discrimination performance within the whole population can be reliably estimated from a subsample of individuals in a pre-screening study. Assessment of discrimination performance at the level of the individual and at the level of call led to different conclusions. Hence, studies interested in individual discrimination should optimize methods at the level of individuals. The description of calls by their frequency modulation leads to the best discrimination performance. In agreement with our expectations, discrimination performance decreased with population size. Increasing the number of calls per individual linearly increased the discrimination of individuals (but not the discrimination of calls), likely because it allows distinction between individuals with very similar calls. The available pre-screening index does not allow precise estimation of the population size that could be reliably monitored. Overall, projects applying acoustic monitoring at the individual level in population need to consider limitations regarding the population size that can be reliably monitored and fine-tune their methods according to their needs and limitations. PMID:28486488
42 CFR 401.705 - Eligibility criteria for qualified entities.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (iv) Designing, and continuously improving the format of performance reports on providers and... subpart address the methodological concerns regarding sample size and reliability that have been expressed...
42 CFR 401.705 - Eligibility criteria for qualified entities.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (iv) Designing, and continuously improving the format of performance reports on providers and... subpart address the methodological concerns regarding sample size and reliability that have been expressed...
42 CFR 401.705 - Eligibility criteria for qualified entities.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (iv) Designing, and continuously improving the format of performance reports on providers and... subpart address the methodological concerns regarding sample size and reliability that have been expressed...
Microgrid Design Toolkit (MDT) Technical Documentation and Component Summaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arguello, Bryan; Gearhart, Jared Lee; Jones, Katherine A.
2015-09-01
The Microgrid Design Toolkit (MDT) is a decision support software tool for microgrid designers to use during the microgrid design process. The models that support the two main capabilities in MDT are described. The first capability, the Microgrid Sizing Capability (MSC), is used to determine the size and composition of a new microgrid in the early stages of the design process. MSC is a mixed-integer linear program that is focused on developing a microgrid that is economically viable when connected to the grid. The second capability is focused on refining a microgrid design for operation in islanded mode. This secondmore » capability relies on two models: the Technology Management Optimization (TMO) model and Performance Reliability Model (PRM). TMO uses a genetic algorithm to create and refine a collection of candidate microgrid designs. It uses PRM, a simulation based reliability model, to assess the performance of these designs. TMO produces a collection of microgrid designs that perform well with respect to one or more performance metrics.« less
Soultan, Alaaeldin; Safi, Kamran
2017-01-01
Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.
2004-01-01
A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-18231and associated mh18232POD software gives most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using Point Estimate Method. POD Point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses binomial distribution for probability density. Normally, a set of 29 flaws of same size within some tolerance are used in the demonstration. The optimization is performed to provide acceptable value for probability of passing demonstration (PPD) and achieving acceptable value for probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.
Breaking a habit: a further role of the phonological loop in action control.
Saeki, Erina; Baddeley, Alan D; Hitch, Graham J; Saito, Satoru
2013-10-01
Recent research has suggested that keeping track of a task goal in rapid task switching may depend on the phonological loop component of working memory. In this study, we investigated whether the phonological loop plays a similar role when a single switch extending over several trials is required after many trials on which one has performed a competing task. Participants were shown pairs of digits varying in numerical and physical size, and they were required to decide which digit was numerically or physically larger. An experimental cycle consisted of four blocks of 24 trials. In Experiment 1, participants in the task change groups performed the numerical-size judgment task during the first three blocks, and then changed to the physical-size judgment task in the fourth. Participants in the continuation groups performed only the physical-size judgment task throughout all four blocks. We found negative effects of articulatory suppression on the fourth block, but only in the task change groups. Experiment 2 was a replication, with the modification that both groups received identical instructions and practice. Experiment 3 was a further replication using numerical-size judgment as the target task. The results showed a pattern similar to that from Experiment 1, with negative effects of articulatory suppression found only in the task change group. The congruity of numerical and physical size had a reliable effect on performance in all three experiments, but unlike the task change, it did not reliably interact with articulatory suppression. The results suggest that in addition to its well-established role in rapid task switching, the phonological loop also contributes to active goal maintenance in longer-term action control.
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Brunetti, Natale Daniele; Delli Carri, Felice; Ruggiero, Maria Assunta; Cuculo, Andrea; Ruggiero, Antonio; Ziccardi, Luigi; De Gennaro, Luisa; Di Biase, Matteo
2014-03-01
Exact quantification of plaque extension during coronary angioplasty (PCI) usually falls on interventional cardiologist (IC). Quantitative coronary stenosis assessment (QCA) may be possibly committed to the radiology technician (RT), who usually supports cath-lab nurse and IC during PCI. We therefore sought to investigate the reliability of QCA performed by RT in comparison with IC. Forty-four consecutive patients with acute coronary syndrome underwent PCI; target coronary vessel size beneath target coronary lesion (S) and target coronary lesion length (L) were assessed by the RT, junior IC (JIC), and senior IC (SIC) and then compared. SIC evaluation, which determined the final stent selection for coronary stenting, was considered as a reference benchmark. RT performance with QCA support in assessing target vessel size and target lesion length was not significantly different from SIC (r = 0.46, p < 0.01; r = 0.64, p < 0.001, respectively) as well as JIC (r = 0.79, r = 0.75, p < 0.001, respectively). JIC performance was significantly better than RT in assessing target vessel size (p < 0.05), while not significant when assessing target lesion length. RT may reliably assess target lesion by using adequate QCA software in the cath-lab in case of PCI; RT performance does not differ from SIC.
Tooth-size discrepancy: A comparison between manual and digital methods
Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge
2014-01-01
Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two of which by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using Friedman test and no statistically significant differences were found between the two methods (P > 0.05), except for values found by the linear digital method which revealed a slight, non-significant statistical difference. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.
1990-01-01
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.
Beauvais, Brad; Richter, Jason; Brezinski, Paul
The 2014 Military Health System Review calls for healthcare system leaders to implement effective strategies used by other high-performing organizations. The authors state, " the [military health system] MHS can create an optimal healthcare environment that focuses on continuous quality improvement where every patient receives safe, high-quality care at all times" (Military Health System, 2014, p. 1). Although aspirational, the document does not specify how a highly reliable health system is developed or what systemic factors are necessary to sustain highly reliable performance. Our work seeks to address this gap and provide guidance to MHS leaders regarding how high-performing organizations develop exceptional levels of performance.The authors' expectation is that military medicine will draw on these lessons to enhance leadership, develop exceptional organizational cultures, onboard and engage employees, build customer loyalty, and improve quality of care. Leaders from other segments of the healthcare field likely will find this study valuable given the size of the military healthcare system (9.6 million beneficiaries), the United States' steady progression toward population-based health, and the increasing need for highly reliable systems and performance.
[Study of the reliability in one dimensional size measurement with digital slit lamp microscope].
Wang, Tao; Qi, Chaoxiu; Li, Qigen; Dong, Lijie; Yang, Jiezheng
2010-11-01
To study the reliability of digital slit lamp microscope as a tool for quantitative analysis in one dimensional size measurement. Three single-blinded observers acquired and repeatedly measured the images with a size of 4.00 mm and 10.00 mm on the vernier caliper, which simulatated the human eye pupil and cornea diameter under China-made digital slit lamp microscope in the objective magnification of 4 times, 10 times, 16 times, 25 times, 40 times and 4 times, 10 times, 16 times, respectively. The correctness and precision of measurement were compared. The images with 4 mm size were measured by three investigators and the average values were located between 3.98 to 4.06. For the images with 10.00 mm size, the average values fell within 10.00 ~ 10.04. Measurement results of 4.00 mm images showed, except A4, B25, C16 and C25, significant difference was noted between the measured value and the true value. Regarding measurement results of 10.00 mm iamges indicated, except A10, statistical significance was found between the measured value and the true value. In terms of comparing the results of the same size measured at different magnifications by the same investigator, except for investigators A's measurements of 10.00 mm dimension, the measurement results by all the remaining investigators presented statistical significance at different magnifications. Compared measurements of the same size with different magnifications, measurements of 4.00 mm in 4-fold magnification had no significant difference among the investigators', the remaining results were statistically significant. The coefficient of variation of all measurement results were less than 5%; as magnification increased, the coefficient of variation decreased. The measurement of digital slit lamp microscope in one-dimensional size has good reliability,and should be performed for reliability analysis before used for quantitative analysis to reduce systematic errors.
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
Reduction of Specimen Size for the Full Simultaneous Characterization of Thermoelectric Performance
NASA Astrophysics Data System (ADS)
Vasilevskiy, D.; Simard, J.-M.; Masut, R. A.; Turenne, S.
2017-05-01
The successful implementation of thermoelectric (TE) materials for waste heat recovery depends strongly on our ability to increase their performance. This challenge continues to generate a renewed interest in novel high TE performance compounds. The technological difficulties in producing homogeneous ingots of new compounds or alloys with regular shape and a size sufficiently large to prepare several samples that are usually needed for a separate measurement of all TE parameters are well known. It creates a situation whereby material performance could be critically over- or under-evaluated at the first stages of the research process of a new material. Both cases would equally lead to negative consequences. Thus, minimizing the specimen size yet keeping it adequate for accurate material characterization becomes extremely important. In this work we report the experimental validation of reliable simultaneous measurements of the four most relevant TE parameters on a single bismuth telluride alloy based specimen of 4 mm × 4 mm × 1.4 mm in size. This translates in roughly 140 mg in weight for one of the heaviest TE materials, as was used in this study, and <100 mg for most others. Our validation is based on comparative measurements performed by a Harman apparatus (ZT-Scanner) on a series of differently sized specimens of hot extruded bismuth telluride based alloys. The Seebeck coefficient, electrical resistivity, thermal conductivity and the figure of merit were simultaneously assessed from 300 K to 440 K with increments of 20 K, 15 K, 10 K, 5 K, and 1 K. Our choice of a well-known homogeneous material has been made to increase measurement reliability and accuracy, but the results are expected to be valid for the full TE characterization of any unknown material. These results show a way to significantly decrease specimen sizes which has the potential to accelerate investigation of novel TE materials for large scale waste heat recovery.
Workshop on Microwave Power Transmission and Reception. Workshop Paper Summaries
NASA Technical Reports Server (NTRS)
1980-01-01
Microwave systems performance and phase control are discussed. Component design and reliability are highlighted. The power amplifiers, radiating elements, rectennas, and solid state configurations are described. The proper sizing of microwave transmission systems is also discussed.
Fracture mechanics concepts in reliability analysis of monolithic ceramics
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.; Gyekenyesi, John P.
1987-01-01
Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation requires that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration /reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities above mentioned, both the size and cost of the ground-station terminals have to be reduced by using reliable, high-throughput, fast and cost-effective on-board computing system which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing a fault tolerance into the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance all of which interacting through a centered graphical user interface.
Expanding the Scope of High-Performance Computing Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uram, Thomas D.; Papka, Michael E.
The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.
Delli Carri, Felice; Ruggiero, Maria Assunta; Cuculo, Andrea; Ruggiero, Antonio; Ziccardi, Luigi; De Gennaro, Luisa; Di Biase, Matteo
2014-01-01
Background Exact quantification of plaque extension during coronary angioplasty (PCI) usually falls on interventional cardiologist (IC). Quantitative coronary stenosis assessment (QCA) may be possibly committed to the radiology technician (RT), who usually supports cath-lab nurse and IC during PCI. We therefore sought to investigate the reliability of QCA performed by RT in comparison with IC. Methods Forty-four consecutive patients with acute coronary syndrome underwent PCI; target coronary vessel size beneath target coronary lesion (S) and target coronary lesion length (L) were assessed by the RT, junior IC (JIC), and senior IC (SIC) and then compared. SIC evaluation, which determined the final stent selection for coronary stenting, was considered as a reference benchmark. Results RT performance with QCA support in assessing target vessel size and target lesion length was not significantly different from SIC (r = 0.46, p < 0.01; r = 0.64, p < 0.001, respectively) as well as JIC (r = 0.79, r = 0.75, p < 0.001, respectively). JIC performance was significantly better than RT in assessing target vessel size (p < 0.05), while not significant when assessing target lesion length. Conclusions RT may reliably assess target lesion by using adequate QCA software in the cath-lab in case of PCI; RT performance does not differ from SIC. PMID:24672672
Chhapola, Viswas; Tiwari, Soumya; Deepthi, Bobbity; Henry, Brandon Michael; Brar, Rekha; Kanwal, Sandeep Kumar
2018-06-01
A plethora of research is available on ultrasonographic kidney size standards. We performed a systematic review of methodological quality of ultrasound studies aimed at developing normative renal parameters in healthy children, by evaluating the risk of bias (ROB) using the 'Anatomical Quality Assessment (AQUA)' tool. We searched Medline, Scopus, CINAHL, and Google Scholar on June 04 2018, and observational studies measuring kidney size by ultrasonography in healthy children (0-18 years) were included. The ROB of each study was evaluated in five domains using a 20 item coding scheme based on AQUA tool framework. Fifty-four studies were included. Domain 1 (subject characteristics) had a high ROB in 63% of studies due to the unclear description of age, sex, and ethnicity. The performance in Domain 2 (study design) was the best with 85% of studies having a prospective design. Methodological characterization (Domain 3) was poor across the studies (< 10% compliance), with suboptimal performance in the description of patient positioning, operator experience, and assessment of intra/inter-observer reliability. About three-fourth of the studies had a low ROB in Domain 4 (descriptive anatomy). Domain 5 (reporting of results) had a high ROB in approximately half of the studies, the majority reporting results in the form of central tendency measures. Significant deficiencies and heterogeneity were observed in the methodological quality of USG studies performed to-date for measurement of kidney size in children. We hereby provide a framework for the conducting such studies in future. PROSPERO (CRD42017071601).
Dependence of Grain Size on the Performance of a Polysilicon Channel TFT for 3D NAND Flash Memory.
Kim, Seung-Yoon; Park, Jong Kyung; Hwang, Wan Sik; Lee, Seung-Jun; Lee, Ki-Hong; Pyi, Seung Ho; Cho, Byung Jin
2016-05-01
We investigated the dependence of grain size on the performance of a polycrystalline silicon (poly-Si) channel TFT for application to 3D NAND Flash memory devices. It has been found that the device performance and memory characteristics are strongly affected by the grain size of the poly-Si channel. Higher on-state current, faster program speed, and poor endurance/reliability properties are observed when the poly-Si grain size is large. These are mainly attributed to the different local electric field induced by an oxide valley at the interface between the poly-Si channel and the gate oxide. In addition, the trap density at the gate oxide interface was successfully measured using a charge pumping method by the separation between the gate oxide interface traps and traps at the grain boundaries in the poly-Si channel. The poly-Si channel with larger grain size has lower interface trap density.
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD, showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
Stochastic modelling of the hydrologic operation of rainwater harvesting systems
NASA Astrophysics Data System (ADS)
Guo, Rui; Guo, Yiping
2018-07-01
Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac
1993-01-01
Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.
High-performance liquid chromatographic method for the determination of dansyl-polyamines
Subhash C. Minocha; Rakesh Minocha; Cheryl A. Robie
1990-01-01
This paper describes a fast reliable, and a sensitive technique for the separation and quantification of dansylated polyamines by high-performance liquid chromatography. Using a small 33 x 4.6 mm I.D., 3 ?m particle size, C18 reversed-phase cartridge column and a linear gradient of acetonitrile-heptanesulfonate (10 mM, pH 3.4...
Revell, J D; Mirmehdi, M; McNally, D S
2004-04-01
We examine tissue deformations using non-invasive dynamic musculoskeletal ultrasonograhy, and quantify its performance on controlled in vitro gold standard (groundtruth) sequences followed by clinical in vivo data. The proposed approach employs a two-dimensional variable-sized block matching algorithm with a hierarchical full search. We extend this process by refining displacements to sub-pixel accuracy. We show by application that this technique yields quantitatively reliable results.
Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A; Campos, Michael A; Cahalin, Lawrence P
2018-01-01
The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Test-retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test-retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. The TIRE measures of MIP, SMIP and ID have excellent test-retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP.
Thermal Management and Reliability of Power Electronics and Electric Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant
2016-09-19
Increasing the number of electric-drive vehicles (EDVs) on America's roads has been identified as a strategy with near-term potential for dramatically decreasing the nation's dependence on oil - by the U.S. Department of Energy, the federal cross-agency EV-Everywhere Challenge, and the automotive industry. Mass-market deployment will rely on meeting aggressive technical targets, including improved efficiency and reduced size, weight, and cost. Many of these advances will depend on optimization of thermal management. Effective thermal management is critical to improving the performance and ensuring the reliability of EDVs. Efficient heat removal makes higher power densities and lower operating temperatures possible, andmore » in turn enables cost and size reductions. The National Renewable Energy Laboratory (NREL), along with DOE and industry partners is working to develop cost-effective thermal management solutions to increase device and component power densities. In this presentation, the activities in recent years related to thermal management and reliability of automotive power electronics and electric machines are presented.« less
Brackley, Victoria; Ball, Kevin; Tor, Elaine
2018-05-12
The effectiveness of the swimming turn is highly influential to overall performance in competitive swimming. The push-off or wall contact, within the turn phase, is directly involved in determining the speed the swimmer leaves the wall. Therefore, it is paramount to develop reliable methods to measure the wall-contact-time during the turn phase for training and research purposes. The aim of this study was to determine the concurrent validity and reliability of the Pool Pad App to measure wall-contact-time during the freestyle and backstroke tumble turn. The wall-contact-times of nine elite and sub-elite participants were recorded during their regular training sessions. Concurrent validity statistics included the standardised typical error estimate, linear analysis and effect sizes while the intraclass correlating coefficient (ICC) was used for the reliability statistics. The standardised typical error estimate resulted in a moderate Cohen's d effect size with an R 2 value of 0.80 and the ICC between the Pool Pad and 2D video footage was 0.89. Despite these measurement differences, the results from this concurrent validity and reliability analyses demonstrated that the Pool Pad is suitable for measuring wall-contact-time during the freestyle and backstroke tumble turn within a training environment.
Pneumothorax size measurements on digital chest radiographs: Intra- and inter- rater reliability.
Thelle, Andreas; Gjerdevik, Miriam; Grydeland, Thomas; Skorge, Trude D; Wentzel-Larsen, Tore; Bakke, Per S
2015-10-01
Detailed and reliable methods may be important for discussions on the importance of pneumothorax size in clinical decision-making. Rhea's method is widely used to estimate pneumothorax size in percent based on chest X-rays (CXRs) from three measure points. Choi's addendum is used for anterioposterior projections. The aim of this study was to examine the intrarater and interrater reliability of the Rhea and Choi method using digital CXR in the ward based PACS monitors. Three physicians examined a retrospective series of 80 digital CXRs showing pneumothorax, using Rhea and Choi's method, then repeated in a random order two weeks later. We used the analysis of variance technique by Eliasziw et al. to assess the intrarater and interrater reliability in altogether 480 estimations of pneumothorax size. Estimated pneumothorax sizes ranged between 5% and 100%. The intrarater reliability coefficient was 0.98 (95% one-sided lower-limit confidence interval C 0.96), and the interrater reliability coefficient was 0.95 (95% one-sided lower-limit confidence interval 0.93). This study has shown that the Rhea and Choi method for calculating pneumothorax size has high intrarater and interrater reliability. These results are valid across gender, side of pneumothorax and whether the patient is diagnosed with primary or secondary pneumothorax. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A New Method for the Evaluation and Prediction of Base Stealing Performance.
Bricker, Joshua C; Bailey, Christopher A; Driggers, Austin R; McInnis, Timothy C; Alami, Arya
2016-11-01
Bricker, JC, Bailey, CA, Driggers, AR, McInnis, TC, and Alami, A. A new method for the evaluation and prediction of base stealing performance. J Strength Cond Res 30(11): 3044-3050, 2016-The purposes of this study were to evaluate a new method using electronic timing gates to monitor base stealing performance in terms of reliability, differences between it and traditional stopwatch-collected times, and its ability to predict base stealing performance. Twenty-five healthy collegiate baseball players performed maximal effort base stealing trials with a right and left-handed pitcher. An infrared electronic timing system was used to calculate the reaction time (RT) and total time (TT), whereas coaches' times (CT) were recorded with digital stopwatches. Reliability of the TGM was evaluated with intraclass correlation coefficients (ICCs) and coefficient of variation (CV). Differences between the TGM and traditional CT were calculated with paired samples t tests Cohen's d effect size estimates. Base stealing performance predictability of the TGM was evaluated with Pearson's bivariate correlations. Acceptable relative reliability was observed (ICCs 0.74-0.84). Absolute reliability measures were acceptable for TT (CVs = 4.4-4.8%), but measures were elevated for RT (CVs = 32.3-35.5%). Statistical and practical differences were found between TT and CT (right p = 0.00, d = 1.28 and left p = 0.00, d = 1.49). The TGM TT seems to be a decent predictor of base stealing performance (r = -0.49 to -0.61). The authors recommend using the TGM used in this investigation for athlete monitoring because it was found to be reliable, seems to be more precise than traditional CT measured with a stopwatch, provides an additional variable of value (RT), and may predict future performance.
Battery capacity and recharging needs for electric buses in city transit service
Gao, Zhiming; Lin, Zhenhong; LaClair, Tim J.; ...
2017-01-27
Our paper evaluates the energy consumption and battery performance of city transit electric buses operating on real day-to-day routes and standardized bus drive cycles, based on a developed framework tool that links bus electrification feasibility with real-world vehicle performance, city transit bus service reliability, battery sizing and charging infrastructure. The impacts of battery capacity combined with regular and ultrafast charging over different routes have been analyzed in terms of the ability to maintain city transit bus service reliability like conventional buses. These results show that ultrafast charging via frequent short-time boost charging events, for example at a designated bus stopmore » after completing each circuit of an assigned route, can play a significant role in reducing the battery size and can eliminate the need for longer duration charging events that would cause schedule delays. Furthermore, the analysis presented shows that significant benefits can be realized by employing multiple battery configurations and flexible battery swapping practices in electric buses. These flexible design and use options will allow electric buses to service routes of varying city driving patterns and can therefore enable meaningful reductions to the cost of the vehicle and battery while ensuring service that is as reliable as conventional buses.« less
Battery capacity and recharging needs for electric buses in city transit service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiming; Lin, Zhenhong; LaClair, Tim J.
Our paper evaluates the energy consumption and battery performance of city transit electric buses operating on real day-to-day routes and standardized bus drive cycles, based on a developed framework tool that links bus electrification feasibility with real-world vehicle performance, city transit bus service reliability, battery sizing and charging infrastructure. The impacts of battery capacity combined with regular and ultrafast charging over different routes have been analyzed in terms of the ability to maintain city transit bus service reliability like conventional buses. These results show that ultrafast charging via frequent short-time boost charging events, for example at a designated bus stopmore » after completing each circuit of an assigned route, can play a significant role in reducing the battery size and can eliminate the need for longer duration charging events that would cause schedule delays. Furthermore, the analysis presented shows that significant benefits can be realized by employing multiple battery configurations and flexible battery swapping practices in electric buses. These flexible design and use options will allow electric buses to service routes of varying city driving patterns and can therefore enable meaningful reductions to the cost of the vehicle and battery while ensuring service that is as reliable as conventional buses.« less
Reliability Analysis of Uniaxially Ground Brittle Materials
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.
1995-01-01
The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
David Ray; Chad Keyser; Robert Seymour; John Brissette
2008-01-01
Forest managers are increasingly called upon to provide long-term predictions of forest development. The dynamics of regeneration establishment, survival and subsequent recruitment of established seedlings to larger size classes is a critical component of these forecasts, yet remains a weak link in available models. To test the reliability of FVS-NE for simulating...
Fabrication, test and demonstration of critical environment monitoring system
NASA Technical Reports Server (NTRS)
Heimendinger, K. W.
1972-01-01
Design and performance of an analytical system for the evaluation of certain environmental constituents in critical environmental areas of the Quality Reliability and Assurance Laboratory are reported. Developed was a self-contained, integrated, minimum sized unit that detects, interrogates, and records those parameters of the environment dictated for control in large storage facilities, clean rooms, temporarily curtained enclosures, and special working benches. The system analyzes humidity, temperature, hydrocarbons particle size, and particle count within prescribed clean areas.
Measuring Physical Activity in Pregnancy Using Questionnaires: A Meta-Analysis
Schuster, Snježana; Šklempe Kokić, Iva; Sindik, Joško
2016-09-01
Physical activity (PA) during normal pregnancy has various positive effects on pregnant women’s health. Determination of the relationship between PA and health outcomes requires accurate measurement of PA in pregnant women. The purpose of this review is to provide a summary of valid and reliable PA questionnaires for pregnant women. During 2013, Pubmed, OvidSP and Web of Science databases were searched for trials on measurement properties of PA questionnaires for pregnant population. Six studies and four questionnaires met the inclusion criteria: Pregnancy Physical Activity Questionnaire, Modified Kaiser Physical Activity Survey, Short Pregnancy Leisure Time Physical Activity Questionnaire and Third Pregnancy Infection and Nutrition Study Physical Activity Questionnaire. Assessment of validity and reliability was performed using correlations of the scores in these questionnaires with objective measures and subjective measures (self-report) of PA, as well as test-retest reliability coefficients. Sample sizes included in analysis varied from 45 to 177 subjects. The best validity and reliability characteristics (together with effect sizes) were identified for the Modified Kaiser Physical Activity Survey and Pregnancy Physical Activity Questionnaire (French, Vietnamese, standard). In conclusion, assessment of PA during pregnancy remains a challenging and complex task. Questionnaires are a simple and effective, yet limited tool for assessing PA.
A Framework For Fault Tolerance In Virtualized Servers
2016-06-01
effects into the system. Decrease in performance, the expansion in the total system size and weight, and a hike in the system cost can be counted in... benefit also shines out in terms of reliability. 41 4. How Data Guard Synchronizes Standby Databases Primary and standby databases in Oracle Data
Efficiency tests of samplers for microbiological aerosols, a review
NASA Technical Reports Server (NTRS)
Henningson, E.; Faengmark, I.
1984-01-01
To obtain comparable results from studies using a variety of samplers of microbiological aerosols with different collection performances for various particle sizes, methods reported in the literature were surveyed, evaluated, and tabulated for testing the efficiency of the samplers. It is concluded that these samplers were not thoroughly tested, using reliable methods. Tests were conducted in static air chambers and in various outdoor and work environments. Results are not reliable as it is difficult to achieve stable and reproducible conditions in these test systems. Testing in a wind tunnel is recommended.
Study samples are too small to produce sufficiently precise reliability coefficients.
Charter, Richard A
2003-04-01
In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
NASA Astrophysics Data System (ADS)
Wang, Qian; Choa, Sung-Hoon; Kim, Woonbae; Hwang, Junsik; Ham, Sukjin; Moon, Changyoul
2006-03-01
Development of packaging is one of the critical issues toward realizing commercialization of radio-frequency-microelectromechanical system (RF-MEMS) devices. The RF-MEMS package should be designed to have small size, hermetic protection, good RF performance, and high reliability. In addition, packaging should be conducted at sufficiently low temperature. In this paper, a low-temperature hermetic wafer level packaging scheme for the RF-MEMS devices is presented. For hermetic sealing, Au-Sn eutectic bonding technology at temperatures below 300°C is used. Au-Sn multilayer metallization with a square loop of 70 µm in width is performed. The electrical feed-through is achieved by the vertical through-hole via filling with electroplated Cu. The size of the MEMS package is 1 mm × 1 mm × 700 µm. The shear strength and hermeticity of the package satisfies the requirements of MIL-STD-883F. Any organic gases or contamination are not observed inside the package. The total insertion loss for the packaging is 0.075 dB at 2 GHz. Furthermore, the robustness of the package is demonstrated by observing no performance degradation and physical damage of the package after several reliability tests.
Implications of scaling on static RAM bit cell stability and reliability
NASA Astrophysics Data System (ADS)
Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael
1993-01-01
In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criteria. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques which are presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed in reliability which can be accomplished during the early stages of product development.
Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A
2018-01-01
Purpose The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. Conclusion The TIRE measures of MIP, SMIP and ID have excellent test–retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP. PMID:29805255
Haugan, Gørill; Drageset, Jorunn
2014-08-01
Depression and anxiety are particularly common among individuals living in long-term care facilities. Therefore, access to a valid and reliable measure of anxiety and depression among nursing home patients is highly warranted. To investigate the dimensionality, reliability and construct validity of the Hospital Anxiety and Depression scale (HADS) in a cognitively intact nursing home population. Cross-sectional data were collected from two samples; 429 cognitively intact nursing home patients participated, representing 74 different Norwegian nursing homes. Confirmative factor analyses and correlations with selected constructs were used. The two-factor model provided a good fit in Sample1, revealing a poorer fit in Sample2. Good-acceptable measurement reliability was demonstrated, and construct validity was supported. Using listwise deletion the sample sizes were 227 and 187, for Sample1 and Sample2, respectively. Greater sample sizes would have strengthen the statistical power in the tests. The researchers visited the participants to help fill in the questionnaires; this might have introduced some bias into the respondents׳ reporting. The 14 HADS items were part of greater questionnaires. Thus, frail, older NH patients might have tired during the interview causing a possible bias. Low reliability for depression was disclosed, mainly resulting from three items appearing to be inappropriate indicators for depression in this population. Further research is needed exploring which items might perform as more reliably indicators for depression among nursing home patients. Copyright © 2014 Elsevier B.V. All rights reserved.
Liang, Li; Xu, Jun; Liang, Zhi-Tao; Dong, Xiao-Ping; Chen, Hu-Biao; Zhao, Zhong-Zhen
2018-05-08
In commercial herbal markets, Polygoni Multiflori Radix (PMR, the tuberous roots of Polygonum multiflorum Thunb.), a commonly-used Chinese medicinal material, is divided into different grades based on morphological features of size and weight. While more weight and larger size command a higher price, there is no scientific data confirming that the more expensive roots are in fact of better quality. To assess the inherent quality of various grades and of various tissues in PMR and to find reliable morphological indicators of quality, a method combining laser microdissection (LMD) and ultra-performance liquid chromatography triple-quadrupole mass spectrometry (UPLC-QqQ-MS/MS) was applied. Twelve major chemical components were quantitatively determined in both whole material and different tissues of PMR. Determination of the whole material revealed that traditional commercial grades based on size and weight of PRM did not correspond to any significant differences in chemical content. Instead, tissue-specific analysis indicated that the morphological features could be linked with quality in a new way. That is, PMR with broader cork and phloem, as seen in a transverse section, were typically of better quality as these parts are where the bioactive components accumulate. The tissue-specific analysis of secondary metabolites creates a reliable morphological criterion for quality grading of PMR.
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates. .
Practical aspects of photovoltaic technology, applications and cost (revised)
NASA Technical Reports Server (NTRS)
Rosenblum, L.
1985-01-01
The purpose of this text is to provide the reader with the background, understanding, and computational tools needed to master the practical aspects of photovoltaic (PV) technology, application, and cost. The focus is on stand-alone, silicon solar cell, flat-plate systems in the range of 1 to 25 kWh/day output. Technology topics covered include operation and performance of each of the major system components (e.g., modules, array, battery, regulators, controls, and instrumentation), safety, installation, operation and maintenance, and electrical loads. Application experience and trends are presented. Indices of electrical service performance - reliability, availability, and voltage control - are discussed, and the known service performance of central station electric grid, diesel-generator, and PV stand-alone systems are compared. PV system sizing methods are reviewed and compared, and a procedure for rapid sizing is described and illustrated by the use of several sample cases. The rapid sizing procedure yields an array and battery size that corresponds to a minimum cost system for a given load requirement, insulation condition, and desired level of service performance. PV system capital cost and levelized energy cost are derived as functions of service performance and insulation. Estimates of future trends in PV system costs are made.
Memorial Hermann: high reliability from board to bedside.
Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire
2013-06-01
In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rate of central line-associated bloodstream and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.
Kaya, M S; Güçlü, B; Schimmel, M; Akyüz, S
2017-11-01
The unappealing taste of the chewing material and the time-consuming repetitive task in masticatory performance tests using artificial foodstuff may discourage children from performing natural chewing movements. Therefore, the aim was to determine the validity and reliability of a two-colour chewing gum mixing ability test for masticatory performance (MP) assessment in mixed dentition children. Masticatory performance was tested in two groups: systemically healthy fully dentate young adults and children in mixed dentition. Median particle size was assessed using a comminution test, and a two-colour chewing gum mixing ability test was applied for MP analysis. Validity was tested with Pearson correlation, and reliability was tested with intra-class correlation coefficient, Pearson correlation and Bland-Altman plots. Both comminution and two-colour chewing gum mixing ability tests revealed statistically significant MP differences between children (n = 25) and adults (n = 27, both P < 0·01). Pearson correlation between comminution and two-colour chewing gum mixing ability tests was positive and significant (r = 0·418, P = 0·002). Correlations for interobserver reliability and test-retest values were significant (r = 0·990, P = 0·0001 and r = 0·995, P = 0·0001). Although both methods could discriminate MP differences, the comminution test detected these differences generally in a wider range compared to two-colour chewing gum mixing ability test. However, considering the high reliability of the results, the two-colour chewing gum mixing ability test can be used to assess masticatory performance in children, especially at non-clinical settings. © 2017 John Wiley & Sons Ltd.
Simulation-based Assessment to Reliably Identify Key Resident Performance Attributes.
Blum, Richard H; Muret-Wagstaff, Sharon L; Boulet, John R; Cooper, Jeffrey B; Petrusa, Emil R; Baker, Keith H; Davidyuk, Galina; Dearden, Jennifer L; Feinstein, David M; Jones, Stephanie B; Kimball, William R; Mitchell, John D; Nadelberg, Robert L; Wiser, Sarah H; Albrecht, Meredith A; Anastasi, Amanda K; Bose, Ruma R; Chang, Laura Y; Culley, Deborah J; Fisher, Lauren J; Grover, Meera; Klainer, Suzanne B; Kveraga, Rikante; Martel, Jeffrey P; McKenna, Shannon S; Minehart, Rebecca D; Mitchell, John D; Mountjoy, Jeremi R; Pawlowski, John B; Pilon, Robert N; Shook, Douglas C; Silver, David A; Warfield, Carol A; Zaleski, Katherine L
2018-04-01
Obtaining reliable and valid information on resident performance is critical to patient safety and training program improvement. The goals were to characterize important anesthesia resident performance gaps that are not typically evaluated, and to further validate scores from a multiscenario simulation-based assessment. Seven high-fidelity scenarios reflecting core anesthesiology skills were administered to 51 first-year residents (CA-1s) and 16 third-year residents (CA-3s) from three residency programs. Twenty trained attending anesthesiologists rated resident performances using a seven-point behaviorally anchored rating scale for five domains: (1) formulate a clear plan, (2) modify the plan under changing conditions, (3) communicate effectively, (4) identify performance improvement opportunities, and (5) recognize limits. A second rater assessed 10% of encounters. Scores and variances for each domain, each scenario, and the total were compared. Low domain ratings (1, 2) were examined in detail. Interrater agreement was 0.76; reliability of the seven-scenario assessment was r = 0.70. CA-3s had a significantly higher average total score (4.9 ± 1.1 vs. 4.6 ± 1.1, P = 0.01, effect size = 0.33). CA-3s significantly outscored CA-1s for five of seven scenarios and domains 1, 2, and 3. CA-1s had a significantly higher proportion of worrisome ratings than CA-3s (chi-square = 24.1, P < 0.01, effect size = 1.50). Ninety-eight percent of residents rated the simulations more educational than an average day in the operating room. Sensitivity of the assessment to CA-1 versus CA-3 performance differences for most scenarios and domains supports validity. No differences, by experience level, were detected for two domains associated with reflective practice. Smaller score variances for CA-3s likely reflect a training effect; however, worrisome performance scores for both CA-1s and CA-3s suggest room for improvement.
NASA Technical Reports Server (NTRS)
Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.
2003-01-01
This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of the radiation hardness memories.
Government and industry interactions in the development of clock technology
NASA Technical Reports Server (NTRS)
Hellwig, H.
1981-01-01
It appears likely that everyone in the time and frequency community can agree on goals to be realized through the expenditure of resources. These goals are the same as found in most fields of technology: lower cost, better performance, increased reliability, small size and lower power. Related aspects are examined in the process of clock and frequency standard development. Government and industry are reviewed in a highly interactive role. These interactions include judgements on clock performance, what kind of clock, expenditure of resources, transfer of ideas or hardware concepts from government to industry, and control of production. Successful clock development and production requires a government/industry relationship which is characterized by long-term continuity, multidisciplinary team work, focused funding and a separation of reliability and production oriented tasks from performance improvement/research type efforts.
Designing learning environments to promote student learning: ergonomics in all but name.
Smith, Thomas J
2013-01-01
This report introduces evidence for the conclusion that a common theme underlies almost all proposed solutions for improving the performance of K-12 students, namely their reliance on the design of educational system environments, features and operations. Two categories of design factors impacting such performance are addressed: (1) 9 factors reliably shown to have a strong influence - namely environmental design of classroom and building facilities, longer exposure to learning, cooperative learning designs, early childhood education, teaching quality, nutritional adequacy, participation in physical activity, good physical fitness, and school-community integration; and (2) 11 factors with an equivocal, varied or weak influence - classroom technology, online learning environments, smaller class size, school choice, school funding, school size, school start times, teacher training level, amount of homework, student self-confidence and informal learning. It is concluded that: (1) student learning outcomes, and more broadly the edifice of education itself, are largely defined in terms of an extensive system of design factors and conditions; (2) the time is long overdue for the educational system to acknowledge the central role of E/HF design as the major influence on student performance and learning; and (3) K-12 educators and administrators should emphasize allocation of resources to design factors reliably shown to have a strongly positive impact on student performance, but should treat expenditure on factors with equivocal, varied or weak influence on such performance with more caution and/or skepticism.
Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems
NASA Technical Reports Server (NTRS)
White, Mark; MacNeal, Kristen; Cooper, Mark
2012-01-01
NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but it increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger scaled devices. Higher performance, and lower operating and stand-by power characteristics of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet low power consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.
Stirling Convertor Fasteners Reliability Quantification
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.
2006-01-01
Onboard Radioisotope Power Systems (RPS) being developed for NASA s deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and reduced inventory of radioactive material. Structural fasteners are responsible to maintain structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. Design of fasteners involve variables related to the fabrication, manufacturing, behavior of fasteners and joining parts material, structural geometry of the joining components, size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.
Huang, Zhiheng; Xiong, Hua; Wu, Zhiyong; Conway, Paul; Altmann, Frank
2013-01-01
The dimensions of microbumps in three-dimensional integration reach microscopic scales and thus necessitate a study of the multiscale microstructures in microbumps. Here, we present simulated mesoscale and atomic-scale microstructures of microbumps using phase field and phase field crystal models. Coupled microstructure, mechanical stress, and electromigration modeling was performed to highlight the microstructural effects on the reliability of microbumps. The results suggest that the size and geometry of microbumps can influence both the mesoscale and atomic-scale microstructural formation during solidification. An external stress imposed on the microbump can cause ordered phase growth along the boundaries of the microbump. Mesoscale microstructures formed in the microbumps from solidification, solid state phase separation, and coarsening processes suggest that the microstructures in smaller microbumps are more heterogeneous. Due to the differences in microstructures, the von Mises stress distributions in microbumps of different sizes and geometries vary. In addition, a combined effect resulting from the connectivity of the phase morphology and the amount of interface present in the mesoscale microstructure can influence the electromigration reliability of microbumps. PMID:28788356
NASA Astrophysics Data System (ADS)
Riggs, William R.
1994-05-01
SHARP is a Navy wide logistics technology development effort aimed at reducing the acquisition costs, support costs, and risks of military electronic weapon systems while increasing the performance capability, reliability, maintainability, and readiness of these systems. Lower life cycle costs for electronic hardware are achieved through technology transition, standardization, and reliability enhancement to improve system affordability and availability as well as enhancing fleet modernization. Advanced technology is transferred into the fleet through hardware specifications for weapon system building blocks of standard electronic modules, standard power systems, and standard electronic systems. The product lines are all defined with respect to their size, weight, I/O, environmental performance, and operational performance. This method of defining the standard is very conducive to inserting new technologies into systems using the standard hardware. This is the approach taken thus far in inserting photonic technologies into SHARP hardware. All of the efforts have been related to module packaging; i.e. interconnects, component packaging, and module developments. Fiber optic interconnects are discussed in this paper.
NASA Astrophysics Data System (ADS)
Armstrong, Michael James
Increases in power demands and changes in the design practices of overall equipment manufacturers has led to a new paradigm in vehicle systems definition. The development of unique power systems architectures is of increasing importance to overall platform feasibility and must be pursued early in the aircraft design process. Many vehicle systems architecture trades must be conducted concurrent to platform definition. With an increased complexity introduced during conceptual design, accurate predictions of unit level sizing requirements must be made. Architecture specific emergent requirements must be identified which arise due to the complex integrated effect of unit behaviors. Off-nominal operating scenarios present sizing critical requirements to the aircraft vehicle systems. These requirements are architecture specific and emergent. Standard heuristically defined failure mitigation is sufficient for sizing traditional and evolutionary architectures. However, architecture concepts which vary significantly in terms of structure and composition require that unique failure mitigation strategies be defined for accurate estimations of unit level requirements. Identifying of these off-nominal emergent operational requirements require extensions to traditional safety and reliability tools and the systematic identification of optimal performance degradation strategies. Discrete operational constraints posed by traditional Functional Hazard Assessment (FHA) are replaced by continuous relationships between function loss and operational hazard. These relationships pose the objective function for hazard minimization. Load shedding optimization is performed for all statistically significant failures by varying the allocation of functional capability throughout the vehicle systems architecture. Expressing hazards, and thereby, reliability requirements as continuous relationships with the magnitude and duration of functional failure requires augmentations to the traditional means for system safety assessment (SSA). The traditional two state and discrete system reliability assessment proves insufficient. Reliability is, therefore, handled in an analog fashion: as a function of magnitude of failure and failure duration. A series of metrics are introduced which characterize system performance in terms of analog hazard probabilities. These include analog and cumulative system and functional risk, hazard correlation, and extensions to the traditional component importance metrics. Continuous FHA, load shedding optimization, and analog SSA constitute the SONOMA process (Systematic Off-Nominal Requirements Analysis). Analog system safety metrics inform both architecture optimization (changes in unit level capability and reliability) and architecture augmentation (changes in architecture structure and composition). This process was applied for two vehicle systems concepts (conventional and 'more-electric') in terms of loss/hazard relationships with varying degrees of fidelity. Application of this process shows that the traditional assumptions regarding the structure of the function loss vs. hazard relationship apply undue design bias to functions and components during exploratory design. This bias is illustrated in terms of inaccurate estimations of the system and function level risk and unit level importance. It was also shown that off-nominal emergent requirements must be defined specific to each architecture concept. Quantitative comparisons of architecture specific off-nominal performance were obtained which provide evidence to the need for accurate definition of load shedding strategies during architecture exploratory design. Formally expressing performance degradation strategies in terms of the minimization of a continuous hazard space enhances the system architects ability to accurately predict sizing critical emergent requirements concurrent to architecture definition. Furthermore, the methods and frameworks generated here provide a structured and flexible means for eliciting these architecture specific requirements during the performance of architecture trades.
Hagendorfer, Harald; Kaegi, Ralf; Traber, Jacqueline; Mertens, Stijn F L; Scherrers, Roger; Ludwig, Christian; Ulrich, Andrea
2011-11-14
In this work we discuss about the method development, applicability and limitations of an asymmetric flow field flow fractionation (A4F) system in combination with a multi-detector setup consisting of UV/vis, light scattering, and inductively coupled plasma mass spectrometry (ICPMS). The overall aim was to obtain a size dependent-, element specific-, and quantitative method appropriate for the characterization of metallic engineered nanoparticle (ENP) dispersions. Thus, systematic investigations of crucial method parameters were performed by employing well characterized Au nanoparticles (Au-NPs) as a defined model system. For good separation performance, the A4F flow-, membrane-, and carrier conditions were optimized. To obtain reliable size information, the use of laser light scattering based detectors was evaluated, where an online dynamic light scattering (DLS) detector showed good results for the investigated Au-NP up to a size of 80 nm in hydrodynamic diameter. To adapt large sensitivity differences of the various detectors, as well as to guarantee long term stability and minimum contamination of the mass spectrometer a split-flow concept for coupling ICPMS was evaluated. To test for reliable quantification, the ICPMS signal response of ionic Au standards was compared to that of Au-NP. Using proper stabilization with surfactants, no difference for concentrations of 1-50 μg Au L(-1) in the size range from 5 to 80 nm for citrate stabilized dispersions was observed. However, studies using different A4F channel membranes showed unspecific particle-membrane interaction resulting in retention time shifts and unspecific loss of nanoparticles, depending on the Au-NP system as well as membrane batch and type. Thus, reliable quantification and discrimination of ionic and particular species was performed using ICPMS in combination with ultracentrifugation instead of direct quantification with the A4F multi-detector setup. Figures of merit were obtained, by comparing the results from the multi detector approach outlined above, with results from batch-DLS and transmission electron microscopy (TEM). Furthermore, validation performed with certified NIST Au-NP showed excellent agreement. The developed methods show potential for characterization of other commonly used and important metallic engineered nanoparticles. Copyright © 2011 Elsevier B.V. All rights reserved.
NEPP Evaluation of Automotive Grade Tantalum Chip Capacitors
NASA Technical Reports Server (NTRS)
Sampson, Mike; Brusse, Jay
2018-01-01
Automotive grade tantalum (Ta) chip capacitors are available at lower cost with smaller physical size and higher volumetric efficiency compared to military/space grade capacitors. Designers of high reliability aerospace and military systems would like to take advantage of these attributes while maintaining the high standards for long-term reliable operation they are accustomed to when selecting military-qualified established reliability tantalum chip capacitors (e.g., MIL-PRF-55365). The objective for this evaluation was to assess the long-term performance of off-the-shelf automotive grade Ta chip capacitors (i.e., manufacturer self-qualified per AEC Q-200). Two (2) lots of case size D manganese dioxide (MnO2) cathode Ta chip capacitors from 1 manufacturer were evaluated. The evaluation consisted of construction analysis, basic electrical parameter characterization, extended long-term (2000 hours) life testing and some accelerated stress testing. Tests and acceptance criteria were based upon manufacturer datasheets and the Automotive Electronics Council's AEC Q-200 qualification specification for passive electronic components. As-received a few capacitors were marginally above the specified tolerance for capacitance and ESR. X-ray inspection found that the anodes for some devices may not be properly aligned within the molded encapsulation leaving less than 1 mil thickness of the encapsulation. This evaluation found that the long-term life performance of automotive grade Ta chip capacitors is generally within specification limits suggesting these capacitors may be suitable for some space applications.
Jung, Yong-Gyun; Kim, Hyejin; Lee, Sangyeop; Kim, Suyeoun; Jo, EunJi; Kim, Eun-Geun; Choi, Jungil; Kim, Hyun Jung; Yoo, Jungheon; Lee, Hye-Jeong; Kim, Haeun; Jung, Hyunju; Ryoo, Sungweon; Kwon, Sunghoon
2018-06-05
The Disc Agarose Channel (DAC) system utilizes microfluidics and imaging technologies and is fully automated and capable of tracking single cell growth to produce Mycobacterium tuberculosis (MTB) drug susceptibility testing (DST) results within 3~7 days. In particular, this system can be easily used to perform DSTs without the fastidious preparation of the inoculum of MTB cells. Inoculum effect is one of the major problems that causes DST errors. The DAC system was not influenced by the inoculum effect and produced reliable DST results. In this system, the minimum inhibitory concentration (MIC) values of the first-line drugs were consistent regardless of inoculum sizes ranging from ~10 3 to ~10 8 CFU/mL. The consistent MIC results enabled us to determine the critical concentrations for 12 anti-tuberculosis drugs. Based on the determined critical concentrations, further DSTs were performed with 254 MTB clinical isolates without measuring an inoculum size. There were high agreement rates (96.3%) between the DAC system and the absolute concentration method using Löwenstein-Jensen medium. According to these results, the DAC system is the first DST system that is not affected by the inoculum effect. It can thus increase reliability and convenience for DST of MTB. We expect that this system will be a potential substitute for conventional DST systems.
Reliably detectable flaw size for NDE methods that use calibration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and associated mh18232 POD software gives most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses form artificial flaws used in calibration process to determine reliably detectable flaw size.
Reliably Detectable Flaw Size for NDE Methods that Use Calibration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and associated mh1823 POD software gives most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses form artificial flaws used in calibration process to determine reliably detectable flaw size.
High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.
Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D
2018-05-30
NeuroQuant ® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
Climate and water resource change impacts and adaptation potential for US power supply
Miara, Ariel; Macknick, Jordan E.; Vorosmarty, Charles J.; ...
2017-10-30
Power plants that require cooling currently (2015) provide 85% of electricity generation in the United States. These facilities need large volumes of water and sufficiently cool temperatures for optimal operations, and projected climate conditions may lower their potential power output and affect reliability. We evaluate the performance of 1,080 thermoelectric plants across the contiguous US under future climates (2035-2064) and their collective performance at 19 North American Electric Reliability Corporation (NERC) sub-regions. Joint consideration of engineering interactions with climate, hydrology and environmental regulations reveals the region-specific performance of energy systems and the need for regional energy security and climate-water adaptationmore » strategies. Despite climate-water constraints on individual plants, the current power supply infrastructure shows potential for adaptation to future climates by capitalizing on the size of regional power systems, grid configuration and improvements in thermal efficiencies. Without placing climate-water impacts on individual plants in a broader power systems context, vulnerability assessments that aim to support adaptation and resilience strategies misgauge the extent to which regional energy systems are vulnerable. As a result, climate-water impacts can lower thermoelectric reserve margins, a measure of systems-level reliability, highlighting the need to integrate climate-water constraints on thermoelectric power supply into energy planning, risk assessments, and system reliability management.« less
Climate and water resource change impacts and adaptation potential for US power supply
NASA Astrophysics Data System (ADS)
Miara, Ariel; Macknick, Jordan E.; Vörösmarty, Charles J.; Tidwell, Vincent C.; Newmark, Robin; Fekete, Balazs
2017-11-01
Power plants that require cooling currently (2015) provide 85% of electricity generation in the United States. These facilities need large volumes of water and sufficiently cool temperatures for optimal operations, and projected climate conditions may lower their potential power output and affect reliability. We evaluate the performance of 1,080 thermoelectric plants across the contiguous US under future climates (2035-2064) and their collective performance at 19 North American Electric Reliability Corporation (NERC) sub-regions. Joint consideration of engineering interactions with climate, hydrology and environmental regulations reveals the region-specific performance of energy systems and the need for regional energy security and climate-water adaptation strategies. Despite climate-water constraints on individual plants, the current power supply infrastructure shows potential for adaptation to future climates by capitalizing on the size of regional power systems, grid configuration and improvements in thermal efficiencies. Without placing climate-water impacts on individual plants in a broader power systems context, vulnerability assessments that aim to support adaptation and resilience strategies misgauge the extent to which regional energy systems are vulnerable. Climate-water impacts can lower thermoelectric reserve margins, a measure of systems-level reliability, highlighting the need to integrate climate-water constraints on thermoelectric power supply into energy planning, risk assessments, and system reliability management.
Climate and water resource change impacts and adaptation potential for US power supply
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miara, Ariel; Macknick, Jordan E.; Vorosmarty, Charles J.
Power plants that require cooling currently (2015) provide 85% of electricity generation in the United States. These facilities need large volumes of water and sufficiently cool temperatures for optimal operations, and projected climate conditions may lower their potential power output and affect reliability. We evaluate the performance of 1,080 thermoelectric plants across the contiguous US under future climates (2035-2064) and their collective performance at 19 North American Electric Reliability Corporation (NERC) sub-regions. Joint consideration of engineering interactions with climate, hydrology and environmental regulations reveals the region-specific performance of energy systems and the need for regional energy security and climate-water adaptationmore » strategies. Despite climate-water constraints on individual plants, the current power supply infrastructure shows potential for adaptation to future climates by capitalizing on the size of regional power systems, grid configuration and improvements in thermal efficiencies. Without placing climate-water impacts on individual plants in a broader power systems context, vulnerability assessments that aim to support adaptation and resilience strategies misgauge the extent to which regional energy systems are vulnerable. As a result, climate-water impacts can lower thermoelectric reserve margins, a measure of systems-level reliability, highlighting the need to integrate climate-water constraints on thermoelectric power supply into energy planning, risk assessments, and system reliability management.« less
Validity and reliability of a novel measure of activity performance and participation.
Murgatroyd, Phil; Karimi, Leila
2016-01-01
To develop and evaluate an innovative clinician-rated measure, which produces global numerical ratings of activity performance and participation. Repeated measures study with 48 community-dwelling participants investigating clinical sensibility, comprehensiveness, practicality, inter-rater reliability, responsiveness, sensitivity and concurrent validity with Barthel Index. Important clinimetric characteristics including comprehensiveness and ease of use were rated >8/10 by clinicians. Inter-rater reliability was excellent on the summary scores (intraclass correlation of 0.95-0.98). There was good evidence that the new outcome measure distinguished between known high and low functional scoring groups, including both responsiveness to change and sensitivity at the same time point in numerous tests. Concurrent validity with the Barthel Index was fair to high (Spearman Rank Order Correlation 0.32-0.85, p > 0.05). The new measure's summary scores were nearly twice as responsive to change compared with the Barthel Index. Other more detailed data could also be generated by the new measure. The Activity Performance Measure is an innovative outcome instrument that showed good clinimetric qualities in this initial study. Some of the results were strong, given the sample size, and further trial and evaluation is appropriate. Implications for Rehabilitation The Activity Performance Measure is an innovative outcome measure covering activity performance and participation. In an initial evaluation, it showed good clinimetric qualities including responsiveness to change, sensitivity, practicality, clinical sensibility, item coverage, inter-rater reliability and concurrent validity with the Barthel Index. Further trial and evaluation is appropriate.
Size matters: bigger is faster.
Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E
2009-06-01
A largely unexplored aspect of lexical access in visual word recognition is "semantic size"--namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.
View of Pakistan Atomic Energy Commission towards SMPR's in the light of KANUPP performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huseini, S.D.
1985-01-01
The developing countries in general do not have grid capacities adequate enough to incorporate standard size, economic but rather large nuclear power plants for maximum advantage. Therefore, small and medium size reactors (SMPR) have been and still are, of particular interest to the developing countries in spite of certain known problems with these reactors. Pakistan Atomic Energy Commission (PAEC) has been operating a CANDU type of a small PHWR plant since 1971 when it was connected to the local Karachi grid. This paper describes PAEC's view in the light of KANUPP performance with respect to such factors associated with SMPR'smore » as selection of suitable reactor size and type, its operation in a grid of small capacity, flexibility of operation and its role as a reliable source of electrical power.« less
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Danila, Daniel; Simon, Terrence; Mantell, Susan; Sun, Liyong; Gadeon, David; Qiu, Songgang; Wood, Gary; Kelly, Kevin; McLean, Jeffrey
2007-01-01
An actual-size microfabricated regenerator comprised of a stack of 42 disks, 19 mm diameter and 0.25 mm thick, with layers of microscopic, segmented, involute-shaped flow channels was fabricated and tested. The geometry resembles layers of uniformly-spaced segmented-parallel-plates, except the plates are curved. Each disk was made from electro-plated nickel using the LiGA process. This regenerator had feature sizes close to those required for an actual Stirling engine but the overall regenerator dimensions were sized for the NASA/Sunpower oscillating-flow regenerator test rig. Testing in the oscillating-flow test rig showed the regenerator performed extremely well, significantly better than currently used random-fiber material, producing the highest figures of merit ever recorded for any regenerator tested in that rig over its approximately 20 years of use.
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using signal features of correlation coefficient, amplitude change, and phase change. In addition, probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufactures and under different loading conditions.
Engine System Model Development for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Nelson, Karl W.; Simpson, Steven P.
2006-01-01
In order to design, analyze, and evaluate conceptual Nuclear Thermal Propulsion (NTP) engine systems, an improved NTP design and analysis tool has been developed. The NTP tool utilizes the Rocket Engine Transient Simulation (ROCETS) system tool and many of the routines from the Enabler reactor model found in Nuclear Engine System Simulation (NESS). Improved non-nuclear component models and an external shield model were added to the tool. With the addition of a nearly complete system reliability model, the tool will provide performance, sizing, and reliability data for NERVA-Derived NTP engine systems. A new detailed reactor model is also being developed and will replace Enabler. The new model will allow more flexibility in reactor geometry and include detailed thermal hydraulics and neutronics models. A description of the reactor, component, and reliability models is provided. Another key feature of the modeling process is the use of comprehensive spreadsheets for each engine case. The spreadsheets include individual worksheets for each subsystem with data, plots, and scaled figures, making the output very useful to each engineering discipline. Sample performance and sizing results with the Enabler reactor model are provided including sensitivities. Before selecting an engine design, all figures of merit must be considered including the overall impacts on the vehicle and mission. Evaluations based on key figures of merit of these results and results with the new reactor model will be performed. The impacts of clustering and external shielding will also be addressed. Over time, the reactor model will be upgraded to design and analyze other NTP concepts with CERMET and carbide fuel cores.
Strategy for Developing Expert-System-Based Internet Protocols (TCP/IP)
NASA Technical Reports Server (NTRS)
Ivancic, William D.
1997-01-01
The Satellite Networks and Architectures Branch of NASA's Lewis Research is addressing the issue of seamless interoperability of satellite networks with terrestrial networks. One of the major issues is improving reliable transmission protocols such as TCP over long latency and error-prone links. Many tuning parameters are available to enhance the performance of TCP including segment size, timers and window sizes. There are also numerous congestion avoidance algorithms such as slow start, selective retransmission and selective acknowledgment that are utilized to improve performance. This paper provides a strategy to characterize the performance of TCP relative to various parameter settings in a variety of network environments (i.e. LAN, WAN, wireless, satellite, and IP over ATM). This information can then be utilized to develop expert-system-based Internet protocols.
Suzuki, T; Sato, Y; Sotome, S; Arai, H; Arai, A; Yoshida, H
2017-06-01
This study was designed to investigate the reliability and validity of measurements of finger diameters with a ring gauge. A reliability study enrolled two independent samples (50 participants and seven examiners in Study I; 26 participants and 26 examiners in Study II). The sizes of each participant's little fingers were measured twice with a ring gauge by each examiner. To investigate the validity of the measurements, five hand therapists compared the finger size and hand volume of 30 participants with the ring gauge and with a figure-of-eight technique (Study III). The intra-class correlation coefficient for intra-observer reliability ranged from 0.97 to 0.99 in Study I, and 0.90 to 0.97 in Study II. The intra-class correlation coefficient for inter-observer reliability was 0.95 in Study I and 0.94 in Study II. The validity study showed a Pearson product moment correlation coefficient of 0.75. The ring gauge showed high reliability and validity for measurement of finger size. III, diagnostic.
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
Towards a new protocol of scoliosis assessments and monitoring in clinical practice: A pilot study.
Lukovic, Tanja; Cukovic, Sasa; Lukovic, Vanja; Devedzic, Goran; Djordjevic, Dusica
2015-01-01
Although intensively investigated, the procedures for assessment and monitoring of scoliosis are still a subject of controversies. The aim of this study was to assess validity and reliability of a number of physiotherapeutic measurements that could be used for clinical monitoring of scoliosis. Fifteen healthy (symmetric) subjects were subjected to a set of measurements two times, by two experienced and two inexperienced physiotherapists. Intra-observer and inter-observer reliability of measurements were determined. Following measurements were performed: body height and weight, chest girth in inspirium and expirium, the length of legs, the spine translation, the lateral pelvic tilt, the equality of the shoulders, position of scapulas, the equality of stature triangles, the rib hump, the existence of m. iliopsoas contracture, Fröhner index, the size of lumbar lordosis and the angle of trunk rotation. Intraclass correlation coefficient was high (> 0.8) for majority of measurements when experienced physiotherapists performed them, while inexperienced physiotherapists performed precisely only basic, easy measurements. We showed in this pilot study on healthy subjects, that majority of basic physiotherapeutic measurements are valid and reliable when performed by specialized physiotherapist, and it can be expected that this protocol will gain high value when measurements on subjects with scoliosis are performed.
Power Quality and Reliability Project
NASA Technical Reports Server (NTRS)
Attia, John O.
2001-01-01
One area where universities and industry can link is in the area of power systems reliability and quality - key concepts in the commercial, industrial and public sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of'Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C) a small disadvantage business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated power/Systems Reliability and Curriculum Enhancement Program. The objectives of first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, (c) to provide students with NASA/EPC shadowing and Internship experience. In this work, a course, titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters. About thirty seven has benefited directly from this course. A laboratory accompanying the course was also developed. Four facilities at Prairie View A&M University were surveyed. Some tests that were performed are (i) earth-ground testing, (ii) voltage, amperage and harmonics of various panels in the buildings, (iii) checking the wire sizes to see if they were the right size for the load that they were carrying, (iv) vibration tests to test the status of the engines or chillers and water pumps, (v) infrared testing to the test arcing or misfiring of electrical or mechanical systems.
The inter-rater reliability of estimating the size of burns from various burn area chart drawings.
Wachtel, T L; Berry, C C; Wachtel, E E; Frank, H A
2000-03-01
The accuracy and variability of burn size calculations using four Lund and Browder charts currently in clinical use and two Rule of Nine's diagrams were evaluated. The study showed that variability in estimation increased with burn size initially, plateaued in large burns and then decreased slightly in extensive burns. The Rule of Nine's technique often overestimates the burn size and is more variable, but can be performed somewhat faster than the Lund and Browder method. More burn experience leads to less variability in burn area chart drawing estimates. Irregularly shaped burns and burns on the trunk and thighs had greater variability than less irregularly shaped burns or burns on more defined anatomical parts of the body.
Sauzet, Odile; Peacock, Janet L
2017-07-20
The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
Endoscopic Stone Measurement During Ureteroscopy.
Ludwig, Wesley W; Lim, Sunghwan; Stoianovici, Dan; Matlaga, Brian R
2018-01-01
Currently, stone size cannot be accurately measured while performing flexible ureteroscopy (URS). We developed novel software for ureteroscopic, stone size measurement, and then evaluated its performance. A novel application capable of measuring stone fragment size, based on the known distance of the basket tip in the ureteroscope's visual field, was designed and calibrated in a laboratory setting. Complete URS procedures were recorded and 30 stone fragments were extracted and measured using digital calipers. The novel software program was applied to the recorded URS footage to obtain ureteroscope-derived stone size measurements. These ureteroscope-derived measurements were then compared with the actual-measured fragment size. The median longitudinal and transversal errors were 0.14 mm (95% confidence interval [CI] 0.1, 0.18) and 0.09 mm (95% CI 0.02, 0.15), respectively. The overall software accuracy and precision were 0.17 and 0.15 mm, respectively. The longitudinal and transversal measurements obtained by the software and digital calipers were highly correlated (r = 0.97 and 0.93). Neither stone size nor stone type was correlated with error measurements. This novel method and software reliably measured stone fragment size during URS. The software ultimately has the potential to make URS safer and more efficient.
Sample size requirements for the design of reliability studies: precision consideration.
Shieh, Gwowen
2014-09-01
In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
Teletoxicology: Patient Assessment Using Wearable Audiovisual Streaming Technology.
Skolnik, Aaron B; Chai, Peter R; Dameff, Christian; Gerkin, Richard; Monas, Jessica; Padilla-Jones, Angela; Curry, Steven
2016-12-01
Audiovisual streaming technologies allow detailed remote patient assessment and have been suggested to change management and enhance triage. The advent of wearable, head-mounted devices (HMDs) permits advanced teletoxicology at a relatively low cost. A previously published pilot study supports the feasibility of using the HMD Google Glass® (Google Inc.; Mountain View, CA) for teletoxicology consultation. This study examines the reliability, accuracy, and precision of the poisoned patient assessment when performed remotely via Google Glass®. A prospective observational cohort study was performed on 50 patients admitted to a tertiary care center inpatient toxicology service. Toxicology fellows wore Google Glass® and transmitted secure, real-time video and audio of the initial physical examination to a remote investigator not involved in the subject's care. High-resolution still photos of electrocardiograms (ECGs) were transmitted to the remote investigator. On-site and remote investigators recorded physical examination findings and ECG interpretation. Both investigators completed a brief survey about the acceptability and reliability of the streaming technology for each encounter. Kappa scores and simple agreement were calculated for each examination finding and electrocardiogram parameter. Reliability scores and reliability difference were calculated and compared for each encounter. Data were available for analysis of 17 categories of examination and ECG findings. Simple agreement between on-site and remote investigators ranged from 68 to 100 % (median = 94 %, IQR = 10.5). Kappa scores could be calculated for 11/17 parameters and demonstrated slight to fair agreement for two parameters and moderate to almost perfect agreement for nine parameters (median = 0.653; substantial agreement). The lowest Kappa scores were for pupil size and response to light. On a 100-mm visual analog scale (VAS), mean comfort level was 93 and mean reliability rating was 89 for on-site investigators. For remote users, the mean comfort and reliability ratings were 99 and 86, respectively. The average difference in reliability scores between on-site and remote investigators was 2.6, with the difference increasing as reliability scores decreased. Remote evaluation of poisoned patients via Google Glass® is possible with a high degree of agreement on examination findings and ECG interpretation. Evaluation of pupil size and response to light is limited, likely by the quality of streaming video. Users of Google Glass® for teletoxicology reported high levels of comfort with the technology and found it reliable, though as reported reliability decreased, remote users were most affected. Further study should compare patient-centered outcomes when using HMDs for consultation to those resulting from telephone consultation.
High efficiency low cost monolithic module for SARSAT distress beacons
NASA Technical Reports Server (NTRS)
Petersen, Wendell C.; Siu, Daniel P.
1992-01-01
The program objectives were to develop a highly efficient, low cost RF module for SARSAT beacons; achieve significantly lower battery current drain, amount of heat generated, and size of battery required; utilize MMIC technology to improve efficiency, reliability, packaging, and cost; and provide a technology database for GaAs based UHF RF circuit architectures. Presented in viewgraph form are functional block diagrams of the SARSAT distress beacon and beacon RF module as well as performance goals, schematic diagrams, predicted performances, and measured performances for the phase modulator and power amplifier.
A portable battery for objective, non-obstrusive measures of human performances
NASA Technical Reports Server (NTRS)
Kennedy, R. S.
1984-01-01
The need for a standardized battery of human performance tests to measure the effects of various treatments is pointed out. Progress in such a program is reported. Three batteries are available which differ in length and the number of tests in the battery. All tests are implemented on a portable, lap held, briefcase size microprocessor. Performances measured include: information processing, memory, visual perception, reasoning, and motor skills, programs to determine norms, reliabilities, stabilities, factor structure of tests, comparisons with marker tests, apparatus suitability. Rationale for the battery is provided.
Fritz, Ann‐Kristina; Amrein, Irmgard
2017-01-01
Although most nervous system diseases affect women and men differentially, most behavioral studies using mouse models do not include subjects of both sexes. Many researchers worry that data of female mice may be unreliable due to the estrous cycle. Here, we retrospectively evaluated sex effects on coefficient of variation (CV) in 5,311 mice which had performed the same place navigation protocol in the water‐maze and in 4,554 mice tested in the same open field arena. Confidence intervals for Cohen's d as measure of effect size were computed and tested for equivalence with 0.2 as equivalence margin. Despite the large sample size, only few behavioral parameters showed a significant sex effect on CV. Confidence intervals of effect size indicated that CV was either equivalent or showed a small sex difference at most, accounting for less than 2% of total group to group variation of CV. While female mice were potentially slightly more variable in water‐maze acquisition and in the open field, males tended to perform less reliably in the water‐maze probe trial. In addition to evaluating variability, we also directly compared mean performance of female and male mice and found them to be equivalent in both water‐maze place navigation and open field exploration. Our data confirm and extend other large scale studies in demonstrating that including female mice in experiments does not cause a relevant increase of data variability. Our results make a strong case for including mice of both sexes whenever open field or water‐maze are used in preclinical research. PMID:28654717
Fritz, Ann-Kristina; Amrein, Irmgard; Wolfer, David P
2017-09-01
Although most nervous system diseases affect women and men differentially, most behavioral studies using mouse models do not include subjects of both sexes. Many researchers worry that data of female mice may be unreliable due to the estrous cycle. Here, we retrospectively evaluated sex effects on coefficient of variation (CV) in 5,311 mice which had performed the same place navigation protocol in the water-maze and in 4,554 mice tested in the same open field arena. Confidence intervals for Cohen's d as measure of effect size were computed and tested for equivalence with 0.2 as equivalence margin. Despite the large sample size, only few behavioral parameters showed a significant sex effect on CV. Confidence intervals of effect size indicated that CV was either equivalent or showed a small sex difference at most, accounting for less than 2% of total group to group variation of CV. While female mice were potentially slightly more variable in water-maze acquisition and in the open field, males tended to perform less reliably in the water-maze probe trial. In addition to evaluating variability, we also directly compared mean performance of female and male mice and found them to be equivalent in both water-maze place navigation and open field exploration. Our data confirm and extend other large scale studies in demonstrating that including female mice in experiments does not cause a relevant increase of data variability. Our results make a strong case for including mice of both sexes whenever open field or water-maze are used in preclinical research. © 2017 The Authors. American Journal of Medical Genetics Part C Published by Wiley Periodicals, Inc.
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
BurnCase 3D software validation study: Burn size measurement accuracy and inter-rater reliability.
Parvizi, Daryousch; Giretzlehner, Michael; Wurzer, Paul; Klein, Limor Dinur; Shoham, Yaron; Bohanon, Fredrick J; Haller, Herbert L; Tuca, Alexandru; Branski, Ludwik K; Lumenta, David B; Herndon, David N; Kamolz, Lars-P
2016-03-01
The aim of this study was to compare the accuracy of burn size estimation using the computer-assisted software BurnCase 3D (RISC Software GmbH, Hagenberg, Austria) with that using a 2D scan, considered to be the actual burn size. Thirty artificial burn areas were pre planned and prepared on three mannequins (one child, one female, and one male). Five trained physicians (raters) were asked to assess the size of all wound areas using BurnCase 3D software. The results were then compared with the real wound areas, as determined by 2D planimetry imaging. To examine inter-rater reliability, we performed an intraclass correlation analysis with a 95% confidence interval. The mean wound area estimations of the five raters using BurnCase 3D were in total 20.7±0.9% for the child, 27.2±1.5% for the female and 16.5±0.1% for the male mannequin. Our analysis showed relative overestimations of 0.4%, 2.8% and 1.5% for the child, female and male mannequins respectively, compared to the 2D scan. The intraclass correlation between the single raters for mean percentage of the artificial burn areas was 98.6%. There was also a high intraclass correlation between the single raters and the 2D Scan visible. BurnCase 3D is a valid and reliable tool for the determination of total body surface area burned in standard models. Further clinical studies including different pediatric and overweight adult mannequins are warranted. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
McGough, Ellen L; Lin, Shih-Yin; Belza, Basia; Becofsky, Katie M; Jones, Dina L; Liu, Minhui; Wilcox, Sara; Logsdon, Rebecca G
2017-11-28
There is growing evidence that exercise interventions can mitigate functional decline and reduce fall risk in older adults with Alzheimer disease and related dementias (ADRD). Although physical performance outcome measures have been successfully used in older adults without cognitive impairment, additional research is needed regarding their use with individuals who have ADRD, and who may have difficulty following instructions regarding performance of these measures. The purpose of this scoping review was to identify commonly used physical performance outcome measures, for exercise interventions, that are responsive and reliable in older adults with ADRD. Ultimately, we aimed to provide recommendations regarding the use of outcome measures for individuals with ADRD across several domains of physical performance. A scoping review was conducted to broadly assess physical performance outcome measures used in exercise interventions for older adults with ADRD. Exercise intervention studies that included at least 1 measure of physical performance were included. All physical performance outcome measures were abstracted, coded, and categorized into 5 domains of physical performance: fitness, functional mobility, gait, balance, and strength. Criteria for recommendations were based on (1) the frequency of use, (2) responsiveness, and (3) reliability. Frequency was determined by the number of studies that used the outcome measure per physical performance domain. Responsiveness was assessed via calculated effect size of the outcome measures across studies within physical performance domains. Reliability was evaluated via published studies of psychometric properties. A total of 20 physical performance outcome measures were extracted from 48 articles that met study inclusion criteria. The most frequently used outcome measures were the 6-minute walk test, Timed Up and Go, repeated chair stand tests, short-distance gait speed, the Berg Balance Scale, and isometric strength measures. These outcome measures demonstrated a small, medium, or large effect in at least 50% of the exercise intervention studies. Good to excellent reliability was reported in samples of older adults with mild to moderate dementia. Fitness, functional mobility, gait, balance, and strength represent important domains of physical performance for older adults. The 6-minute walk test, Timed Up and Go, repeated chair stand tests, short-distance gait speed, Berg Balance Scale, and isometric strength are recommended as commonly used and reliable physical performance outcome measures for exercise interventions in older adults with mild to moderate ADRD. Further research is needed on optimal measures for individuals with severe ADRD. The results of this review will aid clinicians and researchers in selecting reliable measures to evaluate physical performance outcomes in response to exercise interventions in older adults with ADRD.
Mizuguchi, Satoshi; Sands, William A; Wassinger, Craig A; Lamont, Hugh S; Stone, Michael H
2015-06-01
Examining a countermovement jump (CMJ) force-time curve related to net impulse might be useful in monitoring athletes' performance. This study aimed to investigate the reliability of alternative net impulse calculation and net impulse characteristics (height, width, rate of force development, shape factor, and proportion) and validate against the traditional calculation in the CMJ. Twelve participants performed the CMJ in two sessions (48 hours apart) for test-retest reliability. Twenty participants were involved for the validity assessment. Results indicated intra-class correlation coefficient (ICC) of ≥ 0.89 and coefficient of variation (CV) of ≤ 5.1% for all of the variables except for rate of force development (ICC = 0.78 and CV = 22.3%). The relationship between the criterion and alternative calculations was r = 1.00. While the difference between them was statistically significant (245.96 ± 63.83 vs. 247.14 ± 64.08 N s, p < 0.0001), the effect size was trivial and deemed practically minimal (d = 0.02). In conclusion, variability of rate of force development will pose a greater challenge in detecting performance changes. Also, the alternative calculation can be used practically in place of the traditional calculation to identify net impulse characteristics and monitor and study athletes' performance in greater depth.
Cohen, Wayne R; Hayes-Gill, Barrie
2014-06-01
To evaluate the performance of external electronic fetal heart rate and uterine contraction monitoring according to maternal body mass index. Secondary analysis of prospective equivalence study. Three US urban teaching hospitals. Seventy-four parturients with a normal term pregnancy. The parent study assessed performance of two methods of external fetal heart rate monitoring (abdominal fetal electrocardiogram and Doppler ultrasound) and of uterine contraction monitoring (electrohystero-graphy and tocodynamometry) compared with internal monitoring with fetal scalp electrode and intrauterine pressure transducer. Reliability of external techniques was assessed by the success rate and positive percent agreement with internal methods. Bland-Altman analysis determined accuracy. We analyzed data from that study according to maternal body mass index. We assessed the relationship between body mass index and monitor performance with linear regression, using body mass index as the independent variable and measures of reliability and accuracy as dependent variables. There was no significant association between maternal body mass index and any measure of reliability or accuracy for abdominal fetal electrocardiogram. By contrast, the overall positive percent agreement for Doppler ultrasound declined (p = 0.042), and the root mean square error from the Bland-Altman analysis increased in the first stage (p = 0.029) with increasing body mass index. Uterine contraction recordings from electrohysterography and tocodynamometry showed no significant deterioration related to maternal body mass index. Accuracy and reliability of fetal heart rate monitoring using abdominal fetal electrocardiogram was unaffected by maternal obesity, whereas performance of ultrasound degraded directly with maternal size. Both electrohysterography and tocodynamometry were unperturbed by obesity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
Reliability and variability of day-to-day vault training measures in artistic gymnastics.
Bradshaw, Elizabeth; Hume, Patria; Calton, Mark; Aisbett, Brad
2010-06-01
Inter-day training reliability and variability in artistic gymnastics vaulting was determined using a customised infra-red timing gate and contact mat timing system. Thirteen Australian high performance gymnasts (eight males and five females) aged 11-23 years were assessed during two consecutive days of normal training. Each gymnast completed a number of vault repetitions per daily session. Inter-day variability of vault run-up velocities (at -18 to -12 m, -12 to -6 m, -6 to -2 m, and -2 to 0 m from the nearest edge of the beat board), and board contact, pre-flight, and table contact times were determined using mixed modelling statistics to account for random (within-subject variability) and fixed effects (gender, number of subjects, number of trials). The difference in the mean (Mdiff) and Cohen's effect sizes for reliability assessment and intra-class correlation coefficients, and the coefficient of variation percentage (CV%) were calculated for variability assessment. Approach velocity (-18 to -2m, CV = 2.4-7.8%) and board contact time (CV = 3.5%) were less variable measures when accounting for day-to-day performance differences, than pre-flight time (CV = 17.7%) and table contact time (CV = 20.5%). While pre-flight and table contact times are relevant training measures, approach velocity and board contact time are more reliable when quantifying vaulting performance.
Quinn, Lori; Khalil, Hanan; Dawes, Helen; Fritz, Nora E; Kegelmeyer, Deb; Kloos, Anne D; Gillard, Jonathan W; Busse, Monica
2013-07-01
Clinical intervention trials in people with Huntington disease (HD) have been limited by a lack of reliable and appropriate outcome measures. The purpose of this study was to determine the reliability and minimal detectable change (MDC) of various outcome measures that are potentially suitable for evaluating physical functioning in individuals with HD. This was a multicenter, prospective, observational study. Participants with pre-manifest and manifest HD (early, middle, and late stages) were recruited from 8 international sites to complete a battery of physical performance and functional measures at 2 assessments, separated by 1 week. Test-retest reliability (using intraclass correlation coefficients) and MDC values were calculated for all measures. Seventy-five individuals with HD (mean age=52.12 years, SD=11.82) participated in the study. Test-retest reliability was very high (>.90) for participants with manifest HD for the Six-Minute Walk Test (6MWT), 10-Meter Walk Test, Timed "Up & Go" Test (TUG), Berg Balance Scale (BBS), Physical Performance Test (PPT), Barthel Index, Rivermead Mobility Index, and Tinetti Mobility Test (TMT). Many MDC values suggested a relatively high degree of inherent variability, particularly in the middle stage of HD. Minimum detectable change values for participants with manifest HD that were relatively low across disease stages were found for the BBS (5), PPT (5), and TUG (2.98). For individuals with pre-manifest HD (n=11), the 6MWT and Four Square Step Test had high reliability and low MDC values. The sample size for the pre-manifest HD group was small. The BBS, PPT, and TUG appear most appropriate for clinical trials aimed at improving physical functioning in people with manifest HD. Further research in people with pre-manifest HD is necessary.
ERIC Educational Resources Information Center
Achilles, C. M.; Krieger, Jean D.; Finn, J. D.; Sharp, Mark
Small classes in grades K-3 boost student academic performance in all subjects and in prosocial behavior. Results are both short- and long-term. One study explored the theory that a major cause behind improved academic achievement involves improved student behavior, which increases student engagement in the classroom. Two other studies provide…
The Reliability of Individualized Load-Velocity Profiles.
Banyard, Harry G; Nosaka, K; Vernon, Alex D; Haff, G Gregory
2017-11-15
This study examined the reliability of peak velocity (PV), mean propulsive velocity (MPV), and mean velocity (MV) in the development of load-velocity profiles (LVP) in the full depth free-weight back squat performed with maximal concentric effort. Eighteen resistance-trained men performed a baseline one-repetition maximum (1RM) back squat trial and three subsequent 1RM trials used for reliability analyses, with 48-hours interval between trials. 1RM trials comprised lifts from six relative loads including 20, 40, 60, 80, 90, and 100% 1RM. Individualized LVPs for PV, MPV, or MV were derived from loads that were highly reliable based on the following criteria: intra-class correlation coefficient (ICC) >0.70, coefficient of variation (CV) ≤10%, and Cohen's d effect size (ES) <0.60. PV was highly reliable at all six loads. Importantly, MPV and MV were highly reliable at 20, 40, 60, 80 and 90% but not 100% 1RM (MPV: ICC=0.66, CV=18.0%, ES=0.10, standard error of the estimate [SEM]=0.04m·s -1 ; MV: ICC=0.55, CV=19.4%, ES=0.08, SEM=0.04m·s -1 ). When considering the reliable ranges, almost perfect correlations were observed for LVPs derived from PV 20-100% (r=0.91-0.93), MPV 20-90% (r=0.92-0.94) and MV 20-90% (r=0.94-0.95). Furthermore, the LVPs were not significantly different (p>0.05) between trials, movement velocities, or between linear regression versus second order polynomial fits. PV 20-100% , MPV 20-90% , and MV 20-90% are reliable and can be utilized to develop LVPs using linear regression. Conceptually, LVPs can be used to monitor changes in movement velocity and employed as a method for adjusting sessional training loads according to daily readiness.
Reliability and validity of two isometric squat tests.
Blazevich, Anthony J; Gill, Nicholas; Newton, Robert U
2002-05-01
The purpose of the present study was first to examine the reliability of isometric squat (IS) and isometric forward hack squat (IFHS) tests to determine if repeated measures on the same subjects yielded reliable results. The second purpose was to examine the relation between isometric and dynamic measures of strength to assess validity. Fourteen male subjects performed maximal IS and IFHS tests on 2 occasions and 1 repetition maximum (1-RM) free-weight squat and forward hack squat (FHS) tests on 1 occasion. The 2 tests were found to be highly reliable (intraclass correlation coefficient [ICC](IS) = 0.97 and ICC(IFHS) = 1.00). There was a strong relation between average IS and 1-RM squat performance, and between IFHS and 1-RM FHS performance (r(squat) = 0.77, r(FHS) = 0.76; p < 0.01), but a weak relation between squat and FHS test performances (r < 0.55). There was also no difference between observed 1-RM values and those predicted by our regression equations. Errors in predicting 1-RM performance were in the order of 8.5% (standard error of the estimate [SEE] = 13.8 kg) and 7.3% (SEE = 19.4 kg) for IS and IFHS respectively. Correlations between isometric and 1-RM tests were not of sufficient size to indicate high validity of the isometric tests. Together the results suggest that IS and IFHS tests could detect small differences in multijoint isometric strength between subjects, or performance changes over time, and that the scores in the isometric tests are well related to 1-RM performance. However, there was a small error when predicting 1-RM performance from isometric performance, and these tests have not been shown to discriminate between small changes in dynamic strength. The weak relation between squat and FHS test performance can be attributed to differences in the movement patterns of the tests
Antoine-Santoni, Thierry; Santucci, Jean-François; de Gentili, Emmanuelle; Silvani, Xavier; Morandini, Frederic
2009-01-01
The paper deals with a Wireless Sensor Network (WSN) as a reliable solution for capturing the kinematics of a fire front spreading over a fuel bed. To provide reliable information in fire studies and support fire fighting strategies, a Wireless Sensor Network must be able to perform three sequential actions: 1) sensing thermal data in the open as the gas temperature; 2) detecting a fire i.e., the spatial position of a flame; 3) tracking the fire spread during its spatial and temporal evolution. One of the great challenges in performing fire front tracking with a WSN is to avoid the destruction of motes by the fire. This paper therefore shows the performance of Wireless Sensor Network when the motes are protected with a thermal insulation dedicated to track a fire spreading across vegetative fuels on a field scale. The resulting experimental WSN is then used in series of wildfire experiments performed in the open in vegetation areas ranging in size from 50 to 1,000 m2. PMID:22454563
Antoine-Santoni, Thierry; Santucci, Jean-François; de Gentili, Emmanuelle; Silvani, Xavier; Morandini, Frederic
2009-01-01
The paper deals with a Wireless Sensor Network (WSN) as a reliable solution for capturing the kinematics of a fire front spreading over a fuel bed. To provide reliable information in fire studies and support fire fighting strategies, a Wireless Sensor Network must be able to perform three sequential actions: 1) sensing thermal data in the open as the gas temperature; 2) detecting a fire i.e., the spatial position of a flame; 3) tracking the fire spread during its spatial and temporal evolution. One of the great challenges in performing fire front tracking with a WSN is to avoid the destruction of motes by the fire. This paper therefore shows the performance of Wireless Sensor Network when the motes are protected with a thermal insulation dedicated to track a fire spreading across vegetative fuels on a field scale. The resulting experimental WSN is then used in series of wildfire experiments performed in the open in vegetation areas ranging in size from 50 to 1,000 m(2).
Measuring the Pain Area: An Intra- and Inter-Rater Reliability Study Using Image Analysis Software.
Dos Reis, Felipe Jose Jandre; de Barros E Silva, Veronica; de Lucena, Raphaela Nunes; Mendes Cardoso, Bruno Alexandre; Nogueira, Leandro Calazans
2016-01-01
Pain drawings have frequently been used for clinical information and research. The aim of this study was to investigate intra- and inter-rater reliability of area measurements performed on pain drawings. Our secondary objective was to verify the reliability when using computers with different screen sizes, both with and without mouse hardware. Pain drawings were completed by patients with chronic neck pain or neck-shoulder-arm pain. Four independent examiners participated in the study. Examiners A and B used the same computer with a 16-inch screen and wired mouse hardware. Examiner C used a notebook with a 16-inch screen and no mouse hardware, and Examiner D used a computer with an 11.6-inch screen and a wireless mouse. Image measurements were obtained using GIMP and NIH ImageJ computer programs. The length of all the images was measured using GIMP software to a set scale in ImageJ. Thus, each marked area was encircled and the total surface area (cm(2) ) was calculated for each pain drawing measurement. A total of 117 areas were identified and 52 pain drawings were analyzed. The intrarater reliability between all examiners was high (ICC = 0.989). The inter-rater reliability was also high. No significant differences were observed when using different screen sizes or when using or not using the mouse hardware. This suggests that the precision of these measurements is acceptable for the use of this method as a measurement tool in clinical practice and research. © 2014 World Institute of Pain.
Reliability considerations for the total strain range version of strainrange partitioning
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y. T.
1984-01-01
A proposed total strainrange version of strainrange partitioning (SRP) to enhance the manner in which SRP is applied to life prediction is considered with emphasis on how advanced reliability technology can be applied to perform risk analysis and to derive safety check expressions. Uncertainties existing in the design factors associated with life prediction of a component which experiences the combined effects of creep and fatigue can be identified. Examples illustrate how reliability analyses of such a component can be performed when all design factors in the SRP model are random variables reflecting these uncertainties. The Rackwitz-Fiessler and Wu algorithms are used and estimates of the safety index and the probablity of failure are demonstrated for a SRP problem. Methods of analysis of creep-fatigue data with emphasis on procedures for producing synoptic statistics are presented. An attempt to demonstrate the importance of the contribution of the uncertainties associated with small sample sizes (fatique data) to risk estimates is discussed. The procedure for deriving a safety check expression for possible use in a design criteria document is presented.
A pragmatic decision model for inventory management with heterogeneous suppliers
NASA Astrophysics Data System (ADS)
Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa
2018-05-01
For enterprises, it is imperative that the trade-off between the cost of inventory and risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler has partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost where the likely scenario lead-time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of an inventory system.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Hoge, Peter A.; Patel, B. M.; Nagpal, Vinod K.
2009-01-01
The primary structure of the Ares I-X Upper Stage Simulator (USS) launch vehicle is constructed of welded mild steel plates. There is some concern over the possibility of structural failure due to welding flaws. It was considered critical to quantify the impact of uncertainties in residual stress, material porosity, applied loads, and material and crack growth properties on the reliability of the welds during its pre-flight and flight. A criterion--an existing maximum size crack at the weld toe must be smaller than the maximum allowable flaw size--was established to estimate the reliability of the welds. A spectrum of maximum allowable flaw sizes was developed for different possible combinations of all of the above listed variables by performing probabilistic crack growth analyses using the ANSYS finite element analysis code in conjunction with the NASGRO crack growth code. Two alternative methods were used to account for residual stresses: (1) The mean residual stress was assumed to be 41 ksi and a limit was set on the net section flow stress during crack propagation. The critical flaw size was determined by parametrically increasing the initial flaw size and detecting if this limit was exceeded during four complete flight cycles, and (2) The mean residual stress was assumed to be 49.6 ksi (the parent material s yield strength) and the net section flow stress limit was ignored. The critical flaw size was determined by parametrically increasing the initial flaw size and detecting if catastrophic crack growth occurred during four complete flight cycles. Both surface-crack models and through-crack models were utilized to characterize cracks in the weld toe.
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading-time between devices: the mean reading-time for the film digitizer was 93 s, it was 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant
Increasing the number of electric-drive vehicles (EDVs) on America's roads has been identified as a strategy with near-term potential for dramatically decreasing the nation's dependence on oil - by the U.S. Department of Energy, the federal cross-agency EV-Everywhere Challenge, and the automotive industry. Mass-market deployment will rely on meeting aggressive technical targets, including improved efficiency and reduced size, weight, and cost. Many of these advances will depend on optimization of thermal management. Effective thermal management is critical to improving the performance and ensuring the reliability of EDVs. Efficient heat removal makes higher power densities and lower operating temperatures possible, andmore » in turn enables cost and size reductions. The National Renewable Energy Laboratory (NREL), along with DOE and industry partners is working to develop cost-effective thermal management solutions to increase device and component power densities. In this presentation, the activities in recent years related to thermal management and reliability of automotive power electronics and electric machines are presented.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant
Increasing the number of electric-drive vehicles (EDVs) on America's roads has been identified as a strategy with near-term potential for dramatically decreasing the nation's dependence on oil -- by the U.S. Department of Energy, the federal cross-agency EV-Everywhere Challenge, and the automotive industry. Mass-market deployment will rely on meeting aggressive technical targets, including improved efficiency and reduced size, weight, and cost. Many of these advances will depend on optimization of thermal management. Effective thermal management is critical to improving the performance and ensuring the reliability of EDVs. Efficient heat removal makes higher power densities and lower operating temperatures possible, andmore » in turn enables cost and size reductions. The National Renewable Energy Laboratory (NREL), along with DOE and industry partners is working to develop cost-effective thermal management solutions to increase device and component power densities. In this presentation, the activities in recent years related to thermal management and reliability of automotive power electronics and electric machines will be presented.« less
NASA Astrophysics Data System (ADS)
Qiang, Tian; Wang, Cong; Kim, Nam-Young
2017-08-01
A diplexer offering the advantages of compact size, high performance, and high reliability is proposed on the basis of advanced integrated passive device (IPD) fabrication techniques. The proposed diplexer is developed by combining a third-order low-pass filter (LPF) and a third-order high-pass filter (HPF), which are designed on the basis of the elliptic function prototype low-pass filter. Primary components, such as inductors and capacitors, are designed and fabricated with high Q-factor and appropriate values, and they are subsequently used to construct a compact diplexer having a chip area of 900 μm × 1100 μm (0.009 λ0 × 0.011 λ0, where λ0 is the guided wavelength). In addition, a small-outline transistor (SOT-6) packaging method is adopted, and reliability tests (including temperature, humidity, vibration, and pressure) are conducted to guarantee long-term stability and commercial success. The packaged measurement results indicate excellent RF performance with insertion losses of 1.39 dB and 0.75 dB at operation bands of 0.9 GHz and 1.8 GHz, respectively. The return loss is lower than 10 dB from 0.5 GHz to 4.0 GHz, while the isolation is higher than 15 dB from 0.5 GHz to 3.0 GHz. Thus, it can be concluded that the proposed SOT-6 packaged diplexer is a promising candidate for GSM/CDMA applications. Synthetic solution of diplexer design, RF performance optimization, fabrication process, packaging, RF response measurement, and reliability test is particularly explained and analyzed in this work.
Cygnus Performance in Subcritical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. Corrow, M. Hansen, D. Henderson, S. Lutz, C. Mitton, et al.
2008-02-01
The Cygnus Dual Beam Radiographic Facility consists of two identical radiographic sources with the following specifications: 4-rad dose at 1 m, 1-mm spot size, 50-ns pulse length, 2.25-MeV endpoint energy. The facility is located in an underground tunnel complex at the Nevada Test Site. Here SubCritical Experiments (SCEs) are performed to study the dynamic properties of plutonium. The Cygnus sources were developed as a primary diagnostic for these tests. Since SCEs are single-shot, high-value events - reliability and reproducibility are key issues. Enhanced reliability involves minimization of failure modes through design, inspection, and testing. Many unique hardware and operational featuresmore » were incorporated into Cygnus to insure reliability. Enhanced reproducibility involves normalization of shot-to-shot output also through design, inspection, and testing. The first SCE to utilize Cygnus, Armando, was executed on May 25, 2004. A year later, April - May 2005, calibrations using a plutonium step wedge were performed. The results from this series were used for more precise interpretation of the Armando data. In the period February - May 2007 Cygnus was fielded on Thermos, which is a series of small-sample plutonium shots using a one-dimensional geometry. Pulsed power research generally dictates frequent change in hardware configuration. Conversely, SCE applications have typically required constant machine settings. Therefore, while operating during the past four years we have accumulated a large database for evaluation of machine performance under highly consistent operating conditions. Through analysis of this database Cygnus reliability and reproducibility on Armando, Step Wedge, and Thermos is presented.« less
Male songbird indicates body size with low-pitched advertising songs.
Hall, Michelle L; Kingma, Sjouke A; Peters, Anne
2013-01-01
Body size is a key sexually selected trait in many animal species. If size imposes a physical limit on the production of loud low-frequency sounds, then low-pitched vocalisations could act as reliable signals of body size. However, the central prediction of this hypothesis--that the pitch of vocalisations decreases with size among competing individuals--has limited support in songbirds. One reason could be that only the lowest-frequency components of vocalisations are constrained, and this may go unnoticed when vocal ranges are large. Additionally, the constraint may only be apparent in contexts when individuals are indeed advertising their size. Here we explicitly consider signal diversity and performance limits to demonstrate that body size limits song frequency in an advertising context in a songbird. We show that in purple-crowned fairy-wrens, Malurus coronatus coronatus, larger males sing lower-pitched low-frequency advertising songs. The lower frequency bound of all advertising song types also has a significant negative relationship with body size. However, the average frequency of all their advertising songs is unrelated to body size. This comparison of different approaches to the analysis demonstrates how a negative relationship between body size and song frequency can be obscured by failing to consider signal design and the concept of performance limits. Since these considerations will be important in any complex communication system, our results imply that body size constraints on low-frequency vocalisations could be more widespread than is currently recognised.
Male Songbird Indicates Body Size with Low-Pitched Advertising Songs
Hall, Michelle L.; Kingma, Sjouke A.; Peters, Anne
2013-01-01
Body size is a key sexually selected trait in many animal species. If size imposes a physical limit on the production of loud low-frequency sounds, then low-pitched vocalisations could act as reliable signals of body size. However, the central prediction of this hypothesis – that the pitch of vocalisations decreases with size among competing individuals – has limited support in songbirds. One reason could be that only the lowest-frequency components of vocalisations are constrained, and this may go unnoticed when vocal ranges are large. Additionally, the constraint may only be apparent in contexts when individuals are indeed advertising their size. Here we explicitly consider signal diversity and performance limits to demonstrate that body size limits song frequency in an advertising context in a songbird. We show that in purple-crowned fairy-wrens, Malurus coronatus coronatus, larger males sing lower-pitched low-frequency advertising songs. The lower frequency bound of all advertising song types also has a significant negative relationship with body size. However, the average frequency of all their advertising songs is unrelated to body size. This comparison of different approaches to the analysis demonstrates how a negative relationship between body size and song frequency can be obscured by failing to consider signal design and the concept of performance limits. Since these considerations will be important in any complex communication system, our results imply that body size constraints on low-frequency vocalisations could be more widespread than is currently recognised. PMID:23437221
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity build ups with the desired circuit size. Therefore, being able to measure reliability instantly and superiorly is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalizationmore » of Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed-up the task of reliability analysis for very large number of nano-electronic circuits. Secondly, by using the developed automated tool, the paper explores into a comparative study involving reliability computation and evaluation by PGM and, BDEC models for different implementations of same functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same functionality circuits with respect to gate error increases, reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lesser reliability measure by BDEC is well explained in this paper using distribution of different signal input patterns overtime for same functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but it also depends on circuit topology, probability of input signals being one or zero and also probability of error on signal lines.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braatz, Brett G.; Cumblidge, Stephen E.; Doctor, Steven R.
2012-12-31
The U.S. Nuclear Regulatory Commission has established the Program to Assess the Reliability of Emerging Nondestructive Techniques (PARENT) as a follow-on to the international cooperative Program for the Inspection of Nickel Alloy Components (PINC). The goal of PINC was to evaluate the capabilities of various nondestructive evaluation (NDE) techniques to detect and characterize surface-breaking primary water stress corrosion cracks in dissimilar-metal welds (DMW) in bottom-mounted instrumentation (BMI) penetrations and small-bore (≈400-mm diameter) piping components. A series of international blind round-robin tests were conducted by commercial and university inspection teams. Results from these tests showed that a combination of conventional andmore » phased-array ultrasound techniques provided the highest performance for flaw detection and depth sizing in dissimilar metal piping welds. The effective detection of flaws in BMIs by eddy current and ultrasound shows that it may be possible to reliably inspect these components in the field. The goal of PARENT is to continue the work begun in PINC and apply the lessons learned to a series of open and blind international round-robin tests that will be conducted on a new set of piping components including large-bore (≈900-mm diameter) DMWs, small-bore DMWs, and BMIs. Open round-robin testing will engage universities and industry worldwide to investigate the reliability of emerging NDE techniques to detect and accurately size flaws having a wide range of lengths, depths, orientations, and locations. Blind round-robin testing will invite testing organizations worldwide, whose inspectors and procedures are certified by the standards for the nuclear industry in their respective countries, to investigate the ability of established NDE techniques to detect and size flaws whose characteristics range from easy to very difficult to detect and size. This paper presents highlights of PINC and reports on the plans and progress for PARENT round-robin tests.« less
Alonso, Carmen; Raynor, Peter C; Goyal, Sagar; Olson, Bernard A; Alba, Anna; Davies, Peter R; Torremorell, Montserrat
2017-05-01
Swine and poultry viruses, such as porcine reproductive and respiratory syndrome virus (PRRSV), porcine epidemic diarrhea virus (PEDV), and highly pathogenic avian influenza virus (HPAIV), are economically important pathogens that can spread via aerosols. The reliability of methods for quantifying particle-associated viruses as well as the size distribution of aerosolized particles bearing these viruses under field conditions are not well documented. We compared the performance of 2 size-differentiating air samplers in disease outbreaks that occurred in swine and poultry facilities. Both air samplers allowed quantification of particles by size, and measured concentrations of PRRSV, PEDV, and HPAIV stratified by particle size both within and outside swine and poultry facilities. All 3 viruses were detectable in association with aerosolized particles. Proportions of positive sampling events were 69% for PEDV, 61% for HPAIV, and 8% for PRRSV. The highest virus concentrations were found with PEDV, followed by HPAIV and PRRSV. Both air collectors performed equally for the detection of total virus concentration. For all 3 viruses, higher numbers of RNA copies were associated with larger particles; however, a bimodal distribution of particles was observed in the case of PEDV and HPAIV.
Wijlemans, Joost W; Deckers, Roel; van den Bosch, Maurice A A J; Seinstra, Beatrijs A; van Stralen, Marijn; van Diest, Paul J; Moonen, Chrit T W; Bartels, Lambertus W
2013-06-01
Volumetric magnetic resonance (MR)-guided high-intensity focused ultrasound (HIFU) is a completely noninvasive image-guided thermal ablation technique. Recently, there has been growing interest in the use of MR-HIFU for noninvasive ablation of malignant tumors. Of particular interest for noninvasive ablation of malignant tumors is reliable treatment monitoring and evaluation of response. At this point, there is limited evidence on the evolution of the ablation region after MR-HIFU treatment. The purpose of the present study was to comprehensively characterize the evolution of the ablation region after volumetric MR-HIFU ablation in a Vx2 tumor model using MR imaging, MR temperature data, and histological data. Vx2 tumors in the hind limb muscle of New Zealand White rabbits (n = 30) were ablated using a clinical MR-HIFU system. Twenty-four animals were available for analyses. Magnetic resonance imaging was performed before and immediately after ablation; MR temperature mapping was performed during the ablation. The animals were distributed over 7 groups with different follow-up lengths. Depending on the group, animals were reimaged and then killed on day 0, 1, 3, 7, 14, 21, or 28 after ablation. For all time points, the size of nonperfused areas (NPAs) on contrast-enhanced T1-weighted (CE-T1-w) images was compared with lethal thermal dose areas (ie, the tissue area that received a thermal dose of 240 equivalent minutes or greater [EM] at 43°C) and with the necrotic tissue areas on histology sections. The NPA on CE-T1-w imaging showed an increase in median size from 266 ± 148 to 392 ± 178 mm(2) during the first day and to 343 ± 170 mm(2) on day 3, followed by a gradual decrease to 113 ± 103 mm(2) on day 28. Immediately after ablation, the NPA was 1.6 ± 1.4 times larger than the area that received a thermal dose of 240 EM or greater in all animals. The median size of the necrotic area on histology was 1.7 ± 0.4 times larger than the NPA immediately after ablation. After 7 days, the size of the NPA was in agreement with the necrotic tissue area on histology (ratio, 1.0 ± 0.2). During the first 3 days after MR-HIFU ablation, the ablation region increases in size, after which it gradually decreases in size. The NPA on CE-T1-w imaging underestimates the extent of tissue necrosis on histology in the initial few days, but after 1 week, the NPA is reliable in delineating the necrotic tissue area. The 240-EM thermal dose limit underestimates the necrotic tissue area immediately after MR-HIFU ablation. Reliable treatment evaluation techniques are particularly important for noninvasive, image-guided tumor ablation. Our results indicate that CE-T1-w imaging is reliable for MR-HIFU treatment evaluation after 1 week.
A microRNA detection system based on padlock probes and rolling circle amplification
Jonstrup, Søren Peter; Koch, Jørn; Kjems, Jørgen
2006-01-01
The differential expression and the regulatory roles of microRNAs (miRNAs) are being studied intensively these years. Their minute size of only 19–24 nucleotides and strong sequence similarity among related species call for enhanced methods for reliable detection and quantification. Moreover, miRNA expression is generally restricted to a limited number of specific cells within an organism and therefore requires highly sensitive detection methods. Here we present a simple and reliable miRNA detection protocol based on padlock probes and rolling circle amplification. It can be performed without specialized equipment and is capable of measuring the content of specific miRNAs in a few nanograms of total RNA. PMID:16888321
A microRNA detection system based on padlock probes and rolling circle amplification.
Jonstrup, Søren Peter; Koch, Jørn; Kjems, Jørgen
2006-09-01
The differential expression and the regulatory roles of microRNAs (miRNAs) are being studied intensively these years. Their minute size of only 19-24 nucleotides and strong sequence similarity among related species call for enhanced methods for reliable detection and quantification. Moreover, miRNA expression is generally restricted to a limited number of specific cells within an organism and therefore requires highly sensitive detection methods. Here we present a simple and reliable miRNA detection protocol based on padlock probes and rolling circle amplification. It can be performed without specialized equipment and is capable of measuring the content of specific miRNAs in a few nanograms of total RNA.
Validity and reliability of a new tool to evaluate handwriting difficulties in Parkinson's disease.
Nackaerts, Evelien; Heremans, Elke; Smits-Engelsman, Bouwien C M; Broeder, Sanne; Vandenberghe, Wim; Bergmans, Bruno; Nieuwboer, Alice
2017-01-01
Handwriting in Parkinson's disease (PD) features specific abnormalities which are difficult to assess in clinical practice since no specific tool for evaluation of spontaneous movement is currently available. This study aims to validate the 'Systematic Screening of Handwriting Difficulties' (SOS-test) in patients with PD. Handwriting performance of 87 patients and 26 healthy age-matched controls was examined using the SOS-test. Sixty-seven patients were tested a second time within a period of one month. Participants were asked to copy as much as possible of a text within 5 minutes with the instruction to write as neatly and quickly as in daily life. Writing speed (letters in 5 minutes), size (mm) and quality of handwriting were compared. Correlation analysis was performed between SOS outcomes and other fine motor skill measurements and disease characteristics. Intrarater, interrater and test-retest reliability were assessed using the intraclass correlation coefficient (ICC) and Spearman correlation coefficient. Patients with PD had a smaller (p = 0.043) and slower (p<0.001) handwriting and showed worse writing quality (p = 0.031) compared to controls. The outcomes of the SOS-test significantly correlated with fine motor skill performance and disease duration and severity. Furthermore, the test showed excellent intrarater, interrater and test-retest reliability (ICC > 0.769 for both groups). The SOS-test is a short and effective tool to detect handwriting problems in PD with excellent reliability. It can therefore be recommended as a clinical instrument for standardized screening of handwriting deficits in PD.
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and determining mechanistic causes underlying physical performance changes. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80º knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
Application Of Ti-Based Self-Formation Barrier Layers To Cu Dual-Damascene Interconnects
NASA Astrophysics Data System (ADS)
Ito, Kazuhiro; Ohmori, Kazuyuki; Kohama, Kazuyuki; Mori, Kenichi; Maekawa, Kazuyoshi; Asai, Koyu; Murakami, Masanori
2010-11-01
Cu interconnects have been used extensively in ULSI devices. However, large resistance-capacitance delay and poor device reliability have been critical issues as the device feature size has reduced to nanometer scale. In order to achieve low resistance and high reliability of Cu interconnects, we have applied a thin Ti-based self-formed barrier (SFB) using Cu(Ti) alloy seed to 45nm-node dual damascene interconnects and evaluated its performance. The line resistance and via resistance decreased significantly, compared with those of conventional Ta/TaN barriers. The stress migration performance was also drastically improved using the SFB process. A performance of time dependent dielectric breakdown revealed superior endurance. These results suggest that the Ti-based SFB process is one of the most promising candidates for advanced Cu interconnects. TEM and X-ray photoelectron spectroscopy observations for characterization of the Ti-based SFB structure were also performed. The Ti-based SFB consisted of mainly amorphous Ti oxides. Amorphous or crystalline Ti compounds such as TiC, TiN, and TiSi formed beneath Cu alloy films, and the formation varied with dielectric.
Investigating Reliabilities of Intraindividual Variability Indicators
ERIC Educational Resources Information Center
Wang, Lijuan; Grimm, Kevin J.
2012-01-01
Reliabilities of the two most widely used intraindividual variability indicators, "ISD[superscript 2]" and "ISD", are derived analytically. Both are functions of the sizes of the first and second moments of true intraindividual variability, the size of the measurement error variance, and the number of assessments within a burst. For comparison,…
Assessment, development, and testing of glass for blast environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glass, Sarah Jill
2003-06-01
Glass can have lethal effects including fatalities and injuries when it breaks and then flies through the air under blast loading (''the glass problem''). One goal of this program was to assess the glass problem and solutions being pursued to mitigate it. One solution to the problem is the development of new glass technology that allows the strength and fragmentation to be controlled or selected depending on the blast performance specifications. For example the glass could be weak and fail, or it could be strong and survive, but it must perform reliably. Also, once it fails it should produce fragmentsmore » of a controlled size. Under certain circumstances it may be beneficial to have very small fragments, in others it may be beneficial to have large fragments that stay together. The second goal of this program was to evaluate the performance (strength, reliability, and fragmentation) of Engineered Stress Profile (ESP) glass under different loading conditions. These included pseudo-static strength and pressure tests and free-field blast tests. The ultimate goal was to provide engineers and architects with a glass whose behavior under blast loading is less lethal. A near-term benefit is a new approach for improving the reliability of glass and modifying its fracture behavior.« less
Novel Strength Test Battery to Permit Evidence-Based Paralympic Classification
Beckman, Emma M.; Newcombe, Peter; Vanlandewijck, Yves; Connick, Mark J.; Tweedy, Sean M.
2014-01-01
Abstract Ordinal-scale strength assessment methods currently used in Paralympic athletics classification prevent the development of evidence-based classification systems. This study evaluated a battery of 7, ratio-scale, isometric tests with the aim of facilitating the development of evidence-based methods of classification. This study aimed to report sex-specific normal performance ranges, evaluate test–retest reliability, and evaluate the relationship between the measures and body mass. Body mass and strength measures were obtained from 118 participants—63 males and 55 females—ages 23.2 years ± 3.7 (mean ± SD). Seventeen participants completed the battery twice to evaluate test–retest reliability. The body mass–strength relationship was evaluated using Pearson correlations and allometric exponents. Conventional patterns of force production were observed. Reliability was acceptable (mean intraclass correlation = 0.85). Eight measures had moderate significant correlations with body size (r = 0.30–61). Allometric exponents were higher in males than in females (mean 0.99 vs 0.30). Results indicate that this comprehensive and parsimonious battery is an important methodological advance because it has psychometric properties critical for the development of evidence-based classification. Measures were interrelated with body size, indicating further research is required to determine whether raw measures require normalization in order to be validly applied in classification. PMID:25068950
Ribeiro, Fernanda; Lépine, Pierre-Alexis; Garceau-Bolduc, Corine; Coats, Valérie; Allard, Étienne; Maltais, François; Saey, Didier
2015-01-01
Background The purpose of this study was to determine and compare the test-retest reliability of quadriceps isokinetic endurance testing at two knee angular velocities in patients with chronic obstructive pulmonary disease (COPD). Methods After one familiarization session, 14 patients with moderate to severe COPD (mean age 65±4 years; forced expiratory volume in 1 second (FEV1) 55%±18% predicted) performed two quadriceps isokinetic endurance tests on two separate occasions within a 5–7-day interval. Quadriceps isokinetic endurance tests consisted of 30 maximal knee extensions at angular velocities of 90° and 180° per second, performed in random order. Test-retest reliability was assessed for peak torque, muscle endurance, work slope, work fatigue index, and changes in FEV1 for dyspnea and leg fatigue from rest to the end of the test. The intraclass correlation coefficient, minimal detectable change, and limits of agreement were calculated. Results High test-retest reliability was identified for peak torque and muscle total work at both velocities. Work fatigue index was considered reliable at 90° per second but not at 180° per second. A lower reliability was identified for dyspnea and leg fatigue scores at both angular velocities. Conclusion Despite a limited sample size, our findings support the use of a 30-maximal repetition isokinetic muscle testing procedure at angular velocities of 90° and 180° per second in patients with moderate to severe COPD. Endurance measurement (total isokinetic work) at 90° per second was highly reliable, with a minimal detectable change at the 95% confidence level of 10%. Peak torque and fatigue index could also be assessed reliably at 90° per second. Evaluation of dyspnea and leg fatigue using the modified Borg scale of perceived exertion was poorly reliable and its clinical usefulness is questionable. These results should be useful in the design and interpretation of future interventions aimed at improving muscle endurance in COPD. PMID:26124656
JPARSS: A Java Parallel Network Package for Grid Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jie; Akers, Walter; Chen, Ying
2002-03-01
The emergence of high speed wide area networks makes grid computinga reality. However grid applications that need reliable data transfer still have difficulties to achieve optimal TCP performance due to network tuning of TCP window size to improve bandwidth and to reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning TCP window size.more » This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificate and SSL. Several experiments will be presented to show that using Java parallelstreams is more effective than tuning TCP window size. In addition a simple architecture using Web services« less
The spark-ignition aircraft piston engine of the future
NASA Technical Reports Server (NTRS)
Stuckas, K. J.
1980-01-01
Areas of advanced technology appropriate to the design of a spark-ignition aircraft piston engine for the late 1980 time period were investigated and defined. Results of the study show that significant improvements in fuel economy, weight and size, safety, reliability, durability and performance may be achieved with a high degree of success, predicated on the continued development of advances in combustion systems, electronics, materials and control systems.
Research@ARL. Imaging & Image Processing. Volume 3, Issue 1
2014-01-01
goal, the focal plane arrays (FPAs) the Army deploys must excel in all areas of performance including thermal sensitivity, image resolution, speed of...are available only in relatively small sizes. Further, the difference in thermal expansion coefficients between a CZT substrate and its silicon (Si...read-out integrated circuitry reduces the reliability of large format FPAs due to repeated thermal cycling. Some in the community believed this
Development of a non-explosive release actuator using shape memory alloy wire.
Yoo, Young Ik; Jeong, Ju Won; Lim, Jae Hyuk; Kim, Kyung-Won; Hwang, Do-Soon; Lee, Jung Ju
2013-01-01
We have developed a newly designed non-explosive release actuator that can replace currently used release devices. The release mechanism is based on a separation mechanism, which relies on segmented nuts and a shape memory alloy (SMA) wire trigger. A quite fast and simple trigger operation is made possible through the use of SMA wire. This actuator is designed to allow a high preload with low levels of shock for the solar arrays of medium-size satellites. After actuation, the proposed device can be easily and instantly reset. Neither replacement, nor refurbishment of any components is necessary. According to the results of a performance test, the release time, preload capacity, and maximum shock level are 50 ms, 15 kN, and 350 G, respectively. In order to increase the reliability of the actuator, more than ten sets of performance tests are conducted. In addition, the proposed release actuator is tested under thermal vacuum and extreme vibration environments. No degradation or damage was observed during the two environment tests, and the release actuator was able to operate successfully. Considering the test results as a whole, we conclude that the proposed non-explosive release actuator can be applied reliably to intermediate-size satellites to replace existing release systems.
Verification measurements of the IRMM-1027 and the IAEA large-sized dried (LSD) spikes.
Jakopič, R; Aregbe, Y; Richter, S; Zuleger, E; Mialle, S; Balsley, S D; Repinc, U; Hiess, J
2017-01-01
In the frame of the accountancy measurements of the fissile materials, reliable determinations of the plutonium and uranium content in spent nuclear fuel are required to comply with international safeguards agreements. Large-sized dried (LSD) spikes of enriched 235 U and 239 Pu for isotope dilution mass spectrometry (IDMS) analysis are routinely applied in reprocessing plants for this purpose. A correct characterisation of these elements is a pre-requirement for achieving high accuracy in IDMS analyses. This paper will present the results of external verification measurements of such LSD spikes performed by the European Commission and the International Atomic Energy Agency.
Paap, Kenneth R; Sawi, Oliver
2016-12-01
Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
TRAMP; The next generation data acquisition for RTP
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Haren, P.C.; Wijnoltz, F.
1992-04-01
The Rijnhuizen Tokamak Project RTP is a medium-sized tokamak experiment, which requires a very reliable data-acquisition system, due to its pulsed nature. Analyzing the limitations of an existing CAMAC-based data-acquisition system showed, that substantial increase of performance and flexibility could best be obtained by the construction of an entirely new system. This paper discusses this system, CALLED TRAMP (Transient Recorder and Amoeba Multi Processor), based on tailor-made transient recorders with a multiprocessor computer system in VME running Amoeba. The performance of TRAMP exceeds the performance of the CAMAC system by a factor of four. The plans to increase the flexibilitymore » and for a further increase of performance are presented.« less
Residual stress measurement in a metal microdevice by micro Raman spectroscopy
NASA Astrophysics Data System (ADS)
Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi
2017-10-01
Large residual stress induced during the electroforming process cannot be ignored to fabricate reliable metal microdevices. Accurate measurement is the basis for studying the residual stress. Influenced by the topological feature size of micron scale in the metal microdevice, residual stress in it can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in the metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in metal materials, micron sized β-SiC particles were mixed in the electroforming solution for codeposition. First, the calculated expression relating the Raman shifts to the induced biaxial stress for β-SiC was derived based on the theory of phonon deformation potentials and Hooke’s law. Corresponding micro electroforming experiments were performed and the residual stress in Ni-SiC composite layer was both measured by x-ray diffraction (XRD) and MRS methods. Then, the validity of the MRS measurements was verified by comparing with the residual stress measured by XRD method. The reliability of the MRS method was further validated by the statistical student’s t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirm that the residual stresses measured by the MRS method are reliable. Besides that, the MRS method, by which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experiment tool for estimating the residual stress in metal microdevice with micron order topological feature size.
Clay, Olivio J; Wadley, Virginia G; Edwards, Jerri D; Roth, David L; Roenker, Daniel L; Ball, Karlene K
2005-08-01
Driving is a complex behavior that requires the utilization of a wide range of individual abilities. Identifying assessments that not only capture individual differences, but also are related to older adults' driving performance would be beneficial. This investigation examines the relationship between the Useful Field of View (UFOV) assessment and objective measures of retrospective or concurrent driving performance, including state-recorded accidents, on-road driving, and driving simulator performance. The PubMed and PsycINFO databases were searched to retrieve eight studies that reported bivariate relationships between UFOV and these objective driving measures. Cumulative meta-analysis techniques were used to combine the effect sizes in an attempt to determine whether the strength of the relationship was stable across studies and to assess whether a sufficient number of studies have been conducted to validate the relationship between UFOV and driving performance. A within-group homogeneity of effect sizes test revealed that the samples could be thought of as being drawn from the same population, Q [7] = 11.29, p (one-tailed) = 0.13. Therefore, the effect sizes of eight studies were combined for the present cumulative meta-analysis. The weighted mean effect size across the studies revealed a large effect (Cohen's d = 0.945), with poorer UFOV performance associated with negative driving outcomes. This relationship was robust across multiple indices of driving performance and several research laboratories. This convergence of evidence across numerous studies using different methodologies confirms the importance of the UFOV assessment as a valid and reliable index of driving performance and safety. Recent prospective studies have confirmed a relationship between UFOV performance and future crashes, further supporting the use of this instrument as a potential screening measure for at-risk older drivers.
Digital templating for THA: a simple computer-assisted application for complex hip arthritis cases.
Hafez, Mahmoud A; Ragheb, Gad; Hamed, Adel; Ali, Amr; Karim, Said
2016-10-01
Total hip arthroplasty (THA) is the standard procedure for end-stage arthritis of the hip. Its technical success relies on preoperative planning of the surgical procedure and virtual setup of the operative performance. Digital hip templating is one methodology of preoperative planning for THA which requires a digital preoperative radiograph and a computer with special software. This is a prospective study involving 23 patients (25 hips) who were candidates for complex THA surgery (unilateral or bilateral). Digital templating is done by radiographic assessment using radiographic magnification correction, leg length discrepancy and correction measurements, acetabular component and femoral component templating as well as neck resection measurement. The overall accuracy for templating the stem implant's exact size is 81%. This percentage increased to 94% when considering sizing within 1 size. Digital templating has proven effective, reliable and essential technique for preoperative planning and accurate prediction of THA sizing and alignment.
Renewal of the Control System and Reliable Long Term Operation of the LHD Cryogenic System
NASA Astrophysics Data System (ADS)
Mito, T.; Iwamoto, A.; Oba, K.; Takami, S.; Moriuchi, S.; Imagawa, S.; Takahata, K.; Yamada, S.; Yanagi, N.; Hamaguchi, S.; Kishida, F.; Nakashima, T.
The Large Helical Device (LHD) is a heliotron-type fusion plasma experimental machine which consists of a fully superconducting magnet system cooled by a helium refrigerator having a total equivalent cooling capacity of 9.2 kW@4.4 K. Seventeenplasma experimental campaigns have been performed successfully since1997 with high reliability of 99%. However, sixteen years have passed from the beginning of the system operation. Improvements are being implementedto prevent serious failures and to pursue further reliability.The LHD cryogenic control system was designed and developed as an open system utilizing latest control equipment of VME controllers and UNIX workstations at the construction time. Howeverthe generation change of control equipment has been advanced. Down-sizing of control deviceshas beenplanned from VME controllers to compact PCI controllers in order to simplify the system configuration and to improve the system reliability. The new system is composed of compact PCI controller and remote I/O connected with EtherNet/IP. Making the system redundant becomes possible by doubling CPU, LAN, and remote I/O respectively. The smooth renewal of the LHD cryogenic controlsystem and the further improvement of the cryogenic system reliability are reported.
Rahman, Azriani Ab; Mohamad, Norsarwany; Imran, Musa Kamarul; Ibrahim, Wan Pauzi Wan; Othman, Azizah; Aziz, Aniza Abd; Harith, Sakinah; Ibrahim, Mohd Ismail; Ariffin, Nor Hashimah; Van Rostenberghe, Hans
2011-01-01
Background: No previous study has assessed the impact of childhood disability on parents and family in the context of Malaysia, and no instrument to measure this impact has previously been available. The objective of this cross-sectional study was to determine the reliability of a Malay version of the PedsQL™ Family Impact Module that measures the impact of children with disabilities (CWD) on their parents and family in a Malaysian context. Methods: The study was conducted in 2009. The questionnaire was translated forward and backward before it was administered to 44 caregivers of CWD to determine the internal consistency reliability. The test for Cronbach’s alpha was performed. Results: The internal consistency reliability was good. The Cronbach’s alpha for all domains was above 0.7, ranging from 0.73 to 0.895. Conclusion: The Malay version of the PedsQL™ Family Impact Module showed evidence of good internal consistency reliability. However, future studies with a larger sample size are necessary before the module can be recommended as a tool to measure the impact of disability on Malay-speaking Malaysian families. PMID:22589674
Nerve ultrasound reliability of upper limbs: Effects of examiner training.
Garcia-Santibanez, Rocio; Dietz, Alexander R; Bucelli, Robert C; Zaidman, Craig M
2018-02-01
Duration of training to reliably measure nerve cross-sectional area with ultrasound is unknown. A retrospective review was performed of ultrasound data, acquired and recorded by 2 examiners-an expert and either a trainee with 2 months (novice) or a trainee with 12 months (experienced) of experience. Data on median, ulnar, and radial nerves were reviewed for 42 patients. Interrater reliability was good and varied most with nerve site but little with experience. Coefficient of variation (CoV) range was 9.33%-22.5%. Intraclass correlation coefficient (ICC) was good to excellent (0.65-95) except ulnar nerve-wrist/forearm and radial nerve-humerus (ICC = 0.39-0.59). Interrater differences did not vary with nerve size or body mass index. Expert-novice and expert-experienced interrater differences and CoV were similar. The ulnar nerve-wrist expert-novice interrater difference decreased with time (r s = -0.68, P = 0.001). A trainee with at least 2 months of experience can reliably measure upper limb nerves. Reliability varies by nerve and location and slightly improves with time. Muscle Nerve 57: 189-192, 2018. © 2017 Wiley Periodicals, Inc.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.
2011-01-01
Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among nonlinear parameters- Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices to mental effort. In non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters was both reliable as well as sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results
Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.
2016-01-01
Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
Takahashi, Chie; Watt, Simon J.
2014-01-01
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. PMID:24592245
Nibali, Maria L; Tombleson, Tom; Brady, Philip H; Wagner, Phillip
2015-10-01
Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes. We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7% ×/÷ 1.10), although jump height was the only variable to display a %CV ≤SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone because it may be sensitive to changes in response to training or fatigue that exceed the TE.
Sekulic, Damir; Pehar, Miran; Krolo, Ante; Spasic, Miodrag; Uljevic, Ognjen; Calleja-González, Julio; Sattler, Tine
2017-08-01
Sekulic, D, Pehar, M, Krolo, A, Spasic, M, Uljevic, O, Calleja-González, J, and Sattler, T. Evaluation of basketball-specific agility: applicability of preplanned and nonplanned agility performances for differentiating playing positions and playing levels. J Strength Cond Res 31(8): 2278-2288, 2017-The importance of agility in basketball is well known, but there is an evident lack of studies examining basketball-specific agility performances in high-level players. The aim of this study was to determine the reliability and discriminative validity of 1 standard agility test (test of preplanned agility [change-of-direction speed] over T course, T-TEST), and 4 newly developed basketball-specific agility tests, in defining playing positions and performance levels in basketball. The study comprised 110 high-level male basketball players (height: 194.92 ± 8.09 cm; body mass: 89.33 ± 10.91 kg; age: 21.58 ± 3.92 years). The variables included playing position (Guard, Forward, Center), performance level (first division vs. second division), anthropometrics (body height, body mass, and percentage of body fat), T-TEST, nonplanned basketball agility test performed on dominant (BBAGILdom) and nondominant sides (BBAGILnond), and a preplanned (change-of-direction speed) basketball agility test performed on dominant (BBCODSdom) and nondominant sides (BBCODSnond). The reliability of agility tests was high (intraclass correlation coefficient of 0.81-0.95). Forwards were most successful in the T-TEST (F test: 13.57; p = 0.01). Guards outperformed Centers in BBCODSdom, BBCODSndom, BBAGILdom, and BBAGILnond (F test: 5.06, p = 0.01; 6.57, 0.01; 6.26, 0.01; 3.37, 0.04, respectively). First division Guards achieved better results than second division Guards in BBCODSdom (t: 2.55; p = 0.02; moderate effect size differences), BBAGILdom, and BBAGILnond (t: 3.04 and 3.06, respectively; both p = 0.01 and moderate effect size differences). First division Centers outperformed second division Centers in BBAGILdom (t: 2.50; p = 0.02; moderate effect size differences). The developed basketball-specific agility tests are applicable when defining position-specific agility. Both preplanned and nonplanned agilities are important qualities in differentiating between Guards of 2 performance levels. The results confirmed the importance of testing basketball-specific nonplanned agility when evaluating the performance level of Centers.
Reliability of hospital cost profiles in inpatient surgery.
Grenda, Tyler R; Krell, Robert W; Dimick, Justin B
2016-02-01
With increased policy emphasis on shifting risk from payers to providers through mechanisms such as bundled payments and accountable care organizations, hospitals are increasingly in need of metrics to understand their costs relative to peers. However, it is unclear whether Medicare payments for surgery can reliably compare hospital costs. We used national Medicare data to assess patients undergoing colectomy, pancreatectomy, and open incisional hernia repair from 2009 to 2010 (n = 339,882 patients). We first calculated risk-adjusted hospital total episode payments for each procedure. We then used hierarchical modeling techniques to estimate the reliability of total episode payments for each procedure and explored the impact of hospital caseload on payment reliability. Finally, we quantified the number of hospitals meeting published reliability benchmarks. Mean risk-adjusted total episode payments ranged from $13,262 (standard deviation [SD] $14,523) for incisional hernia repair to $25,055 (SD $22,549) for pancreatectomy. The reliability of hospital episode payments varied widely across procedures and depended on sample size. For example, mean episode payment reliability for colectomy (mean caseload, 157) was 0.80 (SD 0.18), whereas for pancreatectomy (mean caseload, 13) the mean reliability was 0.45 (SD 0.27). Many hospitals met published reliability benchmarks for each procedure. For example, 90% of hospitals met reliability benchmarks for colectomy, 40% for pancreatectomy, and 66% for incisional hernia repair. Episode payments for inpatient surgery are a reliable measure of hospital costs for commonly performed procedures, but are less reliable for lower volume operations. These findings suggest that hospital cost profiles based on Medicare claims data may be used to benchmark efficiency, especially for more common procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
Probabilistic assessment of dynamic system performance. Part 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belhadj, Mohamed
1993-01-01
Accurate prediction of dynamic system failure behavior can be important for the reliability and risk analyses of nuclear power plants, as well as for their backfitting to satisfy given constraints on overall system reliability, or optimization of system performance. Global analysis of dynamic systems through investigating the variations in the structure of the attractors of the system and the domains of attraction of these attractors as a function of the system parameters is also important for nuclear technology in order to understand the fault-tolerance as well as the safety margins of the system under consideration and to insure a safemore » operation of nuclear reactors. Such a global analysis would be particularly relevant to future reactors with inherent or passive safety features that are expected to rely on natural phenomena rather than active components to achieve and maintain safe shutdown. Conventionally, failure and global analysis of dynamic systems necessitate the utilization of different methodologies which have computational limitations on the system size that can be handled. Using a Chapman-Kolmogorov interpretation of system dynamics, a theoretical basis is developed that unifies these methodologies as special cases and which can be used for a comprehensive safety and reliability analysis of dynamic systems.« less
A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrode and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCC is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to the external stresses, and an empirical function that defines contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation for potential space project applications of multilayer ceramic capacitors (MLCCs) with Ni electrode and BaTiO3 dielectric material requires an in-depth understanding of the MLCCs reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed in this paper. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitors reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
Chen, Xiao; Lu, Bin; Yan, Chao-Gan
2018-01-01
Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Multiple objective optimization in reliability demonstration test
Lu, Lu; Anderson-Cook, Christine Michaela; Li, Mingyang
2016-10-01
Reliability demonstration tests are usually performed in product design or validation processes to demonstrate whether a product meets specified requirements on reliability. For binomial demonstration tests, the zero-failure test has been most commonly used due to its simplicity and use of minimum sample size to achieve an acceptable consumer’s risk level. However, this test can often result in unacceptably high risk for producers as well as a low probability of passing the test even when the product has good reliability. This paper explicitly explores the interrelationship between multiple objectives that are commonly of interest when planning a demonstration test andmore » proposes structured decision-making procedures using a Pareto front approach for selecting an optimal test plan based on simultaneously balancing multiple criteria. Different strategies are suggested for scenarios with different user priorities and graphical tools are developed to help quantify the trade-offs between choices and to facilitate informed decision making. As a result, potential impacts of some subjective user inputs on the final decision are studied to offer insights and useful guidance for general applications.« less
Lean, Premixed-Prevaporized (LPP) combustor conceptual design study
NASA Technical Reports Server (NTRS)
Dickman, R. A.; Dodds, W. J.; Ekstedt, E. E.
1979-01-01
Four combustion systems were designed and sized for the energy efficient engine. A fifth combustor was designed for the cycle and envelope of the twin-spool, high bypass ratio, high pressure ratio turbofan engine. Emission levels, combustion performance, life, and reliability assessments were made for these five combustion systems. Results of these design studies indicate that cruise NOx emission can be reduced by the use of lean, premixed-prevaporaized combustion and airflow modulation.
2012-01-12
fabrication of the composite indicate physical deformities and defects, including entanglement of carbon nanotubes and fused contacts, that are understood...working distance, and spot size, 2.5) of MWCNT array batch of which the composite was made and tested: (a) Entanglements of Individual Nanotubes...electron, photon and phonon) in these materials is critical to their reliable and robust performance, thus accommodating denser circuits 2 and higher
Communication System Architecture for Planetary Exploration
NASA Technical Reports Server (NTRS)
Braham, Stephen P.; Alena, Richard; Gilbaugh, Bruce; Glass, Brian; Norvig, Peter (Technical Monitor)
2001-01-01
Future human missions to Mars will require effective communications supporting exploration activities and scientific field data collection. Constraints on cost, size, weight and power consumption for all communications equipment make optimization of these systems very important. These information and communication systems connect people and systems together into coherent teams performing the difficult and hazardous tasks inherent in planetary exploration. The communication network supporting vehicle telemetry data, mission operations, and scientific collaboration must have excellent reliability, and flexibility.
Intelligent Systems for Power Management and Distribution
NASA Technical Reports Server (NTRS)
Button, Robert M.
2002-01-01
The motivation behind an advanced technology program to develop intelligent power management and distribution (PMAD) systems is described. The program concentrates on developing digital control and distributed processing algorithms for PMAD components and systems to improve their size, weight, efficiency, and reliability. Specific areas of research in developing intelligent DC-DC converters and distributed switchgear are described. Results from recent development efforts are presented along with expected future benefits to the overall PMAD system performance.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
An evaluation of the Meditech M250 and a comparison with other CT scanners.
Greensmith, R; Richardson, R B; Sargood, A J; Stevens, P H; Mackintosh, I P
1985-11-01
The Meditech M250 computerised tomography (CT) machine was evaluated during the first half of 1984. Measurements were made of noise, modulation transfer function, slice width, radiation dose profile, uniformity and linearity of CT number, effective photon energy and parameters relating to machine specification, such as pixel size and scan time. All breakdowns were logged to indicate machine reliability. A comparison with the established EMI CT1010 and CT5005 was made for noise, resolution and multislice radiation dose, as well as the dose efficiency or quality (Q) factor for both head and body modes of operation. The M250 was found to perform to its intended specification with an acceptable level of reliability.
NASA Astrophysics Data System (ADS)
Wang, Li; Li, Feng; Xing, Jian
2017-10-01
In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, size distribution function is modelled as the general Johnson's ? function that can overcome the difficulty of not knowing the exact type beforehand encountered in many real circumstances. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve the PSDs with high reliability and accuracy. Compared with the single ABC algorithm, our proposed algorithm can produce more accurate and robust inversion results while taking almost comparative CPU time over ABC algorithm alone. The superiority of ABC and PS hybridization strategy in terms of reaching a better balance of estimation accuracy and computation effort increases its potentials as an excellent inversion technique for reliable and efficient actual measurement of PSD.
Jeffery, Nicholas W; Gregory, T Ryan
2014-10-01
Crustaceans are enormously diverse both phylogenetically and ecologically, but they remain substantially underrepresented in the existing genome size database. An expansion of this dataset could be facilitated if it were possible to obtain genome size estimates from ethanol-preserved specimens. In this study, two tests were performed in order to assess the reliability of genome size data generated using preserved material. First, the results of estimates based on flash-frozen versus ethanol-preserved material were compared across 37 species of crustaceans that differ widely in genome size. Second, a comparison was made of specimens from a single species that had been stored in ethanol for 1-14 years. In both cases, the use of gill tissue in Feulgen image analysis densitometry proved to be a very viable approach. This finding is of direct relevance to both new studies of field-collected crustaceans as well as potential studies based on existing collections. © 2014 International Society for Advancement of Cytometry.
USDA-ARS?s Scientific Manuscript database
The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...
Bizzocchi, Nicola; Fracchiolla, Francesco; Schwarz, Marco; Algranati, Carlo
2017-01-01
In a radiotherapy center, daily quality assurance (QA) measurements are performed to ensure that the equipment can be safely used for patient treatment on that day. In a pencil beam scanning (PBS) proton therapy center, spot positioning, spot size, range, and dose output are usually verified every day before treatments. We designed, built, and tested a new, reliable, sensitive, and inexpensive phantom, coupled with an array of ionization chambers, for daily QA that reduces the execution times while preserving the reliability of the test. The phantom is provided with 2 pairs of wedges to sample the Bragg peak at different depths to have a transposition on the transverse plane of the depth dose. Three "boxes" are used to check spot positioning and delivered dose. The box thickness helps spread the single spot and to fit a Gaussian profile on a low resolution detector. We tested whether our new QA solution could detect errors larger than our action levels: 1 mm in spot positioning, 2 mm in range, and 10% in spot size. Execution time was also investigated. Our method is able to correctly detect 98% of spots that are actually in tolerance for spot positioning and 99% of spots out of 1 mm tolerance. All range variations greater than the threshold (2 mm) were correctly detected. The analysis performed over 1 month showed a very good repeatability of spot characteristics. The time taken to perform the daily quality assurance is 20 minutes, a half of the execution time of the former multidevice procedure. This "in-house build" phantom substitutes 2 very expensive detectors (a multilayer ionization chamber [MLIC] and a strip chamber, reducing by 5 times the cost of the equipment. We designed, built, and validated a phantom that allows for accurate, sensitive, fast, and inexpensive daily QA procedures in proton therapy with PBS. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Smartphone versus knee ligament arthrometer when size does not matter.
Ferretti, Andrea; Andrea, Ferretti; Valeo, Luigi; Luigi, Valeo; Mazza, Daniele; Daniele, Mazza; Muliere, Luca; Luca, Muliere; Iorio, Paolo; Paolo, Iorio; Giovannetti, Giovanni; Giovanni, Giovannetti; Conteduca, Fabio; Fabio, Conteduca; Iorio, Raffaele; Raffaele, Iorio
2014-10-01
The use of available mechanical methods to measure anterior tibial translation (ATT) in anterior cruciate ligament (ACL)-deficient knees are limited by size and costs. This study evaluated the performance of a portable device based on a downloadable electronic smartphone application to measure ATT in ACL-deficient knees. A specific smartphone application (SmartJoint) was developed for this purpose. Two independent observers nonsequentially measured the amount of ATT during execution of a maximum manual Lachman test in 35 patients with an ACL-deficient knee using KT 1000 and SmartJoint on both involved and uninvolved knees. As each examiner performed the test three times on each knee, a total of 840 measurements were collected. Statistical analysis compared intertest, interobserver and intra-observer reliability using the interclass correlation coefficient (ICC). An ICC > 0.75 indicates excellent reproducibility among measurements. Mean amount of ATT on uninvolved knees was 6.1 mm [standard deviation (SD = 2)] with the KT 1000 and 6.4 mm (SD = 2) with SmartJoint. Mean side-to-side difference was 8.1 mm. (SD = 4) with KT 1000 and 8.3 mm (SD = 3) with SmartJoint. Intertest reliability between the two methods yielded an ICC 0.797 [95 % confidence interval (CI) 0.717-0.857] for the uninvolved knee and of 0.987 (CI 0.981-0.991) for the involved knee. Interobserver ICC for SmartJoint and KT 1000 was 0.957 (CI 0.927-0.976) for the uninvolved knee and 0.992 (CI 0.986-0.996) for the involved knee and 0.973 (CI 0.954-0.985) for the uninvolved knee and 0.989 (CI 0.981-0.994) for involved knee, respectively. The performance of SmartJoint is comparable and highly correlated with measurements obtained from KT 1000. SmartJoint may provide a truly portable, noninvasive, accurate, reliable, inexpensive and widely accessible method to characterize ATT in ACL-deficient knee.
Davies, Emlyn J.; Buscombe, Daniel D.; Graham, George W.; Nimmo-Smith, W. Alex M.
2015-01-01
Substantial information can be gained from digital in-line holography of marine particles, eliminating depth-of-field and focusing errors associated with standard lens-based imaging methods. However, for the technique to reach its full potential in oceanographic research, fully unsupervised (automated) methods are required for focusing, segmentation, sizing and classification of particles. These computational challenges are the subject of this paper, in which we draw upon data collected using a variety of holographic systems developed at Plymouth University, UK, from a significant range of particle types, sizes and shapes. A new method for noise reduction in reconstructed planes is found to be successful in aiding particle segmentation and sizing. The performance of an automated routine for deriving particle characteristics (and subsequent size distributions) is evaluated against equivalent size metrics obtained by a trained operative measuring grain axes on screen. The unsupervised method is found to be reliable, despite some errors resulting from over-segmentation of particles. A simple unsupervised particle classification system is developed, and is capable of successfully differentiating sand grains, bubbles and diatoms from within the surf-zone. Avoiding miscounting bubbles and biological particles as sand grains enables more accurate estimates of sand concentrations, and is especially important in deployments of particle monitoring instrumentation in aerated water. Perhaps the greatest potential for further development in the computational aspects of particle holography is in the area of unsupervised particle classification. The simple method proposed here provides a foundation upon which further development could lead to reliable identification of more complex particle populations, such as those containing phytoplankton, zooplankton, flocculated cohesive sediments and oil droplets.
Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S
2011-01-01
By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ancovas (a form of general linear model, GLM) to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ancova models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ancovas in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000
Validity and reliability of a new tool to evaluate handwriting difficulties in Parkinson’s disease
Nackaerts, Evelien; Heremans, Elke; Smits-Engelsman, Bouwien C. M.; Broeder, Sanne; Vandenberghe, Wim; Bergmans, Bruno; Nieuwboer, Alice
2017-01-01
Background Handwriting in Parkinson’s disease (PD) features specific abnormalities which are difficult to assess in clinical practice since no specific tool for evaluation of spontaneous movement is currently available. Objective This study aims to validate the ‘Systematic Screening of Handwriting Difficulties’ (SOS-test) in patients with PD. Methods Handwriting performance of 87 patients and 26 healthy age-matched controls was examined using the SOS-test. Sixty-seven patients were tested a second time within a period of one month. Participants were asked to copy as much as possible of a text within 5 minutes with the instruction to write as neatly and quickly as in daily life. Writing speed (letters in 5 minutes), size (mm) and quality of handwriting were compared. Correlation analysis was performed between SOS outcomes and other fine motor skill measurements and disease characteristics. Intrarater, interrater and test-retest reliability were assessed using the intraclass correlation coefficient (ICC) and Spearman correlation coefficient. Results Patients with PD had a smaller (p = 0.043) and slower (p<0.001) handwriting and showed worse writing quality (p = 0.031) compared to controls. The outcomes of the SOS-test significantly correlated with fine motor skill performance and disease duration and severity. Furthermore, the test showed excellent intrarater, interrater and test-retest reliability (ICC > 0.769 for both groups). Conclusion The SOS-test is a short and effective tool to detect handwriting problems in PD with excellent reliability. It can therefore be recommended as a clinical instrument for standardized screening of handwriting deficits in PD. PMID:28253374
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, distribution of product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. The sample size tables for most encountered scenarios in practice have also been published for convenient use. Extensive simulations study showed that the distribution of the product method and bootstrapping method have superior performance to the Sobel's method, but the product method was recommended to use in practice in terms of less computation time load compared to the bootstrapping method. A R package has been developed for the product method of sample size determination in mediation longitudinal study design.
Reliable enumeration of malaria parasites in thick blood films using digital image analysis.
Frean, John A
2009-09-23
Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. The requirements of a digital microscope camera, personal computer and good quality staining of slides are potentially reasonably easy to meet.
A simulation model for risk assessment of turbine wheels
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Hage, Richard T.
1991-01-01
A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.
A simulation model for risk assessment of turbine wheels
NASA Astrophysics Data System (ADS)
Safie, Fayssal M.; Hage, Richard T.
A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.
Hybrid propulsion technology program: Phase 1. Volume 3: Thiokol Corporation Space Operations
NASA Technical Reports Server (NTRS)
Schuler, A. L.; Wiley, D. R.
1989-01-01
Three candidate hybrid propulsion (HP) concepts were identified, optimized, evaluated, and refined through an iterative process that continually forced improvement to the systems with respect to safety, reliability, cost, and performance criteria. A full scale booster meeting Advanced Solid Rocket Motor (ASRM) thrust-time constraints and a booster application for 1/4 ASRM thrust were evaluated. Trade studies and analyses were performed for each of the motor elements related to SRM technology. Based on trade study results, the optimum HP concept for both full and quarter sized systems was defined. The three candidate hybrid concepts evaluated are illustrated.
Universal first-order reliability concept applied to semistatic structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable for air and surface vehicles semistatic structures.
Universal first-order reliability concept applied to semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-07-01
A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and reliability selection criterion. The method provides a reliability design factor derived from the reliability criterion which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable for air and surface vehicles semistatic structures.
Breaking Barriers to Low-Cost Modular Inverter Production & Use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogdan Borowy; Leo Casey; Jerry Foshage
2005-05-31
The goal of this cost share contract is to advance key technologies to reduce size, weight and cost while enhancing performance and reliability of Modular Inverter Product for Distributed Energy Resources (DER). Efforts address technology development to meet technical needs of DER market protection, isolation, reliability, and quality. Program activities build on SatCon Technology Corporation inverter experience (e.g., AIPM, Starsine, PowerGate) for Photovoltaic, Fuel Cell, Energy Storage applications. Efforts focused four technical areas, Capacitors, Cooling, Voltage Sensing and Control of Parallel Inverters. Capacitor efforts developed a hybrid capacitor approach for conditioning SatCon's AIPM unit supply voltages by incorporating several typesmore » and sizes to store energy and filter at high, medium and low frequencies while minimizing parasitics (ESR and ESL). Cooling efforts converted the liquid cooled AIPM module to an air-cooled unit using augmented fin, impingement flow cooling. Voltage sensing efforts successfully modified the existing AIPM sensor board to allow several, application dependent configurations and enabling voltage sensor galvanic isolation. Parallel inverter control efforts realized a reliable technique to control individual inverters, connected in a parallel configuration, without a communication link. Individual inverter currents, AC and DC, were balanced in the paralleled modules by introducing a delay to the individual PWM gate pulses. The load current sharing is robust and independent of load types (i.e., linear and nonlinear, resistive and/or inductive). It is a simple yet powerful method for paralleling both individual devices dramatically improves reliability and fault tolerance of parallel inverter power systems. A patent application has been made based on this control technology.« less
NASA Astrophysics Data System (ADS)
Kaplan, J.; Howitt, R. E.; Kroll, S.
2016-12-01
Public financing of public projects is becoming more difficult with growing political and financial pressure to reduce the size and scope of government action. Private provision is possible but is often doomed by under-provision. If however, market-like mechanisms could be incorporated into the solicitation of funds to finance the provision of the good, because, for example, the good is supplied stochastically and is divisible, then we would expect fewer incentives to free ride and greater efficiency in providing the public good. In a controlled computer-based economic experiment, we evaluate two market-like conditions (reliability pricing allocation and self-sizing of the good) that are designed to reduce under-provision. The results suggest that financing an infrastructure project when the delivery is allocated based on reliability pricing rather than historical allocation results in significantly greater price formation efficiency and less free riding whether the project is of a fixed size determined by external policy makers or determined endogenously by the sum of private contributions. When reliability pricing and self-sizing (endogenous) mechanism are used in combination free-riding is reduced the greatest among the tested treatments. Furthermore, and as expected, self-sizing when combined with historical allocations results in the worst level of free-riding. This setting for this treatment creates an incentive to undervalue willingness to pay since very low contributions still return positive earnings as long as enough contributions are raised for a single unit. If everyone perceives everyone else is undervaluing their contribution the incentive grows stronger and we see the greatest degree of free riding among the treatments. Lastly, the results from the analysis suggested that the rebate rule may have encouraged those with willingness to pay values less than the cost of the project to feel confident when contributing more than their willingness to pay and to do so when they faced the endogenously-sized, reliability pricing solicitation since a rebate would likely return them positive earnings. In subsequent research we would like to explore the role of the rebate rule in the effectiveness of reliability pricing and self-sizing in increasing price-formation efficiency and reduce free riding.
Earthquake Damage Assessment Using Very High Resolution Satelliteimagery
NASA Astrophysics Data System (ADS)
Chiroiu, L.; André, G.; Bahoken, F.; Guillande, R.
Various studies using satellite imagery were applied in the last years in order to assess natural hazard damages, most of them analyzing the case of floods, hurricanes or landslides. For the case of earthquakes, the medium or small spatial resolution data available in the recent past did not allow a reliable identification of damages, due to the size of the elements (e.g. buildings or other structures), too small compared with the pixel size. The recent progresses of remote sensing in terms of spatial resolution and data processing makes possible a reliable damage detection to the elements at risk. Remote sensing techniques applied to IKONOS (1 meter resolution) and IRS (5 meters resolution) imagery were used in order to evaluate seismic vulnerability and post earthquake damages. A fast estimation of losses was performed using a multidisciplinary approach based on earthquake engineering and geospatial analysis. The results, integrated into a GIS database, could be transferred via satellite networks to the rescue teams deployed on the affected zone, in order to better coordinate the emergency operations. The methodology was applied to the city of Bhuj and Anjar after the 2001 Gujarat (India) Earthquake.
Cordell, Jacqueline M; Vogl, Michelle L; Wagoner Johnson, Amy J
2009-10-01
While recognized as a promising bone substitute material, hydroxyapatite (HA) has had limited use in clinical settings because of its inherent brittle behavior. It is well established that macropores ( approximately 100 microm) in a HA implant, or scaffold, are required for bone ingrowth, but recent research has shown that ingrowth is enhanced when scaffolds also contain microporosity. HA is sensitive to synthesis and processing parameters and therefore characterization for specific applications is necessary for transition to the clinic. To that end, the mechanical behavior of bulk microporous HA and HA scaffolds with multi-scale porosity (macropores between rods in the range of 250-350 microm and micropores within the rods with average size of either 5.96 microm or 16.2 microm) was investigated in order to determine how strength and reliability were affected by micropore size (5.96 microm versus 16.2 microm). For the bulk microporous HA, strength increased with decreasing micropore size in both bending (19 MPa to 22 MPa) and compression (71 MPa to 110 MPa). To determine strength reliability, the Weibull moduli for the bulk microporous HA were determined. The Weibull moduli for bending increased (became more reliable) with decreasing pore size (7 to 10) while the Weibull moduli for compression decreased (became less reliable) with decreasing pore size (9 to 6). Furthermore, the elastic properties of the bulk microporous HA (elastic modulus of 30 GPa) and the compressive strengths of the HA scaffolds with multi-scale porosity (8 MPa) did not vary with pore size. The mechanisms responsible for the trends observed were discussed.
Lo, Wing-Sze; Ho, Sai-Yin; Wong, Bonny Yee-Man; Mak, Kwok-Kei; Lam, Tai-Hing
2011-06-01
The reliability and validity of Stunkard's Figure Rating Scale (FRS) as a measure of current body size (CBS) was established in Western adolescent girls but not in non-Western population. We examined the validity and test-retest reliability of Stunkard's FRS in assessing CBS among Chinese adolescents. Methods. In a school-based survey in Hong Kong, 5666 adolescents (boys: 45.1%; mean age 14.7 years) provided data on self-reported height and weight, CBS, perceived weight status, and health-related quality of life using the Medical Outcomes Study Short-Form version 2 (SF-12v2). Height and weight were also objectively measured. Spearman's correlation was used to assess construct validity, concurrent validity and test-retest reliability. Convergent and discriminant validity were good: CBS correlated strongly with weight and self-reported/measured BMI, but only weakly with SF-12v2. CBS correlated strongly with perceived weight status, showing concurrent validity. Spearman's correlation (r) for CBS was 0.78 for girls and 0.72 for boys indicating good test-retest reliability. Validity and reliability results did not differ significantly between senior and junior grade adolescents. Our findings support the use of Stunkard's FRS to measure body size among Chinese adolescents.
Performance of salmon fishery portfolios across western North America.
Griffiths, Jennifer R; Schindler, Daniel E; Armstrong, Jonathan B; Scheuerell, Mark D; Whited, Diane C; Clark, Robert A; Hilborn, Ray; Holt, Carrie A; Lindley, Steven T; Stanford, Jack A; Volk, Eric C
2014-12-01
Quantifying the variability in the delivery of ecosystem services across the landscape can be used to set appropriate management targets, evaluate resilience and target conservation efforts. Ecosystem functions and services may exhibit portfolio-type dynamics, whereby diversity within lower levels promotes stability at more aggregated levels. Portfolio theory provides a framework to characterize the relative performance among ecosystems and the processes that drive differences in performance. We assessed Pacific salmon Oncorhynchus spp. portfolio performance across their native latitudinal range focusing on the reliability of salmon returns as a metric with which to assess the function of salmon ecosystems and their services to humans. We used the Sharpe ratio (e.g. the size of the total salmon return to the portfolio relative to its variability (risk)) to evaluate the performance of Chinook and sockeye salmon portfolios across the west coast of North America. We evaluated the effects on portfolio performance from the variance of and covariance among salmon returns within each portfolio, and the association between portfolio performance and watershed attributes. We found a positive latitudinal trend in the risk-adjusted performance of Chinook and sockeye salmon portfolios that also correlated negatively with anthropogenic impact on watersheds (e.g. dams and land-use change). High-latitude Chinook salmon portfolios were on average 2·5 times more reliable, and their portfolio risk was mainly due to low variance in the individual assets. Sockeye salmon portfolios were also more reliable at higher latitudes, but sources of risk varied among the highest performing portfolios. Synthesis and applications . Portfolio theory provides a straightforward method for characterizing the resilience of salmon ecosystems and their services. Natural variability in portfolio performance among undeveloped watersheds provides a benchmark for restoration efforts. Locally and regionally, assessing the sources of portfolio risk can guide actions to maintain existing resilience (protect habitat and disturbance regimes that maintain response diversity; employ harvest strategies sensitive to different portfolio components) or improve restoration activities. Improving our understanding of portfolio reliability may allow for management of natural resources that is robust to ongoing environmental change. Portfolio theory provides a straightforward method for characterizing the resilience of salmon ecosystems and their services. Natural variability in portfolio performance among undeveloped watersheds provides a benchmark for restoration efforts. Locally and regionally, assessing the sources of portfolio risk can guide actions to maintain existing resilience (protect habitat and disturbance regimes that maintain response diversity; employ harvest strategies sensitive to different portfolio components) or improve restoration activities. Improving our understanding of portfolio reliability may allow for management of natural resources that is robust to ongoing environmental change.
Performance of salmon fishery portfolios across western North America
Griffiths, Jennifer R; Schindler, Daniel E; Armstrong, Jonathan B; Scheuerell, Mark D; Whited, Diane C; Clark, Robert A; Hilborn, Ray; Holt, Carrie A; Lindley, Steven T; Stanford, Jack A; Volk, Eric C
2014-01-01
Quantifying the variability in the delivery of ecosystem services across the landscape can be used to set appropriate management targets, evaluate resilience and target conservation efforts. Ecosystem functions and services may exhibit portfolio-type dynamics, whereby diversity within lower levels promotes stability at more aggregated levels. Portfolio theory provides a framework to characterize the relative performance among ecosystems and the processes that drive differences in performance. We assessed Pacific salmon Oncorhynchus spp. portfolio performance across their native latitudinal range focusing on the reliability of salmon returns as a metric with which to assess the function of salmon ecosystems and their services to humans. We used the Sharpe ratio (e.g. the size of the total salmon return to the portfolio relative to its variability (risk)) to evaluate the performance of Chinook and sockeye salmon portfolios across the west coast of North America. We evaluated the effects on portfolio performance from the variance of and covariance among salmon returns within each portfolio, and the association between portfolio performance and watershed attributes. We found a positive latitudinal trend in the risk-adjusted performance of Chinook and sockeye salmon portfolios that also correlated negatively with anthropogenic impact on watersheds (e.g. dams and land-use change). High-latitude Chinook salmon portfolios were on average 2·5 times more reliable, and their portfolio risk was mainly due to low variance in the individual assets. Sockeye salmon portfolios were also more reliable at higher latitudes, but sources of risk varied among the highest performing portfolios. Synthesis and applications. Portfolio theory provides a straightforward method for characterizing the resilience of salmon ecosystems and their services. Natural variability in portfolio performance among undeveloped watersheds provides a benchmark for restoration efforts. Locally and regionally, assessing the sources of portfolio risk can guide actions to maintain existing resilience (protect habitat and disturbance regimes that maintain response diversity; employ harvest strategies sensitive to different portfolio components) or improve restoration activities. Improving our understanding of portfolio reliability may allow for management of natural resources that is robust to ongoing environmental change. Portfolio theory provides a straightforward method for characterizing the resilience of salmon ecosystems and their services. Natural variability in portfolio performance among undeveloped watersheds provides a benchmark for restoration efforts. Locally and regionally, assessing the sources of portfolio risk can guide actions to maintain existing resilience (protect habitat and disturbance regimes that maintain response diversity; employ harvest strategies sensitive to different portfolio components) or improve restoration activities. Improving our understanding of portfolio reliability may allow for management of natural resources that is robust to ongoing environmental change. PMID:25552746
Evaluation of Ventricle Size Measurements in Infants by Pediatric Emergency Medicine Physicians.
Halm, Brunhild M; Leone, Tina A; Chaudoin, Lindsey T; McKinley, Kenneth W; Ruzal-Shapiro, Carrie; Franke, Adrian A; Tsze, Daniel S
2018-06-05
The identification of hydrocephalus in infants by pediatric emergency medicine (PEM) physicians using cranial point-of-care ultrasound (POCUS) has not been evaluated. We aimed to conduct a pilot/proof-of-concept study to evaluate whether PEM physicians can identify hydrocephalus (anterior horn width >5 mm) in 15 infants (mean 69 ± 42 days old) from the neonatal intensive care unit using POCUS. Our exploratory aims were to determine the test characteristics of cranial POCUS performed by PEM physicians for diagnosing hydrocephalus and the interrater reliability between measurements made by the PEM physicians and the radiologist. Depending on the availability, 1 or 2 PEM physicians performed a cranial POCUS through the open anterior fontanel for each infant after a 30-minute didactic lecture to determine the size of the left and right ventricles by measuring the anterior horn width at the foramen of Monroe in coronal view. Within 1 week, an ultrasound (US) technologist performed a cranial US and a radiologist determined the ventricle sizes from the US images; these measurements were the criterion standard. A radiologist determined 12 of the 30 ventricles as hydrocephalic. The sensitivity and specificity of the PEM physicians performed cranial POCUS was 66.7% (95% confidence interval [CI], 34.9%-90.1%) and 94.4% (95% CI, 72.7%-99.9%), whereas the positive and negative predictive values were 88.9% (95% CI, 53.3%-98.2%) and 81.0% (95% CI, 65.5%-90.5%), respectively. The interrater reliability between the PEM physician's and radiologist's measurements was r = 0.91. The entire POCUS examinations performed by the PEM physicians took an average of 1.5 minutes. The time between the cranial POCUS and the radiology US was, on average, 4 days. While the PEM physicians in our study were able to determine the absence of hydrocephalus in infants with high specificity using cranial POCUS, there was insufficient evidence to support the use of this modality for identifying hydrocephalus. Future studies with more participants are warranted to accurately determine test characteristics.
ERIC Educational Resources Information Center
Helms, LuAnn Sherbeck
This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…
Hasa, Dritan; Giacobbe, Carlotta; Perissutti, Beatrice; Voinovich, Dario; Grassi, Mario; Cervellino, Antonio; Masciocchi, Norberto; Guagliardi, Antonietta
2016-09-06
Microcrystalline vinpocetine, coground with cross-linked polyvinylpyrrolidone, affords hybrids containing nanosized drug nanocrystals, the size and size distributions of which depend on milling times and drug-to-polymer weight ratios. Using an innovative approach to microstructural characterization, we analyzed wide-angle X-ray total scattering data by the Debye function analysis and demonstrated the possibility to characterize pharmaceutical solid dispersions obtaining a reliable quantitative view of the physicochemical status of the drug dispersed in an amorphous carrier. The microstructural properties derived therefrom have been successfully employed in reconciling the enigmatic difference in behavior between in vitro and in vivo solubility tests performed on nanosized vinpocetine embedded in a polymeric matrix.
Microgrid Design Toolkit (MDT) User Guide Software v1.2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eddy, John P.
2017-08-01
The Microgrid Design Toolkit (MDT) supports decision analysis for new ("greenfield") microgrid designs as well as microgrids with existing infrastructure. The current version of MDT includes two main capabilities. The first capability, the Microgrid Sizing Capability (MSC), is used to determine the size and composition of a new, grid connected microgrid in the early stages of the design process. MSC is focused on developing a microgrid that is economically viable when connected to the grid. The second capability is focused on designing a microgrid for operation in islanded mode. This second capability relies on two models: the Technology Management Optimizationmore » (TMO) model and Performance Reliability Model (PRM).« less
Vincent, Mary Anne; Sheriff, Susan; Mellott, Susan
2015-02-01
High-fidelity simulation has become a growing educational modality among institutions of higher learning ever since the Institute of Medicine recommended that it be used to improve patient safety in 2000. However, there is limited research on the effect of high-fidelity simulation on psychomotor clinical performance improvement of undergraduate nursing students being evaluated by experts using reliable and valid appraisal instruments. The purpose of this integrative review and meta-analysis is to explore what researchers have established about the impact of high-fidelity simulation on improving the psychomotor clinical performance of undergraduate nursing students. Only eight of the 1120 references met inclusion criteria. A meta-analysis using Hedges' g to compute the effect size and direction of impact yielded a range of -0.26 to +3.39. A positive effect was shown in seven of eight studies; however, there were five different research designs and six unique appraisal instruments used among these studies. More research is necessary to determine if high-fidelity simulation improves psychomotor clinical performance in undergraduate nursing students. Nursing programs from multiple sites having a standardized curriculum and using the same appraisal instruments with established reliability and validity are ideal for this work.
The reliability of the Australasian Triage Scale: a meta-analysis
Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir
2015-01-01
BACKGROUND: Although the Australasian Triage Scale (ATS) has been developed two decades ago, its reliability has not been defined; therefore, we present a meta-analyis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported samples size, reliability coefficients, and adequate description of the ATS reliability assessment. The guidelines for reporting reliability and agreement studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method of moment’s estimator. RESULTS: Six studies were included in this study at last. Pooled coefficient for the ATS was substantial 0.428 (95%CI 0.340–0.509). The rate of mis-triage was less than fifty percent. The agreement upon the adult version is higher than the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs more development to reach an almost perfect agreement. PMID:26056538
A large flat panel multifunction display for military and space applications
NASA Astrophysics Data System (ADS)
Pruitt, James S.
1992-09-01
A flat panel multifunction display (MFD) that offers the size and reliability benefits of liquid crystal display technology while achieving near-CRT display quality is presented. Display generation algorithms that provide exceptional display quality are being implemented in custom VLSI components to minimize MFD size. A high-performance processor converts user-specified display lists to graphics commands used by these components, resulting in high-speed updates of two-dimensional and three-dimensional images. The MFD uses the MIL-STD-1553B data bus for compatibility with virtually all avionics systems. The MFD can generate displays directly from display lists received from the MIL-STD-1553B bus. Complex formats can be stored in the MFD and displayed using parameters from the data bus. The MFD also accepts direct video input and performs special processing on this input to enhance image quality.
Bubble performance of a novel dissolved air flotation(DAF) unit.
Chen, Fu-tai; Peng, Feng-xian; Wu, Xiao-qing; Luan, Zhao-kun
2004-01-01
ES-DAF, a novel DAF with low cost, high reliability and easy controllability, was studied. Without a costly air saturator, ES-DAF consists of an ejector and a static mixer between the pressure side and suction side of the recycle rotary pump. The bubble size distribution in this novel unit was studied in detail by using a newly developed CCD imagination through a microscope. Compared with M-DAF under the same saturation pressure, ES-DAF can produce smaller bubble size and higher bubble volume concentration, especially in lower pressure. In addition, the bubble size decreases with the increase of reflux ratio or decrease of superficial air-water ratio. These results suggested that smaller bubbles will be formed when the initial number of nucleation sites increases by enhancing the turbulence intensity in the saturation system.
Brayton heat exchange unit development program
NASA Technical Reports Server (NTRS)
Morse, C. J.; Richard, C. E.; Duncan, J. D.
1971-01-01
A Brayton Heat Exchanger Unit (BHXU), consisting of a recuperator, a heat sink heat exchanger and a gas ducting system, was designed, fabricated, and tested. The design was formulated to provide a high performance unit suitable for use in a long-life Brayton-cycle powerplant. A parametric analysis and design study was performed to establish the optimum component configurations to achieve low weight and size and high reliability, while meeting the requirements of high effectiveness and low pressure drop. Layout studies and detailed mechanical and structural design were performed to obtain a flight-type packaging arrangement. Evaluation testing was conducted from which it is estimated that near-design performance can be expected with the use of He-Xe as the working fluid.
Performance, optimization, and latest development of the SRI family of rotary cryocoolers
NASA Astrophysics Data System (ADS)
Dovrtel, Klemen; Megušar, Franc
2017-05-01
In this paper the SRI family of Le-tehnika rotary cryocoolers is presented (SRI401, SRI423/SRI421 and SRI474). The Stirling coolers cooling power range starts from 0.25W to 0.75W at 77K with available temperature range from 60K to 150K and are fitted to typical dewar detector sizes and powers supply voltages. The DDCA performance optimizing procedure is presented. The procedure includes cooler steady state performance mapping and optimization and cooldown optimization. The current cryogenic performance status and reliability evaluation method and figures are presented on the existing and new units. The latest improved SRI401 demonstrated MTTF close to 25'000 hours and the test is still on going.
2012-01-01
Background For parasites with complex life cycles, size at transmission can impact performance in the next host, thereby coupling parasite phenotypes in the two consecutive hosts. However, a handful of studies with parasites, and numerous studies with free-living, complex-life-cycle animals, have found that larval size correlates poorly with fitness under particular conditions, implying that other traits, such as physiological or ontogenetic variation, may predict fitness more reliably. Using the tapeworm Schistocephalus solidus, we evaluated how parasite size, age, and ontogeny in the copepod first host interact to determine performance in the stickleback second host. Methods We raised infected copepods under two feeding treatments (to manipulate parasite growth), and then exposed fish to worms of two different ages (to manipulate parasite ontogeny). We assessed how growth and ontogeny in copepods affected three measures of fitness in fish: infection probability, growth rate, and energy storage. Results Our main, novel finding is that the increase in fitness (infection probability and growth in fish) with larval size and age observed in previous studies on S. solidus seems to be largely mediated by ontogenetic variation. Worms that developed rapidly (had a cercomer after 9 days in copepods) were able to infect fish at an earlier age, and they grew to larger sizes with larger energy reserves in fish. Infection probability in fish increased with larval size chiefly in young worms, when size and ontogeny are positively correlated, but not in older worms that had essentially completed their larval development in copepods. Conclusions Transmission to sticklebacks as a small, not-yet-fully developed larva has clear costs for S. solidus, but it remains unclear what prevents the evolution of faster growth and development in this species. PMID:22564512
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
Management of colorectal emergencies: percutaneous abscess drainage.
Brusciano, L; Maffettone, V; Napolitano, V; Izzo, G; Rossetti, G; Izzo, D; Russo, F; Russo, G; del Genio, G; del Genio, A
2004-01-01
Pelvic abscesses represent the most frequent complications of colorectal surgery. Percutaneous CT or US guided drainage can be an alternative to surgical drainage that is associated to a significant mortality rate. In the current study results of PAD, performed in 39 patients with pelvic or abdominopelvic abscesses were reviewed in order to evaluate reliability of such procedure. Major part of the collections 33/39 (85%) developed after resective colorectal surgery, and 20/39 (51%) were associated to anastomotic fistula; 22/39 (56%) were poorly defined; 16/39 (41%) were multiloculated; 16/39 (41%) had a stool contamination, 23/39 (58%) were greater than 10 cm; 14/39 (35%) were multiple. Thirty-five patients (89.7%) healed, despite high number of complex abscesses and complete resolution of sepsis was achieved in 5.1 +/- 2.9 days. CT proved to be the most reliable tool in assessing distinctive features of collections as well as in identifying the best route for drainage. Adequate size of the catheter was essential to get an effective drainage. In particular, large sized catheter (> 20 Fr) had to be used to drain collections associated to anastomotic fistulas with stool contamination. In four elderly neoplastic patients with chronic illnesses (10%), only a single small sized catheter could be positioned, because of patients poor compliance and PAD was inaffective. Nevertheless even those patients got a partial resolution of the sepsis and their general conditions markedly improved, so that they were able to underwent successful surgical drainage. In conclusion PAD is a safe and reliable tool that can be employed as an alternative to surgical drainage at least as first measure, even if complex pelvic abscesses are found.
Grillo, Federica; Valle, Luca; Ferone, Diego; Albertelli, Manuela; Brisigotti, Maria Pia; Cittadini, Giuseppe; Vanoli, Alessandro; Fiocca, Roberto; Mastracci, Luca
2017-09-01
Ki-67 heterogeneity can impact on gastroenteropancreatic neuroendocrine tumor grade assignment, especially when tissue is scarce. This work is aimed at devising adequacy criteria for grade assessment in biopsy specimens. To analyze the impact of biopsy size on reliability, 360 virtual biopsies of different thickness and lengths were constructed. Furthermore, to estimate the mean amount of non-neoplastic tissue component present in biopsies, 28 real biopsies were collected, the non-neoplastic components (fibrosis and inflammation) quantified and the effective area of neoplastic tissue calculated for each biopsy. Heterogeneity of Ki-67 distribution, G2 tumors and biopsy size all play an important role in reducing the reliability of biopsy samples in Ki-67-based grade assignment. In particular in G2 cases, 59.9% of virtual biopsies downgraded the tumor and the smaller the biopsy, the more frequent downgrading occurs. In real biopsies the presence of non-neoplastic tissue reduced the available total area by a mean of 20%. By coupling the results from these two different approaches we show that both biopsy size and non-neoplastic component must be taken into account for biopsy adequacy. In particular, we can speculate that if the minimum biopsy area, necessary to confidently (80% concordance) grade gastro-entero-pancreatic neuroendocrine tumors on virtual biopsies ranges between 15 and 30 mm 2 , and if real biopsies are on average composed of only 80% of neoplastic tissue, then biopsies with a surface area not <12 mm 2 should be performed; using 18G needles, this corresponds to a minimum total length of 15 mm.
Chin, Esther Y; Nelson, Lindsay D; Barr, William B; McCrory, Paul; McCrea, Michael A
2016-09-01
The Sport Concussion Assessment Tool-3 (SCAT3) facilitates sideline clinical assessments of concussed athletes. Yet, there is little published research on clinically relevant metrics for the SCAT3 as a whole. We documented the psychometric properties of the major SCAT3 components (symptoms, cognition, balance) and derived clinical decision criteria (ie, reliable change score cutoffs and normative conversation tables) for clinicians to apply to cases with and without available preinjury baseline data. Cohort study (diagnosis); Level of evidence, 2. High school and collegiate athletes (N = 2018) completed preseason baseline evaluations including the SCAT3. Re-evaluations of 166 injured athletes and 164 noninjured controls were performed within 24 hours of injury and at 8, 15, and 45 days after injury. Analyses focused on predictors of baseline performance, test-retest reliability, and sensitivity and specificity of the SCAT3 using either single postinjury cutoffs or reliable change index (RCI) criteria derived from this sample. Athlete sex, level of competition, attention-deficit/hyperactivity disorder (ADHD), learning disability (LD), and estimated verbal intellectual ability (but not concussion history) were associated with baseline scores on ≥1 SCAT3 components (small to moderate effect sizes). Female sex, high school level of competition (vs college), and ADHD were associated with higher baseline symptom ratings (d = 0.25-0.32). Male sex, ADHD, and LD were associated with lower baseline Standardized Assessment of Concussion (SAC) scores (d = 0.28-0.68). Male sex, high school level of competition, ADHD, and LD were associated with poorer baseline Balance Error Scoring System (BESS) performance (d = 0.14-0.26). After injury, the symptom checklist manifested the largest effect size at the 24-hour assessment (d = 1.52), with group differences diminished but statistically significant at day 8 (d = 0.39) and nonsignificant at day 15. Effect sizes for the SAC and BESS were small to moderate at 24 hours (SAC: d = -0.36; modified BESS: d = 0.46; full BESS: d = 0.51) and became nonsignificant at day 8 (SAC) and day 15 (BESS). Receiver operating characteristic curve analyses demonstrated a stronger discrimination for symptoms (area under the curve [AUC] = 0.86) than cognitive and balance measures (AUCs = 0.58 and 0.62, respectively), with comparable discrimination of each SCAT3 component using postinjury scores alone versus baseline-adjusted scores (P = .71-.90). Normative conversion tables and RCI criteria were created to facilitate the use of the SCAT3 both with and without baseline test results. Individual predictors should be taken into account when interpreting the SCAT3. The normative conversion tables and RCIs presented can be used to help interpret concussed athletes' performance both with and without baseline data, given the comparability of the 2 interpretative approaches. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
McCurdy, David R.; Krivanek, Thomas M.; Roche, Joseph M.; Zinolabedini, Reza
2006-01-01
The concept of a human rated transport vehicle for various near earth missions is evaluated using a liquid hydrogen fueled Bimodal Nuclear Thermal Propulsion (BNTP) approach. In an effort to determine the preliminary sizing and optimal propulsion system configuration, as well as the key operating design points, an initial investigation into the main system level parameters was conducted. This assessment considered not only the performance variables but also the more subjective reliability, operability, and maintainability attributes. The SIZER preliminary sizing tool was used to facilitate rapid modeling of the trade studies, which included tank materials, propulsive versus an aero-capture trajectory, use of artificial gravity, reactor chamber operating pressure and temperature, fuel element scaling, engine thrust rating, engine thrust augmentation by adding oxygen to the flow in the nozzle for supersonic combustion, and the baseline turbopump configuration to address mission redundancy and safety requirements. A high level system perspective was maintained to avoid focusing solely on individual component optimization at the expense of system level performance, operability, and development cost.
Thompson, James J
2016-01-01
Summative didactic evaluation often involves multiple choice questions which are then aggregated into exam scores, course scores, and cumulative grade point averages. To be valid, each of these levels should have some relationship to the topic tested (dimensionality) and be sufficiently reproducible between persons (reliability) to justify student ranking. Evaluation of dimensionality is difficult and is complicated by the classic observation that didactic performance involves a generalized component (g) in addition to subtest specific factors. In this work, 183 students were analyzed over two academic years in 13 courses with 44 exams and 3352 questions for both accuracy and speed. Reliability at all levels was good (>0.95). Assessed by bifactor analysis, g effects dominated most levels resulting in essential unidimensionality. Effect sizes on predicted accuracy and speed due to nesting in exams and courses was small. There was little relationship between person ability and person speed. Thus, the hierarchical grading system appears warrented because of its g-dependence.
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size and optimize a storage facility with a bypass. Chance constraints are introduced to explicitly treat reliability in terms of an appropriate value from an inverse cumulative distribution function. Details of deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed for optimizing the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in ability to predict reliability and to account for serial correlations are noted in the model, which is concluded useful for narrowing WECS design options.
Reliability of the Q Force; a mobile instrument for measuring isometric quadriceps muscle strength.
Douma, K W; Regterschot, G R H; Krijnen, W P; Slager, G E C; van der Schans, C P; Zijlstra, W
2016-01-01
The ability to generate muscle strength is a pre-requisite for all human movement. Decreased quadriceps muscle strength is frequently observed in older adults and is associated with a decreased performance and activity limitations. To quantify the quadriceps muscle strength and to monitor changes over time, instruments and procedures with a sufficient reliability are needed. The Q Force is an innovative mobile muscle strength measurement instrument suitable to measure in various degrees of extension. Measurements between 110 and 130° extension present the highest values and the most significant increase after training. The objective of this study is to determine the test-retest reliability of muscle strength measurements by the Q Force in older adults in 110° extension. Forty-one healthy older adults, 13 males and 28 females were included in the study. Mean (SD) age was 81.9 (4.89) years. Isometric muscle strength of the Quadriceps muscle was assessed with the Q Force at 110° of knee extension. Participants were measured at two sessions with a three to eight day interval between sessions. To determine relative reliability, the intraclass correlation coefficient (ICC) was calculated. To determine absolute reliability, Bland and Altman Limits of Agreement (LOA) were calculated and t-tests were performed. Relative reliability of the Q Force is good to excellent as all ICC coefficients are higher than 0.75. Generally a large 95 % LOA, reflecting only moderate absolute reliability, is found as exemplified for the peak torque left leg of -18.6 N to 33.8 N and the right leg of -9.2 N to 26.4 N was between 15.7 and 23.6 Newton representing 25.2 % to 39.9 % of the size of the mean. Small systematic differences in mean were found between measurement session 1 and 2. The present study shows that the Q Force has excellent relative test-retest reliability, but limited absolute test-retest reliability. Since the Q Force is relatively cheap and mobile it is suitable for application in various clinical settings, however, its capability to detect changes in muscle force over time is limited but comparable to existing instruments.
Olsen, Cecilie Fromholt; Bergland, Astrid
2017-06-09
The purpose of the study was to establish the test-retest reliability of the Norwegian version of the Short Physical Performance Battery (SPPB). This was a cross- sectional reliability study. A convenience sample of 61 older adults with a mean age of 88.4(8.1) was tested by two different physiotherapists at two time points. The mean time interval between tests was 2.5 days. The Intraclass Correlation Coefficient model 3.1 (ICC, 3.1) with 95% confidence intervals as well as the weighted Kappa (K) were used as measures of relative reliability. The Standard Error of Measurement (SEM) and Minimal Detectable Change (MDC) were used to measure absolute reliability. The results were also analyzed for a subgroup of 24 older people with dementia. The ICC reflected high relative reliability for the SPPB summary score and the 4 m walk test (4mwt), both for the total sample (ICC = 0.92, and 0.91 respectively)) and for the subgroup with dementia (ICC = 0.84 and 0.90 respectively). Furthermore, weighted Ks for the SPPB subscales were 0.64 for the chair stand, 0.80 for gait and 0.52 for balance for the total sample and almost identical for the subgroup with dementia. MDC-values at the 95% confidence intervals (MDC95) were calculated at 0.8 for the total score of SPPB and 0.39 m/s for the 4mwt in the total sample. For the subgroup with dementia MDC95 was 1.88 for the total score of SPPB and 0.28 m/s for 4mwt. The SPPB total score and the timed walking test showed overall high relative and absolute reliability for the total sample indicating that the Norwegian version of the SPPB is reliable when used by trained physiotherapists with older people. The reliability of the Norwegian SPPB in older people with dementia seems high, but due to a small sample size this needs further investigation.
Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S
2011-10-01
To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches were useful to study such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Time domain, especially mean R-R interval (RRI), frequency domain and, among non-linear parameters - Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Overall, linear measures were the most sensitive and reliable indices to mental effort. In non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters was both reliable as well as sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
DiCesare, Christopher A; Bates, Nathaniel A; Barber Foss, Kim D; Thomas, Staci M; Wordeman, Samuel C; Sugimoto, Dai; Roewer, Benjamin D; Medina McKeon, Jennifer M; Di Stasi, Stephanie; Noehren, Brian W; Ford, Kevin R; Kiefer, Adam W; Hewett, Timothy E; Myer, Gregory D
2015-12-01
Anterior cruciate ligament (ACL) injuries are physically and financially devastating but affect a relatively small percentage of the population. Prospective identification of risk factors for ACL injury necessitates a large sample size; therefore, study of this injury would benefit from a multicenter approach. To determine the reliability of kinematic and kinetic measures of a single-leg cross drop task across 3 institutions. Controlled laboratory study. Twenty-five female high school volleyball players participated in this study. Three-dimensional motion data of each participant performing the single-leg cross drop were collected at 3 institutions over a period of 4 weeks. Coefficients of multiple correlation were calculated to assess the reliability of kinematic and kinetic measures during the landing phase of the movement. Between-centers reliability for kinematic waveforms in the frontal and sagittal planes was good, but moderate in the transverse plane. Between-centers reliability for kinetic waveforms was good in the sagittal, frontal, and transverse planes. Based on these findings, the single-leg cross drop task has moderate to good reliability of kinematic and kinetic measures across institutions after implementation of a standardized testing protocol. Multicenter collaborations can increase study numbers and generalize results, which is beneficial for studies of relatively rare phenomena, such as ACL injury. An important step is to determine the reliability of risk assessments across institutions before a multicenter collaboration can be initiated.
DiCesare, Christopher A.; Bates, Nathaniel A.; Barber Foss, Kim D.; Thomas, Staci M.; Wordeman, Samuel C.; Sugimoto, Dai; Roewer, Benjamin D.; Medina McKeon, Jennifer M.; Di Stasi, Stephanie; Noehren, Brian W.; Ford, Kevin R.; Kiefer, Adam W.; Hewett, Timothy E.; Myer, Gregory D.
2015-01-01
Background: Anterior cruciate ligament (ACL) injuries are physically and financially devastating but affect a relatively small percentage of the population. Prospective identification of risk factors for ACL injury necessitates a large sample size; therefore, study of this injury would benefit from a multicenter approach. Purpose: To determine the reliability of kinematic and kinetic measures of a single-leg cross drop task across 3 institutions. Study Design: Controlled laboratory study. Methods: Twenty-five female high school volleyball players participated in this study. Three-dimensional motion data of each participant performing the single-leg cross drop were collected at 3 institutions over a period of 4 weeks. Coefficients of multiple correlation were calculated to assess the reliability of kinematic and kinetic measures during the landing phase of the movement. Results: Between-centers reliability for kinematic waveforms in the frontal and sagittal planes was good, but moderate in the transverse plane. Between-centers reliability for kinetic waveforms was good in the sagittal, frontal, and transverse planes. Conclusion: Based on these findings, the single-leg cross drop task has moderate to good reliability of kinematic and kinetic measures across institutions after implementation of a standardized testing protocol. Clinical Relevance: Multicenter collaborations can increase study numbers and generalize results, which is beneficial for studies of relatively rare phenomena, such as ACL injury. An important step is to determine the reliability of risk assessments across institutions before a multicenter collaboration can be initiated. PMID:26779550
Network Reliability: The effect of local network structure on diffusive processes
Youssef, Mina; Khorramzadeh, Yasamin; Eubank, Stephen
2014-01-01
This paper re-introduces the network reliability polynomial – introduced by Moore and Shannon in 1956 – for studying the effect of network structure on the spread of diseases. We exhibit a representation of the polynomial that is well-suited for estimation by distributed simulation. We describe a collection of graphs derived from Erdős-Rényi and scale-free-like random graphs in which we have manipulated assortativity-by-degree and the number of triangles. We evaluate the network reliability for all these graphs under a reliability rule that is related to the expected size of a connected component. Through these extensive simulations, we show that for positively or neutrally assortative graphs, swapping edges to increase the number of triangles does not increase the network reliability. Also, positively assortative graphs are more reliable than neutral or disassortative graphs with the same number of edges. Moreover, we show the combined effect of both assortativity-by-degree and the presence of triangles on the critical point and the size of the smallest subgraph that is reliable. PMID:24329321
Medical image digital archive: a comparison of storage technologies
NASA Astrophysics Data System (ADS)
Chunn, Timothy; Hutchings, Matt
1998-07-01
A cost effective, high capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scaleability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. New technologies will be discussed, such as DVD and high performance tape. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high performance, RAID disk storage devices to high capacity, NearlineR storage devices will be introduced as a viable way to minimize overall storage costs for an archive.
Northeast Inspection Services, Inc. boresonic inspection system evaluation. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nottingham, L.D.; Sabourin, P.F.; Presson, J.H.
1993-04-01
Turbine rotor reliability and remaining life assessment are continuing concerns to electric utilities. Over the years, boresonic inspection and evaluation have served as primary components in rotor remaining life assessment. Beginning with an evaluation of TREES by EPRI in 1982, a series of reports that document the detection and sizing capabilities of several boresonic systems have been made available. These studies should provide utilities with a better understanding of system performance and lead to improved reliability when predicting rotor remaining life. In 1990, the procedures followed for evaluating rotor boresonic performance capabilities were changed to transfer a greater portion ofmore » the data analysis function to the participating vendor. This change from previous policy was instituted so that the evaluation results would better reflect the ``final answer`` that a vendor would provide in a real rotor inspection and also to reduce the cost of an evaluation. Among the first vendors to participate in the new performance demonstration was Northeast Inspection Services, Inc. (NISI). The tests reported herein were conducted by NISI personnel under the guidelines of the new plan. Details of the new evaluation plan are also presented. Rotor bore blocks containing surface-connected fatigue cracks, embedded glass beads, and embedded radial-axial oriented disks were used in the evaluation. Data were collected during twenty-five independent passes through the blocks. The evaluation consisted of statistical characterization of the detection capabilities, flaw sizing and location accuracy, and repeatability of the inspection system. The results of the evaluation are included in this report.« less
An online detection system for aggregate sizes and shapes based on digital image processing
NASA Astrophysics Data System (ADS)
Yang, Jianhong; Chen, Sijia
2017-02-01
Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape based on a digital camera with a charge-coupled device, and subsequent digital image processing, have been developed to overcome these problems. The system captures images of aggregates while falling and flat lying. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, having good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt%, and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates, and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Vega, F F; Cantu-Paz, E; Lopez, J I
The population size of genetic algorithms (GAs) affects the quality of the solutions and the time required to find them. While progress has been made in estimating the population sizes required to reach a desired solution quality for certain problems, in practice the sizing of populations is still usually performed by trial and error. These trials might lead to find a population that is large enough to reach a satisfactory solution, but there may still be opportunities to optimize the computational cost by reducing the size of the population. This paper presents a technique called plague that periodically removes amore » number of individuals from the population as the GA executes. Recently, the usefulness of the plague has been demonstrated for genetic programming. The objective of this paper is to extend the study of plagues to genetic algorithms. We experiment with deceptive trap functions, a tunable difficult problem for GAs, and the experiments show that plagues can save computational time while maintaining solution quality and reliability.« less
Consistency of peak and mean concentric and eccentric force using a novel squat testing device.
Stock, Matt S; Luera, Micheal J
2014-04-01
The ability to examine force curves from multiple-joint assessments combines many of the benefits of dynamic constant external resistance exercise and isokinetic dynamometry. The purpose of this investigation was to examine test-retest reliability statistics for peak and mean force using the Exerbotics eSQ during maximal concentric and eccentric squats. Seventeen resistance-trained men (mean±SD age=21±2 years) visited the laboratory on two occasions. For each trial, the subjects performed two maximal concentric and eccentric squats, and the muscle actions with the highest force values were analyzed. There were no mean differences between the trials (P>.05), and the effect sizes were <0.12. When the entire force curve was examined, the intraclass correlation coefficients (model 2,1) and standard errors of measurement, respectively, were concentric peak force=0.743 (8.8%); concentric mean force=0.804 (6.0%); eccentric peak force=0.696 (10.6%); eccentric mean force=0.736 (9.6%). These findings indicated moderate-to-high reliability for the peak and mean force values obtained from the Exerbotics eSQ during maximal squat testing. The analysis of force curves from multiple-joint testing provides researchers and practitioners with a reliable means of assessing performance, especially during concentric muscle actions.
Design and characterization of very high frequency pulse tube prototypes
NASA Astrophysics Data System (ADS)
Lopes, Diogo; Duval, Jean-Marc; Charles, Ivan; Butterworth, James; Trollier, Thierry; Tanchon, Julien; Ravex, Alain; Daniel, Christophe
2012-06-01
Weight and size are important features of a cryocooler when it comes to space applications. Given their reliability and low level of exported vibrations (due to the absence of moving cold parts), pulse tubes are good candidates for spatial purposes and their miniaturization has been the focus of many studies. We report on the design and performance of a small-scale very high frequency pulse tube prototype, modeled after two previous prototypes which were optimized with a numerical code.
NASA Astrophysics Data System (ADS)
Sembiring, P.; Sembiring, S.; Tarigan, G.; Sembiring, OD
2017-12-01
This study aims to determine the level of student satisfaction in the learning process at the University of Sumatra Utara, Indonesia. The sample size of the study consisted 1204 students. Students’ response measured through questionnaires an adapted on a 5-point likert scale and interviews directly to the respondent. SERVQUAL method used to measure the quality of service with five dimensions of service characteristics, namely, physical evidence, reliability, responsiveness, assurance and concern. The result of Importance Performance Analysis reveals that six services attributes must be corrected by policy maker of University Sumatera Utara. The quality of service is still considered low by students.
Review of magnetostrictive vibration energy harvesters
NASA Astrophysics Data System (ADS)
Deng, Zhangxian; Dapino, Marcelo J.
2017-10-01
The field of energy harvesting has grown concurrently with the rapid development of portable and wireless electronics in which reliable and long-lasting power sources are required. Electrochemical batteries have a limited lifespan and require periodic recharging. In contrast, vibration energy harvesters can supply uninterrupted power by scavenging useful electrical energy from ambient structural vibrations. This article reviews the current state of vibration energy harvesters based on magnetostrictive materials, especially Terfenol-D and Galfenol. Existing magnetostrictive harvester designs are compared in terms of various performance metrics. Advanced techniques that can reduce device size and improve performance are presented. Models for magnetostrictive devices are summarized to guide future harvester designs.
NASA Astrophysics Data System (ADS)
Bush, Craig R.
This dissertation presents a novel current source converter topology that is primarily intended for single-phase photovoltaic (PV) applications. In comparison with the existing PV inverter technology, the salient features of the proposed topology are: a) the low frequency (double of line frequency) ripple that is common to single-phase inverters is greatly reduced; b) the absence of low frequency ripple enables significantly reduced size pass components to achieve necessary DC-link stiffness and c) improved maximum power point tracking (MPPT) performance is readily achieved due to the tightened current ripple even with reduced-size passive components. The proposed topology does not utilize any electrolytic capacitors. Instead an inductor is used as the DC-link filter and reliable AC film capacitors are utilized for the filter and auxiliary capacitor. The proposed topology has a life expectancy on par with PV panels. The proposed modulation technique can be used for any current source inverter where an unbalanced three-phase operation is desires such as active filters and power controllers. The proposed topology is ready for the next phase of microgrid and power system controllers in that it accepts reactive power commands. This work presents the proposed topology and its working principle supported by with numerical verifications and hardware results. Conclusions and future work are also presented.
Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard
2017-01-01
The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
Test-retest reliability of effective connectivity in the face perception network.
Frässle, Stefan; Paulus, Frieder Michel; Krach, Sören; Jansen, Andreas
2016-02-01
Computational approaches have great potential for moving neuroscience toward mechanistic models of the functional integration among brain regions. Dynamic causal modeling (DCM) offers a promising framework for inferring the effective connectivity among brain regions and thus unraveling the neural mechanisms of both normal cognitive function and psychiatric disorders. While the benefit of such approaches depends heavily on their reliability, systematic analyses of the within-subject stability are rare. Here, we present a thorough investigation of the test-retest reliability of an fMRI paradigm for DCM analysis dedicated to unraveling intra- and interhemispheric integration among the core regions of the face perception network. First, we examined the reliability of face-specific BOLD activity in 25 healthy volunteers, who performed a face perception paradigm in two separate sessions. We found good to excellent reliability of BOLD activity within the DCM-relevant regions. Second, we assessed the stability of effective connectivity among these regions by analyzing the reliability of Bayesian model selection and model parameter estimation in DCM. Reliability was excellent for the negative free energy and good for model parameter estimation, when restricting the analysis to parameters with substantial effect sizes. Third, even when the experiment was shortened, reliability of BOLD activity and DCM results dropped only slightly as a function of the length of the experiment. This suggests that the face perception paradigm presented here provides reliable estimates for both conventional activation and effective connectivity measures. We conclude this paper with an outlook on potential clinical applications of the paradigm for studying psychiatric disorders. Hum Brain Mapp 37:730-744, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.
Reliability of Computerized Neurocognitive Tests for Concussion Assessment: A Meta-Analysis.
Farnsworth, James L; Dargo, Lucas; Ragan, Brian G; Kang, Minsoo
2017-09-01
Although widely used, computerized neurocognitive tests (CNTs) have been criticized because of low reliability and poor sensitivity. A systematic review was published summarizing the reliability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) scores; however, this was limited to a single CNT. Expansion of the previous review to include additional CNTs and a meta-analysis is needed. Therefore, our purpose was to analyze reliability data for CNTs using meta-analysis and examine moderating factors that may influence reliability. A systematic literature search (key terms: reliability, computerized neurocognitive test, concussion) of electronic databases (MEDLINE, PubMed, Google Scholar, and SPORTDiscus) was conducted to identify relevant studies. Studies were included if they met all of the following criteria: used a test-retest design, involved at least 1 CNT, provided sufficient statistical data to allow for effect-size calculation, and were published in English. Two independent reviewers investigated each article to assess inclusion criteria. Eighteen studies involving 2674 participants were retained. Intraclass correlation coefficients were extracted to calculate effect sizes and determine overall reliability. The Fisher Z transformation adjusted for sampling error associated with averaging correlations. Moderator analyses were conducted to evaluate the effects of the length of the test-retest interval, intraclass correlation coefficient model selection, participant demographics, and study design on reliability. Heterogeneity was evaluated using the Cochran Q statistic. The proportion of acceptable outcomes was greatest for the Axon Sports CogState Test (75%) and lowest for the ImPACT (25%). Moderator analyses indicated that the type of intraclass correlation coefficient model used significantly influenced effect-size estimates, accounting for 17% of the variation in reliability. The Axon Sports CogState Test, which has a higher proportion of acceptable outcomes and shorter test duration relative to other CNTs, may be a reliable option; however, future studies are needed to compare the diagnostic accuracy of these instruments.
Yoo, Joanne Y; Cai, Jenny; Chen, Antonia F; Austin, Matthew S; Sharkey, Peter F
2016-05-01
Some manufacturers have introduced polyethylene (PE) inserts in 1-mm increment thickness options to allow for finer adjustments in total knee arthroplasty kinematics. Two surgeons with extensive experience performed 88 total knee arthroplasties using implants with 1-mm PE inserts. After trial components were inserted and the optimal PE thickness was selected, the insert was removed and a trial insert size was randomly chosen from opaque envelopes (1-mm smaller, same size, and 1-mm larger). The knee was re-examined and the surgeon determined which size PE had been placed. Surgeons reliably determined insert thicknesses in 62.5% (55 of 88; P = .050) of trials. Surgeons were not able to accurately detect 1-mm incremental changes of trial PE implants on a consistent basis. The potential clinical usefulness of this concept should be further evaluated. Copyright © 2016 Elsevier Inc. All rights reserved.
Photovoltaic module bypass diode encapsulation
NASA Technical Reports Server (NTRS)
Shepard, N. J., Jr.
1983-01-01
The design and processing techniques necessary to incorporate bypass diodes within the module encapsulant are presented. The Semicon PN junction diode cells were selected. Diode junction to heat spreader thermal resistance measurements, performed on a variety of mounted diode chip types and sizes, have yielded values which are consistently below 1 deg C per watt, but show some instability when thermally cycled over the temperature range from -40 to 150 deg C. Three representative experimental modules, each incorporating integral bypass diode/heat spreader assemblies of various sizes, were designed. Thermal testing of these modules enabled the formulation of a recommended heat spreader plate sizing relationship. The production cost of three encapsulated bypass diode/heat spreader assemblies were compared with similarly rated externally mounted packaged diodes. It is concluded that, when proper designed and installed, these bypass diode devices will improve the overall reliability of a terrestrial array over a 20 year design lifetime.
Kumar, Vineet
2011-12-01
The grain size statistics, commonly derived from the grain map of a material sample, are important microstructure characteristics that greatly influence its properties. The grain map for nanomaterials is usually obtained manually by visual inspection of the transmission electron microscope (TEM) micrographs because automated methods do not perform satisfactorily. While the visual inspection method provides reliable results, it is a labor intensive process and is often prone to human errors. In this article, an automated grain mapping method is developed using TEM diffraction patterns. The presented method uses wide angle convergent beam diffraction in the TEM. The automated technique was applied on a platinum thin film sample to obtain the grain map and subsequently derive grain size statistics from it. The grain size statistics obtained with the automated method were found in good agreement with the visual inspection method.
Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations
NASA Astrophysics Data System (ADS)
Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.
2014-02-01
The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
Reliability and agreement in student ratings of the class environment.
Nelson, Peter M; Christ, Theodore J
2016-09-01
The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Scott, Jessica M.; Martin, David S.; Cunningham, David; Matz, Timothy; Caine, Timothy; Hackney, Kyle J.; Arzeno, Natalia; Ploutz-Snyder, Lori
2010-01-01
Limb muscle atrophy and the accompanying decline in function can adversely affect the performance of astronauts during mission-related activities and upon re-ambulation in a gravitational environment. Previous characterization of space flight-induced muscle atrophy has been performed using pre and post flight magnetic resonance imaging (MRI). In addition to being costly and time consuming, MRI is an impractical methodology for assessing in-flight changes in muscle size. Given the mobility of ultrasound (US) equipment, it may be more feasible to evaluate changes in muscle size using this technique. PURPOSE: To examine the reliability and validity of using a customized template to acquire panoramic ultrasound (US) images for determining quadriceps and gastrocnemius anatomical cross sectional area (CSA). METHODS: Vastus lateralis (VL), rectus femoris (RF), medial gastrocnemius (MG), and lateral gastrocnemius (LG) CSA were assessed in 10 healthy individuals (36+/-2 yrs) using US and MRI. Panoramic US images were acquired by 2 sonographers using a customized template placed on the thigh and calf and analyzed by the same 2 sonographers (CX50 Philips). MRI images of the leg were acquired while subjects were supine in a 1.5T scanner (Signa Horizon LX, General Electric) and were analyzed by 3 trained investigators. The average of the 2 US and 3 MRI values were used for validity analysis. RESULTS: High inter-experimenter reliability was found for both the US template and MRI analysis as coefficients of variation across muscles ranged from 2.4 to 4.1% and 2.8 to 3.8%, respectively. Significant correlations were found between US and MRI CSA measures (VL, r = 0.85; RF, r = 0.60; MG, r = 0.86; LG, r = 0.73; p < 0.05). Furthermore, the standard error of measurement between US and MRI ranged from 0.91 to 2.09 sq cm with high limits of agreement analyzed by Bland-Altman plots. However, there were significant differences between absolute values of MRI and US for all muscles. CONCLUSION: The present results indicate that utilizing a customized US template provides reliable measures of leg muscle CSA, and thus could be used to characterize changes in muscle CSA both in flight and on the ground.
Simulated space environmental effects on a polyetherimide and its carbon fiber-reinforced composites
NASA Technical Reports Server (NTRS)
Kern, Kristen T.; Stancil, Phillip C.; Harries, Wynford L.; Long, Edward R., Jr.; Thibeault, Sheila A.
1993-01-01
The selection of materials for spacecraft construction requires identification of candidate materials which can perform reliably in the space environment. Understanding the effects of the space environment on the materials is an important step in the selection of candidate materials. This work examines the effects of energetic electrons, thermal cycling, electron radiation in conjunction with thermal cycling, and atomic oxygen on a thermoplastic polyetherimide and its carbon-fiber-reinforced composites. Composite materials made with non-sized fibers as well as materials made with fibers sized with an epoxy were evaluated. The mechanical and thermomechanical properties of the materials were studied and spectroscopic techniques were used to investigate the mechanisms for the observed effects. Considerations for future material development are suggested.
Securing quality of camera-based biomedical optics
NASA Astrophysics Data System (ADS)
Guse, Frank; Kasper, Axel; Zinter, Bob
2009-02-01
As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee their products meet required performance criteria over long lifetimes and in very different environmental conditions. A consistent quality management marks critical components features derived from end-users requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.
NASA Astrophysics Data System (ADS)
Tsuzuku, Koichiro; Hagiwara, Tomoya; Takeoka, Shunsuke; Ikemoto, Yuka
2008-05-01
Vibration bands of dielectric ceramics appear at a mid-infrared (MIR) and those position and shape are changed owing to change environment of crystal lattice. Therefore, micro-focus MIR spectroscopy is a one of useful tool to evaluate very small size capacitor (e.g. smaller than 0.5 mm in chip size). Very small size multi-layer capacitor: MLCC are one of very important device to produce high quality electrical products such as cell phone, etc. Quality and reliability of MLCC are corresponding to not only average dielectric properties but also local fluctuation of them. Furthermore, local fluctuation of dielectric properties of MLCC could evaluate with MIR spectroscopy. It is possible to obtain a satisfied MIR spectrum from small size samples performed by a micro-focus spectrometer combined with synchrotron radiation as a high luminance light source at beam line BL43IR of SPring-8. From the above result, it is possible to evaluate the degree of homogeneity by comparing the shape change of Ti-O peak on IR spectra.
Citation-related reliability analysis for a pilot sample of underground coal mines.
Kinilakodi, Harisha; Grayson, R Larry
2011-05-01
The scrutiny of underground coal mine safety was heightened because of the disasters that occurred in 2006-2007, and more recently in 2010. In the aftermath of the 2006 incidents, the U.S. Congress passed the Mine Improvement and New Emergency Response Act of 2006 (MINER Act), which strengthened the existing regulations and mandated new laws to address various issues related to emergency preparedness and response, escape from an emergency situation, and protection of miners. The National Mining Association-sponsored Mine Safety Technology and Training Commission study highlighted the role of risk management in identifying and controlling major hazards, which are elements that could come together and cause a mine disaster. In 2007 MSHA revised its approach to the "Pattern of Violations" (POV) process in order to target unsafe mines and then force them to remediate conditions in their mines. The POV approach has certain limitations that make it difficult for it to be enforced. One very understandable way to focus on removing threats from major-hazard conditions is to use citation-related reliability analysis. The citation reliability approach, which focuses on the probability of not getting a citation on a given inspector day, is considered an analogue to the maintenance reliability approach, which many mine operators understand and use. In this study, the citation reliability approach was applied to a stratified random sample of 31 underground coal mines to examine its potential for broader application. The results clearly show the best-performing and worst-performing mines for compliance with mine safety standards, and they highlight differences among different mine sizes. Copyright © 2010 Elsevier Ltd. All rights reserved.
Park, Sun-June; Lee, Kyeong-Tae; Jeon, Byung-Joon; Woo, Kyong-Je
2018-04-01
Pedicled perforator flaps (PPFs) have been widely used to treat pressure sores in the gluteal region. Selection of a reliable perforator is crucial for successful surgical treatment of pressure sores using PPFs. In this study, we evaluate the role of magnetic resonance imaging (MRI) in planning PPF reconstruction of pressure sores in the gluteal region. A retrospective chart review was performed in patients who had undergone these PPF reconstructions and who had received preoperative MRI. Preoperatively, the extent of infection and necrotic tissue was evaluated using MRI, and a reliable perforator was identified, considering the perforator location in relation to the defect, perforator size, and perforator courses. Intraoperatively, the targeted perforator was marked on the skin at the locations measured on the MRI images, and the marked location was confirmed using intraoperative handheld Doppler. Superior gluteal artery, inferior gluteal artery, or parasacral perforators were used for the PPFs. Surgical outcomes were evaluated. A total of 12 PPFs were performed in 12 patients. Superior gluteal artery perforator flaps were performed in 7 patients, inferior gluteal artery perforator flaps were performed in 3 patients, and parasacral perforator flaps were performed in 2 patients. We could identify a reliable perforator on MRI, and it was found at the predicted locations in all cases. There was only one case of partial flap necrosis. There was no recurrence of the pressure sores during the mean follow-up period of 6.7 months (range = 3-15 months). In selected patients with gluteal pressure sores, MRI is a suitable means for not only providing information about disease extent and comorbidities but also for evaluating perforators for PPF reconstructions.
Structure reliability design and analysis of support ring for cylinder seal
NASA Astrophysics Data System (ADS)
Minmin, Zhao
2017-09-01
In this paper, the general reliability design process of the cross-sectional dimension of the support ring is introduced, which is used for the cylinder sealing. Then, taking a certain section shape support ring as an example, the every size parameters of section are determined from the view point of reliability design. Last, the static strength and reliability of the support ring are analyzed to verify the correctness of the reliability design result.
A Cost-effective and Reliable Method to Predict Mechanical Stress in Single-use and Standard Pumps
Dittler, Ina; Dornfeld, Wolfgang; Schöb, Reto; Cocke, Jared; Rojahn, Jürgen; Kraume, Matthias; Eibl, Dieter
2015-01-01
Pumps are mainly used when transferring sterile culture broths in biopharmaceutical and biotechnological production processes. However, during the pumping process shear forces occur which can lead to qualitative and/or quantitative product loss. To calculate the mechanical stress with limited experimental expense, an oil-water emulsion system was used, whose suitability was demonstrated for drop size detections in bioreactors1. As drop breakup of the oil-water emulsion system is a function of mechanical stress, drop sizes need to be counted over the experimental time of shear stress investigations. In previous studies, the inline endoscopy has been shown to be an accurate and reliable measurement technique for drop size detections in liquid/liquid dispersions. The aim of this protocol is to show the suitability of the inline endoscopy technique for drop size measurements in pumping processes. In order to express the drop size, the Sauter mean diameter d32 was used as the representative diameter of drops in the oil-water emulsion. The results showed low variation in the Sauter mean diameters, which were quantified by standard deviations of below 15%, indicating the reliability of the measurement technique. PMID:26274765
Effects of Ni particle morphology on cell performance of Na/NiCl2 battery
NASA Astrophysics Data System (ADS)
Kim, Mangi; Ahn, Cheol-Woo; Hahn, Byung-Dong; Jung, Keeyoung; Park, Yoon-Cheol; Cho, Nam-ung; Lee, Heesoo; Choi, Joon-Hwan
2017-11-01
Electrochemical reaction of Ni particle, one of active cathode materials in the Na/NiCl2 battery, occurs on the particle surface. The NiCl2 layer formed on the Ni particle surface during charging can disconnect the electron conduction path through Ni particles because the NiCl2 layer has very low conductivity. The morphology and size of Ni particles, therefore, need to be controlled to obtain high charge capacity and excellent cyclic retention. Effects of the Ni particle size on the cell performance were investigated using spherical Ni particles with diameters of 0.5 μm, 6 μm, and 50 μm. The charge capacities of the cells with spherical Ni particles increased when the Ni particle size becomes smaller because of their higher surface area but their charge capacities were significantly decreased with increasing cyclic tests owing to the disconnection of electron conduction path. The inferior cyclic retention of charge capacity was improved using reticular Ni particles which maintained the reliable connection for the electron conduction in the Na/NiCl2 battery. The charge capacity of the cell with the reticular Ni particles was higher than the cell with the small-sized spherical Ni particles approximately by 26% at 30th cycle.
Hacquebord, Jacques H; Hanel, Douglas P; Friedrich, Jeffrey B
2017-08-01
The pedicled latissimus flap has been shown to provide effective coverage of wounds around the elbow with an average size of 100 to 147 cm 2 but with complication rates of 20% to 57%. We believe the pedicled latissimus dorsi flap is an effective and safe technique that provides reliable and durable coverage of considerably larger soft tissue defects around the elbow and proximal forearm. A retrospective review was performed including all patients from Harborview Medical Center between 1998 and 2012 who underwent coverage with pedicled latissimus dorsi flap for defects around the elbow. Demographic information, injury mechanism, soft tissue defect size, complications (minor vs major), and time to surgery were collected. The size of the soft tissue defect, complications, and successful soft tissue coverage were the primary outcome measures. A total of 18 patients were identified with variable mechanisms of injury. Average defect size around the elbow was 422 cm 2 . Three patients had partial necrosis of the distal most aspect of the flap, which was treated conservatively. One patient required a secondary fasciocutaneous flap, and another required conversion to a free latissimus flap secondary to venous congestion. Two were lost to follow-up after discharge from the hospital. In all, 88% (14 of 16) of the patients had documented (>3-month follow-up) successful soft tissue coverage with single-stage pedicled latissimus dorsi flap. The pedicled latissimus dorsi flap is a reliable option for large and complex soft tissue injuries around the elbow significantly larger than previous reports. However, coverage of the proximal forearm remains challenging.
Integrating Reliability Analysis with a Performance Tool
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael
1995-01-01
A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.
Robust gene selection methods using weighting schemes for microarray data analysis.
Kang, Suyeon; Song, Jongwoo
2017-09-02
A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performances of many gene selection techniques are highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
Pojskic, Haris; Åslin, Erik; Krolo, Ante; Jukic, Ivan; Uljevic, Ognjen; Spasic, Miodrag; Sekulic, Damir
2018-01-01
Agility is a significant determinant of success in soccer; however, studies have rarely presented and evaluated soccer-specific tests of reactive agility (S_RAG) and non-reactive agility (change of direction speed – S_CODS) or their applicability in this sport. The aim of this study was to define the reliability and validity of newly developed tests of the S_RAG and S_CODS to discriminate between the performance levels of junior soccer players. The study consisted of 20 players who were involved at the highest national competitive rank (all males; age: 17.0 ± 0.9 years), divided into three playing positions (defenders, midfielders, and forwards) and two performance levels (U17 and U19). Variables included body mass (BM), body height, body fat percentage, 20-m sprint, squat jump, countermovement jump, reactive-strength-index, unilateral jump, 1RM-back-squat, S_CODS, and three protocols of S_RAG. The reliabilities of the S_RAG and S_CODS were appropriate to high (ICC: 0.70 to 0.92), with the strongest reliability evidenced for the S_CODS. The S_CODS and S_RAG shared 25–40% of the common variance. Playing positions significantly differed in BM (large effect-size differences [ES]; midfielders were lightest) and 1RM-back-squat (large ES; lowest results in midfielders). The performance levels significantly differed in age and experience in soccer; U19 achieved better results in the S_CODS (t-test: 3.61, p < 0.05, large ES) and two S_RAG protocols (t-test: 2.14 and 2.41, p < 0.05, moderate ES). Newly developed tests of soccer-specific agility are applicable to differentiate U17 and U19 players. Coaches who work with young soccer athletes should be informed that the development of soccer-specific CODS and RAG in this age is mostly dependent on training of the specific motor proficiency. PMID:29867552
Alafate, Aierken; Shinya, Takayoshi; Okumura, Yoshihiro; Sato, Shuhei; Hiraki, Takao; Ishii, Hiroaki; Gobara, Hideo; Kato, Katsuya; Fujiwara, Toshiyoshi; Miyoshi, Shinichiro; Kaji, Mitsumasa; Kanazawa, Susumu
2013-01-01
We retrospectively evaluated the accumulation of fluorodeoxy glucose (FDG) in pulmonary malignancies without local recurrence during 2-year follow-up on positron emission tomography (PET)/computed tomography (CT) after radiofrequency ablation (RFA). Thirty tumors in 25 patients were studied (10 non-small cell lung cancers;20 pulmonary metastatic tumors). PET/CT was performed before RFA, 3 months after RFA, and 6 months after RFA. We assessed the FDG accumulation with the maximum standardized uptake value (SUVmax) compared with the diameters of the lesions. The SUVmax had a decreasing tendency in the first 6 months and, at 6 months post-ablation, FDG accumulation was less affected by inflammatory changes than at 3 months post-RFA. The diameter of the ablated lesion exceeded that of the initial tumor at 3 months post-RFA and shrank to pre-ablation dimensions by 6 months post-RFA. SUVmax was more reliable than the size measurements by CT in the first 6 months after RFA, and PET/CT at 6 months post-RFA may be more appropriate for the assessment of FDG accumulation than that at 3 months post-RFA.
McKay, E
2000-01-01
An innovative research program was devised to investigate the interactive effect of instructional strategies enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style on the acquisition of programming concepts. The Cognitive Styles Analysis (CSA) program (Riding,1991) was used to establish the participants' cognitive style. The QUEST Interactive Test Analysis System (Adams and Khoo,1996) provided the cognitive performance measuring tool, which ensured an absence of error measurement in the programming knowledge testing instruments. Therefore, reliability of the instrumentation was assured through the calibration techniques utilized by the QUEST estimate; providing predictability of the research design. A means analysis of the QUEST data, using the Cohen (1977) approach to size effect and statistical power further quantified the significance of the findings. The experimental methodology adopted for this research links the disciplines of instructional science, cognitive psychology, and objective measurement to provide reliable mechanisms for beneficial use in the evaluation of cognitive performance by the education, training and development sectors. Furthermore, the research outcomes will be of interest to educators, cognitive psychologists, communications engineers, and computer scientists specializing in computer-human interactions.
Finite-Size Effects of Binary Mutual Diffusion Coefficients from Molecular Dynamics
2018-01-01
Molecular dynamics simulations were performed for the prediction of the finite-size effects of Maxwell-Stefan diffusion coefficients of molecular mixtures and a wide variety of binary Lennard–Jones systems. A strong dependency of computed diffusivities on the system size was observed. Computed diffusivities were found to increase with the number of molecules. We propose a correction for the extrapolation of Maxwell–Stefan diffusion coefficients to the thermodynamic limit, based on the study by Yeh and Hummer (J. Phys. Chem. B, 2004, 108, 15873−15879). The proposed correction is a function of the viscosity of the system, the size of the simulation box, and the thermodynamic factor, which is a measure for the nonideality of the mixture. Verification is carried out for more than 200 distinct binary Lennard–Jones systems, as well as 9 binary systems of methanol, water, ethanol, acetone, methylamine, and carbon tetrachloride. Significant deviations between finite-size Maxwell–Stefan diffusivities and the corresponding diffusivities at the thermodynamic limit were found for mixtures close to demixing. In these cases, the finite-size correction can be even larger than the simulated (finite-size) Maxwell–Stefan diffusivity. Our results show that considering these finite-size effects is crucial and that the suggested correction allows for reliable computations. PMID:29664633
A Note on Structural Equation Modeling Estimates of Reliability
ERIC Educational Resources Information Center
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
How to Measure the Onset of Babbling Reliably?
ERIC Educational Resources Information Center
Molemans, Inge; van den Berg, Renate; van Severen, Lieve; Gillis, Steven
2012-01-01
Various measures for identifying the onset of babbling have been proposed in the literature, but a formal definition of the exact procedure and a thorough validation of the sample size required for reliably establishing babbling onset is lacking. In this paper the reliability of five commonly used measures is assessed using a large longitudinal…
New perspective on single-radiator multiple-port antennas for adaptive beamforming applications.
Byun, Gangil; Choo, Hosung
2017-01-01
One of the most challenging problems in recent antenna engineering fields is to achieve highly reliable beamforming capabilities in an extremely restricted space of small handheld devices. In this paper, we introduce a new perspective on single-radiator multiple-port (SRMP) antenna to alter the traditional approach of multiple-antenna arrays for improving beamforming performances with reduced aperture sizes. The major contribution of this paper is to demonstrate the beamforming capability of the SRMP antenna for use as an extremely miniaturized front-end component in more sophisticated beamforming applications. To examine the beamforming capability, the radiation properties and the array factor of the SRMP antenna are theoretically formulated for electromagnetic characterization and are used as complex weights to form adaptive array patterns. Then, its fundamental performance limits are rigorously explored through enumerative studies by varying the dielectric constant of the substrate, and field tests are conducted using a beamforming hardware to confirm the feasibility. The results demonstrate that the new perspective of the SRMP antenna allows for improved beamforming performances with the ability of maintaining consistently smaller aperture sizes compared to the traditional multiple-antenna arrays.
Kato, Haruhisa; Nakamura, Ayako; Takahashi, Kayori; Kinugasa, Shinichi
2012-01-01
Accurate determination of the intensity-average diameter of polystyrene latex (PS-latex) by dynamic light scattering (DLS) was carried out through extrapolation of both the concentration of PS-latex and the observed scattering angle. Intensity-average diameter and size distribution were reliably determined by asymmetric flow field flow fractionation (AFFFF) using multi-angle light scattering (MALS) with consideration of band broadening in AFFFF separation. The intensity-average diameter determined by DLS and AFFFF-MALS agreed well within the estimated uncertainties, although the size distribution of PS-latex determined by DLS was less reliable in comparison with that determined by AFFFF-MALS. PMID:28348293
Besharati Tabrizi, Leila; Mahvash, Mehran
2015-07-01
An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.
Wright, Mark H.; Tung, Chih-Wei; Zhao, Keyan; Reynolds, Andy; McCouch, Susan R.; Bustamante, Carlos D.
2010-01-01
Motivation: The development of new high-throughput genotyping products requires a significant investment in testing and training samples to evaluate and optimize the product before it can be used reliably on new samples. One reason for this is current methods for automated calling of genotypes are based on clustering approaches which require a large number of samples to be analyzed simultaneously, or an extensive training dataset to seed clusters. In systems where inbred samples are of primary interest, current clustering approaches perform poorly due to the inability to clearly identify a heterozygote cluster. Results: As part of the development of two custom single nucleotide polymorphism genotyping products for Oryza sativa (domestic rice), we have developed a new genotype calling algorithm called ‘ALCHEMY’ based on statistical modeling of the raw intensity data rather than modelless clustering. A novel feature of the model is the ability to estimate and incorporate inbreeding information on a per sample basis allowing accurate genotyping of both inbred and heterozygous samples even when analyzed simultaneously. Since clustering is not used explicitly, ALCHEMY performs well on small sample sizes with accuracy exceeding 99% with as few as 18 samples. Availability: ALCHEMY is available for both commercial and academic use free of charge and distributed under the GNU General Public License at http://alchemy.sourceforge.net/ Contact: mhw6@cornell.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20926420
The (un)reliability of item-level semantic priming effects.
Heyman, Tom; Bruninx, Anke; Hutchison, Keith A; Storms, Gert
2018-04-05
Many researchers have tried to predict semantic priming effects using a myriad of variables (e.g., prime-target associative strength or co-occurrence frequency). The idea is that relatedness varies across prime-target pairs, which should be reflected in the size of the priming effect (e.g., cat should prime dog more than animal does). However, it is only insightful to predict item-level priming effects if they can be measured reliably. Thus, in the present study we examined the split-half and test-retest reliabilities of item-level priming effects under conditions that should discourage the use of strategies. The resulting priming effects proved extremely unreliable, and reanalyses of three published priming datasets revealed similar cases of low reliability. These results imply that previous attempts to predict semantic priming were unlikely to be successful. However, one study with an unusually large sample size yielded more favorable reliability estimates, suggesting that big data, in terms of items and participants, should be the future for semantic priming research.
Goniometric reliability in a clinical setting. Shoulder measurements.
Riddle, D L; Rothstein, J M; Lamb, R L
1987-05-01
The purpose of this study was to examine the intratester and intertester reliabilities for clinical goniometric measurements of shoulder passive range of motion (PROM) using two different sizes of universal goniometers. Patients were measured without controlling therapist goniometric placement technique or patient position during measurements. Repeated PROM measurements of shoulder flexion, extension, abduction, shoulder horizontal abduction, horizontal adduction, lateral (external) rotation, and medial (internal) rotation were taken of two groups of 50 subjects each. The intratester intraclass correlation coefficients (ICCs) for all motions ranged from .87 to .99. The ICCs for the intertester reliability of PROM measurements of horizontal abduction, horizontal adduction, extension, and medial rotation ranged from .26 to .55. The intertester ICCs for PROM measurements of flexion, abduction, and lateral rotation ranged from .84 to .90. Goniometric PROM measurements for the shoulder appear to be highly reliable when taken by the same physical therapist, regardless of the size of the goniometer used. The degree of intertester reliability for these measurements appears to be range-of-motion specific.
A taxonomy of accountable care organizations for policy and practice.
Shortell, Stephen M; Wu, Frances M; Lewis, Valerie A; Colla, Carrie H; Fisher, Elliott S
2014-12-01
To develop an exploratory taxonomy of Accountable Care Organizations (ACOs) to describe and understand early ACO development and to provide a basis for technical assistance and future evaluation of performance. Data from the National Survey of Accountable Care Organizations, fielded between October 2012 and May 2013, of 173 Medicare, Medicaid, and commercial payer ACOs. Drawing on resource dependence and institutional theory, we develop measures of eight attributes of ACOs such as size, scope of services offered, and the use of performance accountability mechanisms. Data are analyzed using a two-step cluster analysis approach that accounts for both continuous and categorical data. We identified a reliable and internally valid three-cluster solution: larger, integrated systems that offer a broad scope of services and frequently include one or more postacute facilities; smaller, physician-led practices, centered in primary care, and that possess a relatively high degree of physician performance management; and moderately sized, joint hospital-physician and coalition-led groups that offer a moderately broad scope of services with some involvement of postacute facilities. ACOs can be characterized into three distinct clusters. The taxonomy provides a framework for assessing performance, for targeting technical assistance, and for diagnosing potential antitrust violations. © Health Research and Educational Trust.
Body mass estimates of hominin fossils and the evolution of human body size.
Grabowski, Mark; Hatala, Kevin G; Jungers, William L; Richmond, Brian G
2015-08-01
Body size directly influences an animal's place in the natural world, including its energy requirements, home range size, relative brain size, locomotion, diet, life history, and behavior. Thus, an understanding of the biology of extinct organisms, including species in our own lineage, requires accurate estimates of body size. Since the last major review of hominin body size based on postcranial morphology over 20 years ago, new fossils have been discovered, species attributions have been clarified, and methods improved. Here, we present the most comprehensive and thoroughly vetted set of individual fossil hominin body mass predictions to date, and estimation equations based on a large (n = 220) sample of modern humans of known body masses. We also present species averages based exclusively on fossils with reliable taxonomic attributions, estimates of species averages by sex, and a metric for levels of sexual dimorphism. Finally, we identify individual traits that appear to be the most reliable for mass estimation for each fossil species, for use when only one measurement is available for a fossil. Our results show that many early hominins were generally smaller-bodied than previously thought, an outcome likely due to larger estimates in previous studies resulting from the use of large-bodied modern human reference samples. Current evidence indicates that modern human-like large size first appeared by at least 3-3.5 Ma in some Australopithecus afarensis individuals. Our results challenge an evolutionary model arguing that body size increased from Australopithecus to early Homo. Instead, we show that there is no reliable evidence that the body size of non-erectus early Homo differed from that of australopiths, and confirm that Homo erectus evolved larger average body size than earlier hominins. Copyright © 2015 Elsevier Ltd. All rights reserved.
New International Program to Asses the Reliability of Emerging Nondestructive Techniques (PARENT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokofiev, Iouri; Cumblidge, Stephen E.; Csontos, Aladar A.
2013-01-25
The Nuclear Regulatory Commission (NRC) established the Program to Assess the Reliability of Emerging Nondestructive Techniques (PARENT) to follow on from the successful Program for the Inspection of Nickel alloy Components (PINC). The goal of the PARENT is to conduct a confirmatory assessment of the reliability of nondestructive evaluation (NDE) techniques for detecting and sizing primary water stress corrosion cracks (PWSCC) and applying the lessons learned from PINC to a series of round-robin tests. These open and blind round-robin tests will comprise a new set of typical pressure boundary components including dissimilar metal welds (DMWs) and bottom-mounted instrumentation penetrations. Openmore » round-robin tests will engage research and industry teams worldwide to investigate and demonstrate the reliability of emerging NDE techniques to detect and size flaws with a wide range of lengths, depths, orientations, and locations. Blind round-robin tests will utilize various testing organizations, whose inspectors and procedures are certified by the standards for the nuclear industry in their respective countries, to investigate the ability of established NDE techniques to detect and size flaws whose characteristics range from relatively easy to very difficult for detection and sizing. Blind and open round-robin testing started in late 2011 and early 2012, respectively. This paper will present the work scope with reports on progress, NDE methods evaluated, and project timeline for PARENT.« less
Truby, Helen; Paxton, Susan J
2008-03-01
To test the reliability of the Children's Body Image Scale (CBIS) and assess its usefulness in the context of new body size charts for children. Participants were 281 primary schoolchildren with 50% being retested after 3 weeks. The CBIS figure scale was compared with a range of international body mass index (BMI) reference standards. Children had a high degree of body image dissatisfaction. The test-retest reliability of the CBIS was supported. The CBIS is a useful tool for assessing body image in children with sound scale properties. It can also be used to identify the body size of children, which lies outside the healthy weight range of BMI.
Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.
Rudolph, Barbara A; Shah, Gulzar H; Love, Denise
2006-01-01
This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.
Conceptual Launch Vehicle and Spacecraft Design for Risk Assessment
NASA Technical Reports Server (NTRS)
Motiwala, Samira A.; Mathias, Donovan L.; Mattenberger, Christopher J.
2014-01-01
One of the most challenging aspects of developing human space launch and exploration systems is minimizing and mitigating the many potential risk factors to ensure the safest possible design while also meeting the required cost, weight, and performance criteria. In order to accomplish this, effective risk analyses and trade studies are needed to identify key risk drivers, dependencies, and sensitivities as the design evolves. The Engineering Risk Assessment (ERA) team at NASA Ames Research Center (ARC) develops advanced risk analysis approaches, models, and tools to provide such meaningful risk and reliability data throughout vehicle development. The goal of the project presented in this memorandum is to design a generic launch 7 vehicle and spacecraft architecture that can be used to develop and demonstrate these new risk analysis techniques without relying on other proprietary or sensitive vehicle designs. To accomplish this, initial spacecraft and launch vehicle (LV) designs were established using historical sizing relationships for a mission delivering four crewmembers and equipment to the International Space Station (ISS). Mass-estimating relationships (MERs) were used to size the crew capsule and launch vehicle, and a combination of optimization techniques and iterative design processes were employed to determine a possible two-stage-to-orbit (TSTO) launch trajectory into a 350-kilometer orbit. Primary subsystems were also designed for the crewed capsule architecture, based on a 24-hour on-orbit mission with a 7-day contingency. Safety analysis was also performed to identify major risks to crew survivability and assess the system's overall reliability. These procedures and analyses validate that the architecture's basic design and performance are reasonable to be used for risk trade studies. While the vehicle designs presented are not intended to represent a viable architecture, they will provide a valuable initial platform for developing and demonstrating innovative risk assessment capabilities.
de los Santos, Gonzalo; Reyes, Pablo; del Castillo, Raúl; Fragola, Claudio; Royuela, Ana
2015-11-01
Our objective was to perform translation, cross-cultural adaptation and validation of the sino-nasal outcome test 22 (SNOT-22) to Spanish language. SNOT-22 was translated, back translated, and a pretest trial was performed. The study included 119 individuals divided into 60 cases, who met diagnostic criteria for chronic rhinosinusitis according to the European Position Paper on Rhinosinusitis 2012; and 59 controls, who reported no sino-nasal disease. Internal consistency was evaluated with Cronbach's alpha test, reproducibility with Kappa coefficient, reliability with intraclass correlation coefficient (ICC), validity with Mann-Whitney U test and responsiveness with Wilcoxon test. In cases, Cronbach's alpha was 0.91 both before and after treatment, as for controls, it was 0.90 at their first test assessment and 0.88 at 3 weeks. Kappa coefficient was calculated for each item, with an average score of 0.69. ICC was also performed for each item, with a score of 0.87 in the overall score and an average among all items of 0.71. Median score for cases was 47, and 2 for controls, finding the difference to be highly significant (Mann-Whitney U test, p < 0.001). Clinical changes were observed among treated patients, with a median score of 47 and 13.5 before and after treatment, respectively (Wilcoxon test, p < 0.001). The effect size resulted in 0.14 in treated patients whose status at 3 weeks was unvarying; 1.03 in those who were better and 1.89 for much better group. All controls were unvarying with an effect size of 0.05. The Spanish version of the SNOT-22 has the internal consistency, reliability, reproducibility, validity and responsiveness necessary to be a valid instrument to be used in clinical practice.
2015-01-01
Stereotype threat effects arise when an individual feels at risk of confirming a negative stereotype about their group and consequently underperforms on stereotype relevant tasks (Steele, 2010). Among older people, underperformance across cognitive and physical tasks is hypothesized to result from age-based stereotype threat (ABST) because of negative age-stereotypes regarding older adults’ competence. The present review and meta-analyses examine 22 published and 10 unpublished articles, including 82 effect sizes (N = 3882) investigating ABST on older people’s (Mage = 69.5) performance. The analysis revealed a significant small-to-medium effect of ABST (d = .28) and important moderators of the effect size. Specifically, older adults are more vulnerable to ABST when (a) stereotype-based rather than fact-based manipulations are used (d = .52); (b) when performance is tested using cognitive measures (d = .36); and (c) occurs reliably when the dependent variable is measured proximally to the manipulation. The review raises important theoretical and methodological issues, and areas for future research. PMID:25621742
Evolutionary pattern search algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolutionary strategie and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimentalmore » analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.« less
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 with each BioOptics, CSO, Konan, and Topcon corneal specular microscopes. All endothelial image data were analyzed by respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated to the standard deviation of total scores and sample size.
Accelerated testing of module-level power electronics for long-term reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flicker, Jack David; Tamizhmani, Govindasamy; Moorthy, Mathan Kumar
This work has applied a suite of long-term-reliability accelerated tests to a variety of module-level power electronics (MLPE) devices (such as microinverters and optimizers) from five different manufacturers. This dataset is one of the first (only the paper by Parker et al. entitled “Dominant factors affecting reliability of alternating current photovoltaic modules,” in Proc. 42nd IEEE Photovoltaic Spec. Conf., 2015, is reported for reliability testing in the literature), as well as the largest, experimental sets in public literature, both in the sample size (five manufacturers including both dc/dc and dc/ac units and 20 units for each test) and the numbermore » of experiments (six different experimental test conditions) for MLPE devices. The accelerated stress tests (thermal cycling test per IEC 61215 profile, damp heat test per IEC 61215 profile, and static temperature tests at 100 and 125 °C) were performed under powered and unpowered conditions. The first independent long-term experimental data regarding damp heat and grid transient testing, as well as the longest term (>9 month) testing of MLPE units reported in the literature for thermal cycling and high-temperature operating life, are included in these experiments. Additionally, this work is the first to show in situ power measurements, as well as periodic efficiency measurements over a series of experimental tests, demonstrating whether certain tests result in long-term degradation or immediate catastrophic failures. Lastly, the result of this testing highlights the performance of MLPE units under the application of several accelerated environmental stressors.« less
Accelerated testing of module-level power electronics for long-term reliability
Flicker, Jack David; Tamizhmani, Govindasamy; Moorthy, Mathan Kumar; ...
2016-11-10
This work has applied a suite of long-term-reliability accelerated tests to a variety of module-level power electronics (MLPE) devices (such as microinverters and optimizers) from five different manufacturers. This dataset is one of the first (only the paper by Parker et al. entitled “Dominant factors affecting reliability of alternating current photovoltaic modules,” in Proc. 42nd IEEE Photovoltaic Spec. Conf., 2015, is reported for reliability testing in the literature), as well as the largest, experimental sets in public literature, both in the sample size (five manufacturers including both dc/dc and dc/ac units and 20 units for each test) and the numbermore » of experiments (six different experimental test conditions) for MLPE devices. The accelerated stress tests (thermal cycling test per IEC 61215 profile, damp heat test per IEC 61215 profile, and static temperature tests at 100 and 125 °C) were performed under powered and unpowered conditions. The first independent long-term experimental data regarding damp heat and grid transient testing, as well as the longest term (>9 month) testing of MLPE units reported in the literature for thermal cycling and high-temperature operating life, are included in these experiments. Additionally, this work is the first to show in situ power measurements, as well as periodic efficiency measurements over a series of experimental tests, demonstrating whether certain tests result in long-term degradation or immediate catastrophic failures. Lastly, the result of this testing highlights the performance of MLPE units under the application of several accelerated environmental stressors.« less
NASA Technical Reports Server (NTRS)
Johnson, Sylvia M.
2011-01-01
Thermal protection materials and systems (TPS) are required to protect a vehicle returning from space or entering an atmosphere. The selection of the material depends on the heat flux, heat load, pressure, and shear and other mechanical loads imposed on the material, which are in turn determined by the vehicle configuration and size, location on the vehicle, speed, a trajectory, and the atmosphere. In all cases the goal is to use a material that is both reliable and efficient for the application. Reliable materials are well understood and have sufficient test data under the appropriate conditions to provide confidence in their performance. Efficiency relates to the behavior of a material under the specific conditions that it encounters TPS that performs very well at high heat fluxes may not be efficient at lower heat fluxes. Mass of the TPS is a critical element of efficiency. This talk will review the major classes of TPS, reusable or insulating materials and ablators. Ultra high temperature ceramics for sharp leading edges will also be reviewed. The talk will focus on the properties and behavior of these materials.
The NTID speech recognition test: NSRT(®).
Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A
2015-07-01
The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.
Enhancing the clinical utility of the burn specific health scale-brief: not just for major burns.
Finlay, V; Phillips, M; Wood, F; Hendrie, D; Allison, G T; Edgar, D
2014-03-01
Like many other Western burn services, the proportion of major to minor burns managed at Royal Perth Hospital (RPH) is in the order of 1:10. The Burn Specific Health Scale-Brief (BSHS-B) is an established measure of recovery after major burn, however its performance and validity in a population with a high volume of minor burns is uncertain. Utilizing the tool across burns of all sizes would be useful in service wide clinical practice. This study was designed to examine the reliability and validity of the BSHS-B across a sample of mostly minor burn patients. BSHS-B scores of patients, obtained between January 2006 and February 2013 and stored on a secure hospital database were collated and analyzed Cronbach's alpha, factor analysis, logistic regression and longitudinal regression were used to examine reliability and validity of the BSHS-B. Data from 927 burn patients (2031 surveys) with a mean % total burn surface area (TBSA) of 6.7 (SD 10.0) were available for analysis. The BSHS-B demonstrated excellent reliability with a Cronbach's alpha of 0.95. First and second order factor analyses reduced the 40 item scale to four domains: Work; Affect and Relations; Physical Function; Skin Involvement, as per the established construct. TBSA, length of stay and burn surgery all predicted burn specific health in the first three months of injury (p<0.001, p<0.001, p=0.03). BSHS-B whole scale and domain scores showed significant improvement over 24 months from burn (p<0.001). The results from this study show that the structure and performance of the BSHS-B in a burn population consisting of 90% minor burns is consistent with that demonstrated in major burns. The BSHS-B can be employed to track and predict recovery after burns of all sizes to assist the provision of targeted burn care. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-05-01
From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
Psychometric properties of the Social Phobia and Anxiety Inventory for Children in a Spanish sample.
Olivares, José; Sánchez-García, Raquel; López-Pina, José Antonio; Rosa-Alcázar, Ana Isabel
2010-11-01
The objectives of the present study were to adapt and analyze the factor structure, reliability, and validity of the Social Phobia and Anxiety Inventory for Children (SPAI-C; Beidel, Turner, & Morris, 1995) in a Spanish population. The SPAI-C was applied to a sample of 1588 children and adolescents with ages ranging from 10 to 17 years. The confirmatory factor analysis (CFA) showed a four-factor structure: Public performance, Assertiveness, Fear and avoidance/escape in social encounters, and Cognitive and psychophysiological interferences. Internal consistency was high (.90) and test-retest reliability was moderate (.56). Significant differences were found in the variables sex and age, although the effect size was small in both variables and their interaction. Overall, the increase of the age value was inversely proportional to that of social anxiety measured with the SPAI-C; in participants of the same age, values were higher for girls than for boys. Results suggest that the Social Phobia and Anxiety Inventory For Children is a valid and reliable instrument to assess social anxiety in Spanish children and adolescents.
Miniature Stirling cryocoolers at Thales Cryogenics: qualification results and integration solutions
NASA Astrophysics Data System (ADS)
Arts, R.; Martin, J.-Y.; Willems, D.; Seguineau, C.; de Jonge, G.; Van Acker, S.; Mullié, J.; Le Bordays, J.; Benschop, T.
2016-05-01
During the 2015 SPIE-DSS conference, Thales Cryogenics presented new miniature cryocoolers for high operating temperatures. In this paper, an update is given regarding the qualification programme performed on these new products. Integration aspects are discussed, including an in-depth examination of the influence of the dewar cold finger on sizing and performance of the cryocooler. The UP8197 will be placed in the reference frame of the Thales product range of high-reliability linear cryocoolers, while the rotary solution will be considered as the most compact solution in the Thales portfolio. Compatibility of the cryocoolers design with new and existing 1/4" dewar designs is examined, and potential future developments are presented.
NASA Technical Reports Server (NTRS)
Wilson, D. A.
1976-01-01
Specific requirements for a wash/rinse capability to support Spacelab biological experimentation and to identify various concepts for achieving this capability were determined. This included the examination of current state-of-the-art and emerging technology designs that would meet the wash/rinse requirements. Once several concepts were identified, including the disposable utensils, tools and gloves or other possible alternatives, a tradeoff analysis involving system cost, weight, volume utilization, functional performance, maintainability, reliability, power utilization, safety, complexity, etc., was performed so as to determine an optimum approach for achieving a wash/rinse capability to support future space flights. Missions of varying crew size and durations were considered.
100-lbf LO2/CH4 RCS Thruster Testing and Validation
NASA Technical Reports Server (NTRS)
Barnes, Frank; Cannella, Matthew; Gomez, Carlos; Hand, Jeffrey; Rosenberg, David
2009-01-01
100 pound thrust liquid Oxygen-Methane thruster sized for RCS (Reaction Control System) applications. Innovative Design Characteristics include: a) Simple compact design with minimal part count; b) Gaseous or Liquid propellant operation; c) Affordable and Reusable; d) Greater flexibility than existing systems; e) Part of NASA'S study of "Green Propellants." Hot-fire testing validated performance and functionality of thruster. Thruster's dependence on mixture ratio has been evaluated. Data has been used to calculate performance parameters such as thrust and Isp. Data has been compared with previous test results to verify reliability and repeatability. Thruster was found to have an Isp of 131 s and 82 lbf thrust at a mixture ratio of 1.62.
Solar array study for solar electric propulsion spacecraft for the Encke rendezvous mission
NASA Technical Reports Server (NTRS)
Sequeira, E. A.; Patterson, R. E.
1974-01-01
The work is described which was performed on the design, analysis and performance of a 20 kW rollup solar array capable of meeting the design requirements of a solar electric spacecraft for the 1980 Encke rendezvous mission. To meet the high power requirements of the proposed electric propulsion mission, solar arrays on the order of 186.6 sq m were defined. Because of the large weights involved with arrays of this size, consideration of array configurations is limited to lightweight, large area concepts with maximum power-to-weight ratios. Items covered include solar array requirements and constraints, array concept selection and rationale, structural and electrical design considerations, and reliability considerations.
Low-Dimensional Palladium Nanostructures for Fast and Reliable Hydrogen Gas Detection
Noh, Jin-Seo; Lee, Jun Min; Lee, Wooyoung
2011-01-01
Palladium (Pd) has received attention as an ideal hydrogen sensor material due to its properties such as high sensitivity and selectivity to hydrogen gas, fast response, and operability at room temperature. Interestingly, various Pd nanostructures that have been realized by recent developments in nanotechnologies are known to show better performance than bulk Pd. This review highlights the characteristic properties, issues, and their possible solutions of hydrogen sensors based on the low-dimensional Pd nanostructures with more emphasis on Pd thin films and Pd nanowires. The finite size effects, relative strengths and weaknesses of the respective Pd nanostructures are discussed in terms of performance, manufacturability, and practical applicability. PMID:22346605
The Use of Empirical Data Sources in HRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruce Hallbert; David Gertman; Julie Marble
This paper presents a review of available information related to human performance to support Human Reliability Analysis (HRA) performed for nuclear power plants (NPPs). A number of data sources are identified as potentially useful. These include NPP licensee event reports (LERs), augmented inspection team (AIT) reports, operator requalification data, results from the literature in experimental psychology, and the Aviation Safety Reporting System (ASRSs). The paper discusses how utilizing such information improves our capability to model and quantify human performance. In particular the paper discusses how information related to performance shaping factors (PSFs) can be extracted from empirical data to determinemore » their size effect, their relative effects, as well as their interactions. The paper concludes that appropriate use of existing sources can help addressing some of the important issues we are currently facing in HRA.« less
Mahowald, Kyle; Fedorenko, Evelina
2016-10-01
The majority of functional neuroimaging investigations aim to characterize an average human brain. However, another important goal of cognitive neuroscience is to understand the ways in which individuals differ from one another and the significance of these differences. This latter goal is given special weight by the recent reconceptualization of neurological disorders where sharp boundaries are no longer drawn either between health and neuropsychiatric and neurodevelopmental disorders, or among different disorders (e.g., Insel et al., 2010). Consequently, even the variability in the healthy population can inform our understanding of brain disorders. However, because the use of functional neural markers is still in its infancy, no consensus presently exists about which measures (e.g., effect size?, extent of activation?, degree of lateralization?) are the best ones to use. We here attempt to address this question with respect to one large-scale neural system: the set of brain regions in the frontal and temporal cortices that jointly support high-level linguistic processing (e.g., Binder et al., 1997; Fedorenko, Hsieh, Nieto-Castanon, Whitfield-Gabrieli, & Kanwisher, 2010). In particular, using data from 150 individuals all of whom had performed a language "localizer" task contrasting sentences and nonword sequences (Fedorenko et al., 2010), we: a) characterize the distributions of the values for four key neural measures of language activity (region effect sizes, region volumes, lateralization based on effect sizes, and lateralization based on volumes); b) test the reliability of these measures in a subset of 32 individuals who were scanned across two sessions; c) evaluate the relationship among the different regions of the language system; and d) evaluate the relationship among the different neural measures. Based on our results, we provide some recommendations for future studies of brain-behavior and brain-genes relationships. Although some of our conclusions are specific to the language system, others (e.g., the fact that effect-size-based measures tend to be more reliable than volume-based measures) are likely to generalize to the rest of the brain. Copyright © 2016 Elsevier Inc. All rights reserved.
de Fiebre, Nancyellen C; Sumien, Nathalie; Forster, Michael J; de Fiebre, Christopher M
2006-09-01
Two tests often used in aging research, the elevated path test and the Morris water maze test, were examined for their application to the study of brain aging in a large sample of C57BL/6JNia mice. Specifically, these studies assessed: (1) sensitivity to age and the degree of interrelatedness among different behavioral measures derived from these tests, (2) the effect of age on variation in the measurements, and (3) the reliability of individual differences in performance on the tests. Both tests detected age-related deficits in group performance that occurred independently of each other. However, analysis of data obtained on the Morris water maze test revealed three relatively independent components of cognitive performance. Performance in initial acquisition of spatial learning in the Morris maze was not highly correlated with performance during reversal learning (when mice were required to learn a new spatial location), whereas performance in both of those phases was independent of spatial performance assessed during a single probe trial administered at the end of acquisition training. Moreover, impaired performance during initial acquisition could be detected at an earlier age than impairments in reversal learning. There were modest but significant age-related increases in the variance of both elevated path test scores and in several measures of learning in the Morris maze test. Analysis of test scores of mice across repeated testing sessions confirmed reliability of the measurements obtained for cognitive and psychomotor function. Power calculations confirmed that there are sufficiently large age-related differences in elevated path test performance, relative to within age variability, to render this test useful for studies into the ability of an intervention to prevent or reverse age-related deficits in psychomotor performance. Power calculations indicated a need for larger sample sizes for detection of intervention effects on cognitive components of the Morris water maze test, at least when implemented at the ages tested in this study. Variability among old mice in both tests, including each of the various independent measures in the Morris maze, may be useful for elucidating the biological bases of different aspects of dysfunctional brain aging.
[Breast-reduction surgery--a long-term survey of indications and outcomes].
Kneser, U; Jaeger, K; Bach, A D; Polykandriotis, E; Ohnolz, J; Kopp, J; Horch, R E
2004-10-14
Between 1986 and 2003, breast-reduction surgery was performed in a total of 814 women. The indication was established on the basis of physical complaints, chronic back pain, stiff neck or recurrent intertrigo in the foldbeneath the breasts. A proportion of the patients were interviewed postoperatively using a questionnaire, to determine the impact of the operation on their quality of life. 91% of those surveyed reported a postoperative improvement in the perception of their own body, and 80% were satisfied with the reduced size of their breasts. In conclusion, in the hands of an experienced breast surgeon, breast-reduction surgery for the proper indication results in a reliable and safe diminishment in breast size and tightening of slack tissue, leading to a significant enhancement in the patient's quality of life.
Estimating cirrus cloud properties from MIPAS data
NASA Astrophysics Data System (ADS)
Mendrok, J.; Schreier, F.; Höpfner, M.
2007-04-01
High resolution mid-infrared limb emission spectra observed by the spaceborne Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) showing evidence of cloud interference are analyzed. Using the new line-by-line multiple scattering [Approximate] Spherical Atmospheric Radiative Transfer code (SARTre), a sensitivity study with respect to cirrus cloud parameters, e.g., optical thickness and particle size distribution, is performed. Cirrus properties are estimated by fitting spectra in three distinct microwindows between 8 and 12 μm. For a cirrus with extremely low ice water path (IWP = 0.1 g/m2) and small effective particle size (D e = 10 μm) simulated spectra are in close agreement with observations in broadband signal and fine structures. We show that a multi-microwindow technique enhances reliability of MIPAS cirrus retrievals compared to single microwindow methods.
An advanced actuator for high-performance slewing
NASA Technical Reports Server (NTRS)
Downer, James; Eisenhaure, David; Hockney, Richard
1988-01-01
A conceptual design for an advanced momentum exchange actuator for application to spacecraft slewing is described. The particular concept is a magnetically-suspended, magnetically gimballed Control Moment Gyro (CMG). A scissored pair of these devices is sized to provide the torque and angular momentum capacity required to reorient a large spacecraft through large angle maneuvers. The concept described utilizes a composite material rotor to achieve the high momentum and energy densities to minimize system mass, an advanced superconducting magnetic suspension system to minimize system weight and power consumption. The magnetic suspension system is also capable of allowing for large angle gimballing of the rotor, thus eliminating the mass and reliability penalties attendant to conventional gimbals. Descriptions of the various subelement designs are included along with the necessary system sizing formulation and material.
ABLE project: Development of an advanced lead-acid storage system for autonomous PV installations
NASA Astrophysics Data System (ADS)
Lemaire-Potteau, Elisabeth; Vallvé, Xavier; Pavlov, Detchko; Papazov, G.; Borg, Nico Van der; Sarrau, Jean-François
In the advanced battery for low-cost renewable energy (ABLE) project, the partners have developed an advanced storage system for small and medium-size PV systems. It is composed of an innovative valve-regulated lead-acid (VRLA) battery, optimised for reliability and manufacturing cost, and an integrated regulator, for optimal battery management and anti-fraudulent use. The ABLE battery performances are comparable to flooded tubular batteries, which are the reference in medium-size PV systems. The ABLE regulator has several innovative features regarding energy management and modular series/parallel association. The storage system has been validated by indoor, outdoor and field tests, and it is expected that this concept could be a major improvement for large-scale implementation of PV within the framework of national rural electrification schemes.
Karami, Manoochehr; Khazaei, Salman; Poorolajal, Jalal; Soltanian, Alireza; Sajadipoor, Mansour
2017-08-01
There is no reliable estimate of the size of female sex workers (FSWs). This study aimed to estimate the size of FSWs in south of Tehran, Iran in 2016 using direct capture-recapture method. In the capture phase, the hangouts of FSWs were mapped as their meeting places. FSWs who agreed to participate in the study tagged with a T-shirt. The recapture phase was implemented at the same places tagging FSWs with a blue bracelet. The total estimated size of FSWs was 690 (95% CI 633, 747). About 89.43% of FSWs experienced sexual intercourse prior to age 20. The prevalence of human immunodeficiency virus infection among FSWs was 4.60%. The estimated population size of FSWs was much more than our expectation. This issue must be the focus of special attention for planning prevention strategies. However, alternative estimates require to estimating the number FSWs, reliably.
First trimester size charts of embryonic brain structures.
Gijtenbeek, M; Bogers, H; Groenenberg, I A L; Exalto, N; Willemsen, S P; Steegers, E A P; Eilers, P H C; Steegers-Theunissen, R P M
2014-02-01
Can reliable size charts of human embryonic brain structures be created from three-dimensional ultrasound (3D-US) visualizations? Reliable size charts of human embryonic brain structures can be created from high-quality images. Previous studies on the visualization of both the cavities and the walls of the brain compartments were performed using 2D-US, 3D-US or invasive intrauterine sonography. However, the walls of the diencephalon, mesencephalon and telencephalon have not been measured non-invasively before. Last-decade improvements in transvaginal ultrasound techniques allow a better visualization and offer the tools to measure these human embryonic brain structures with precision. This study is embedded in a prospective periconceptional cohort study. A total of 141 pregnancies were included before the sixth week of gestation and were monitored until delivery to assess complications and adverse outcomes. For the analysis of embryonic growth, 596 3D-US scans encompassing the entire embryo were obtained from 106 singleton non-malformed live birth pregnancies between 7(+0) and 12(+6) weeks' gestational age (GA). Using 4D View (3D software) the measured embryonic brain structures comprised thickness of the diencephalon, mesencephalon and telencephalon, and the total diameter of the diencephalon and mesencephalon. Of 596 3D scans, 161 (27%) high-quality scans of 79 pregnancies were eligible for analysis. The reliability of all embryonic brain structure measurements, based on the intra-class correlation coefficients (ICCs) (all above 0.98), was excellent. Bland-Altman plots showed moderate agreement for measurements of the telencephalon, but for all other measurements the agreement was good. Size charts were constructed according to crown-rump length (CRL). The percentage of high-quality scans suitable for analysis of these brain structures was low (27%). The size charts of human embryonic brain structures can be used to study normal and abnormal development of brain development in future. Also, the effects of periconceptional maternal exposures, such as folic acid supplement use and smoking, on human embryonic brain development can be a topic of future research. This study was supported by the Department of Obstetrics and Gynaecology of the Erasmus University Medical Center. M.G. was supported by an additional grant from the Sophia Foundation for Medical Research (SSWO grant number 644). No competing interests are declared.
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method, and exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of PEM fuel cell can be predicted with good quality.
Design and Scheduling of Microgrids using Benders Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Ayyanar, Raja
2016-11-21
The distribution feeder laterals in a distribution feeder with relatively high PV generation as compared to the load can be operated as microgrids to achieve reliability, power quality and economic benefits. However, renewable resources are intermittent and stochastic in nature. A novel approach for sizing and scheduling an energy storage system and microturbine for reliable operation of microgrids is proposed. The size and schedule of an energy storage system and microturbine are determined using Benders' decomposition, considering PV generation as a stochastic resource.
Feasibility and fidelity of practising surgical fixation on a virtual ulna bone
LeBlanc, Justin; Hutchison, Carol; Hu, Yaoping; Donnon, Tyrone
2013-01-01
Background Surgical simulators provide a safe environment to learn and practise psychomotor skills. A goal for these simulators is to achieve high levels of fidelity. The purpose of this study was to develop a reliable surgical simulator fidelity questionnaire and to assess whether a newly developed virtual haptic simulator for fixation of an ulna has comparable levels of fidelity as Sawbones. Methods Simulator fidelity questionnaires were developed. We performed a stratified randomized study with surgical trainees. They performed fixation of the ulna using a virtual simulator and Sawbones. They completed the fidelity questionnaires after each procedure. Results Twenty-two trainees participated in the study. The reliability of the fidelity questionnaire for each separate domain (environment, equipment, psychological) was Cronbach α greater than 0.70, except for virtual environment. The Sawbones had significantly higher levels of fidelity than the virtual simulator (p < 0.001) with a large effect size difference (Cohen d < 1.3). Conclusion The newly developed fidelity questionnaire is a reliable tool that can potentially be used to determine the fidelity of other surgical simulators. Increasing the fidelity of this virtual simulator is required before its use as a training tool for surgical fixation. The virtual simulator brings with it the added benefits of repeated, independent safe use with immediate, objective feedback and the potential to alter the complexity of the skill. PMID:23883510
Reliable evaluation of the quantal determinants of synaptic efficacy using Bayesian analysis
Beato, M.
2013-01-01
Communication between neurones in the central nervous system depends on synaptic transmission. The efficacy of synapses is determined by pre- and postsynaptic factors that can be characterized using quantal parameters such as the probability of neurotransmitter release, number of release sites, and quantal size. Existing methods of estimating the quantal parameters based on multiple probability fluctuation analysis (MPFA) are limited by their requirement for long recordings to acquire substantial data sets. We therefore devised an algorithm, termed Bayesian Quantal Analysis (BQA), that can yield accurate estimates of the quantal parameters from data sets of as small a size as 60 observations for each of only 2 conditions of release probability. Computer simulations are used to compare its performance in accuracy with that of MPFA, while varying the number of observations and the simulated range in release probability. We challenge BQA with realistic complexities characteristic of complex synapses, such as increases in the intra- or intersite variances, and heterogeneity in release probabilities. Finally, we validate the method using experimental data obtained from electrophysiological recordings to show that the effect of an antagonist on postsynaptic receptors is correctly characterized by BQA by a specific reduction in the estimates of quantal size. Since BQA routinely yields reliable estimates of the quantal parameters from small data sets, it is ideally suited to identify the locus of synaptic plasticity for experiments in which repeated manipulations of the recording environment are unfeasible. PMID:23076101
Rapid and reliable healing of critical size bone defects with genetically modified sheep muscle.
Liu, F; Ferreira, E; Porter, R M; Glatt, V; Schinhan, M; Shen, Z; Randolph, M A; Kirker-Head, C A; Wehling, C; Vrahas, M S; Evans, C H; Wells, J W
2015-09-21
Large segmental defects in bone fail to heal and remain a clinical problem. Muscle is highly osteogenic, and preliminary data suggest that autologous muscle tissue expressing bone morphogenetic protein-2 (BMP-2) efficiently heals critical size defects in rats. Translation into possible human clinical trials requires, inter alia, demonstration of efficacy in a large animal, such as the sheep. Scale-up is fraught with numerous biological, anatomical, mechanical and structural variables, which cannot be addressed systematically because of cost and other practical issues. For this reason, we developed a translational model enabling us to isolate the biological question of whether sheep muscle, transduced with adenovirus expressing BMP-2, could heal critical size defects in vivo. Initial experiments in athymic rats noted strong healing in only about one-third of animals because of unexpected immune responses to sheep antigens. For this reason, subsequent experiments were performed with Fischer rats under transient immunosuppression. Such experiments confirmed remarkably rapid and reliable healing of the defects in all rats, with bridging by 2 weeks and remodelling as early as 3-4 weeks, despite BMP-2 production only in nanogram quantities and persisting for only 1-3 weeks. By 8 weeks the healed defects contained well-organised new bone with advanced neo-cortication and abundant marrow. Bone mineral content and mechanical strength were close to normal values. These data demonstrate the utility of this model when adapting this technology for bone healing in sheep, as a prelude to human clinical trials.
Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju
2018-03-01
This study aimed to evaluate the components of test-retest reliability including time interval, sample size, and statistical methods used in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic review published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical methods for continuous scores was intraclass correlation coefficients (ICCs). Among the 63 studies that used ICCs, 21 studies presented models for ICC calculations and 30 studies reported 95% confidence intervals of the ICCs. Additional analyses using 17 studies that reported a strong ICC (>0.09) showed that the mean time interval was 12.88days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13days and the sample size of about 5 times the number of items. Particularly, statistical methods should not only be selected based on the types of scores of the patient-reported outcome measures, but should also be described clearly in the studies that report the results of test-retest reliability. Copyright © 2017 Elsevier Ltd. All rights reserved.
Prototype microprocessor controller. [for STDN antennas
NASA Technical Reports Server (NTRS)
Zarur, J.; Kraeuter, R.
1980-01-01
A microcomputer controller for STDN antennas was developed. The microcomputer technology reduces the system's physical size by the implementation in firmware of functions. The reduction in the number of components increases system reliability and similar benefit is derived when a graphic video display is substituted for several control and indicator panels. A substantial reduction in the number of cables, connectors, and mechanical switches is achieved. The microcomputer based system is programmed to perform calibration and diagnostics, to update the satellite orbital vector, and to communicate with other network systems. The design is applicable to antennas and lasers.
Monolithic Microwave Integrated Circuits Based on GaAs Mesfet Technology
NASA Astrophysics Data System (ADS)
Bahl, Inder J.
Advanced military microwave systems are demanding increased integration, reliability, radiation hardness, compact size and lower cost when produced in large volume, whereas the microwave commercial market, including wireless communications, mandates low cost circuits. Monolithic Microwave Integrated Circuit (MMIC) technology provides an economically viable approach to meeting these needs. In this paper the design considerations for several types of MMICs and their performance status are presented. Multifunction integrated circuits that advance the MMIC technology are described, including integrated microwave/digital functions and a highly integrated transceiver at C-band.
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to described an image compression transform coder based on artificial neural networks techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.
3D MEMS in Standard Processes: Fabrication, Quality Assurance, and Novel Measurement Microstructures
NASA Technical Reports Server (NTRS)
Lin, Gisela; Lawton, Russell A.
2000-01-01
Three-dimensional MEMS microsystems that are commercially fabricated require minimal post-processing and are easily integrated with CMOS signal processing electronics. Measurements to evaluate the fabrication process (such as cross-sectional imaging and device performance characterization) provide much needed feedback in terms of reliability and quality assurance. MEMS technology is bringing a new class of microscale measurements to fruition. The relatively small size of MEMS microsystems offers the potential for higher fidelity recordings compared to macrosize counterparts, as illustrated in the measurement of muscle cell forces.
Vision technology/algorithms for space robotics applications
NASA Technical Reports Server (NTRS)
Krishen, Kumar; Defigueiredo, Rui J. P.
1987-01-01
The thrust of automation and robotics for space applications has been proposed for increased productivity, improved reliability, increased flexibility, higher safety, and for the performance of automating time-consuming tasks, increasing productivity/performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical with multimode capability to include position, attitude, recognition, and motion parameters. The key feature of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.
Komeiji, Y; Yokoyama, H; Uebayasi, M; Taiji, M; Fukushige, T; Sugimoto, D; Takata, R; Shimizu, A; Itsukashi, K
1996-01-01
GRAPE (GRavity PipE) processors are special purpose computers for simulation of classical particles. The performance of MD-GRAPE, one of the GRAPEs developed for molecular dynamics, was investigated. The effective speed of MD-GRAPE was equivalent to approximately 6 Gflops. The precision of MD-GRAPE was good judging from the acceptable fluctuation of the total energy. Then a software named PEACH (Program for Energetic Analysis of bioCHemical molecules) was developed for molecular dynamics of biomolecules in combination with MD-GRAPE. Molecular dynamics simulation was performed for several protein-solvent systems with different sizes. Simulation of the largest system investigated (27,000 atoms) took only 5 sec/step. Thus, the PEACH-GRAPE system is expected to be useful in accurate and reliable simulation of large biomolecules.
Brayton heat exchanger unit development program (alternate design)
NASA Technical Reports Server (NTRS)
Duncan, J. D.; Gibson, J. C.; Graves, R. F.; Morse, C. J.; Richard, C. E.
1973-01-01
A Brayton Heat Exchanger Unit Alternate Design (BHXU-Alternate) consisting of a recuperator, a heat sink heat exchanger, and a gas ducting system, was designed and fabricated. The design was formulated to provide a high performance unit suitable for use in a long-life Brayton-cycle powerplant. Emphasis was on double containment against external leakage and leakage of the organic coolant into the gas stream. A parametric analysis and design study was performed to establish the optimum component configurations to achieve low weight and size and high reliability, while meeting the requirements of high effectiveness and low pressure drop. Layout studies and detailed mechanical and structural design were performed to obtain a flight-type packaging arrangement, including the close-coupled integration of the BHXU-Alternate with the Brayton Rotating Unit (BRU).
ERIC Educational Resources Information Center
Keller, Lisa A.; Clauser, Brian E.; Swanson, David B.
2010-01-01
In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates…
Particle size distribution of the stratospheric aerosol from SCIAMACHY limb measurements
NASA Astrophysics Data System (ADS)
Rozanov, Alexei; Malinina, Elizaveta; Bovensmann, Heinrich; Burrows, John
2017-04-01
A crucial role of the stratospheric aerosols for the radiative budget of the Earth's atmosphere and the consequences for the climate change are widely recognized. A reliable knowledge on physical and optical properties of the stratospheric aerosols as well as on their vertical and spatial distributing is a key issue to assure a proper initialization and running conditions for climate models. On a global scale this information can only be gained from space borne measurements. While a series of past, present and future instruments provide extensive date sets of such aerosol characteristics as extinction coefficient or backscattering ratio, information on a size distribution of the stratospheric aerosols is sparse. One of the important sources on vertically and spatially resolved information on the particle size distribution of stratospheric aerosols is provided by space borne measurements of the scattered solar light in limb viewing geometry performed in visible, near-infrared and short-wave infrared spectral ranges. SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) instrument operated on the European satellite Envisat from 2002 to 2102 was capable of providing spectral information needed to retrieve parameters of aerosol particle size distributions. In this presentation we discuss the retrieval method, present first validation results with SAGE II data and analyze first data sets of stratospheric aerosol particle size distribution parameters obtained from SCIAMACHY limb measurements. The research work was performed in the framework of ROMIC (Role of the middle atmosphere in climate) project.
NASA Astrophysics Data System (ADS)
Larkin, K.; Ghommem, M.; Abdelkefi, A.
2018-05-01
Capacitive-based sensing microelectromechanical (MEMS) and nanoelectromechanical (NEMS) gyroscopes have significant advantages over conventional gyroscopes, such as low power consumption, batch fabrication, and possible integration with electronic circuits. However, inadequacies in the modeling of these inertial sensors have presented issues of reliability and functionality of micro-/nano-scale gyroscopes. In this work, a micromechanical model is developed to represent the unique microstructure of nanocrystalline materials and simulate the response of micro-/nano-gyroscope comprising an electrostatically-actuated cantilever beam with a tip mass at the free end. Couple stress and surface elasticity theories are integrated into the classical Euler-Bernoulli beam model in order to derive a size-dependent model. This model is then used to investigate the influence of size-dependent effects on the static pull-in instability, the natural frequencies and the performance output of gyroscopes as the scale decreases from micro-to nano-scale. The simulation results show significant changes in the static pull-in voltage and the natural frequency as the scale of the system is decreased. However, the differential frequency between the two vibration modes of the gyroscope is observed to drastically decrease as the size of the gyroscope is reduced. As such, the frequency-based operation mode may not be an efficient strategy for nano-gyroscopes. The results show that a strong coupling between the surface elasticity and material structure takes place when smaller grain sizes and higher void percentages are considered.
Reliability Estimation When a Test Is Split into Two Parts of Unknown Effective Length.
ERIC Educational Resources Information Center
Feldt, Leonard S.
2002-01-01
Considers the situation in which content or administrative considerations limit the way in which a test can be partitioned to estimate the internal consistency reliability of the total test score. Demonstrates that a single-valued estimate of the total score reliability is possible only if an assumption is made about the comparative size of the…
Levecke, Bruno; Kaplan, Ray M; Thamsborg, Stig M; Torgerson, Paul R; Vercruysse, Jozef; Dobson, Robert J
2018-04-15
Although various studies have provided novel insights into how to best design, analyze and interpret a fecal egg count reduction test (FECRT), it is still not straightforward to provide guidance that allows improving both the standardization and the analytical performance of the FECRT across a variety of both animal and nematode species. For example, it has been suggested to recommend a minimum number of eggs to be counted under the microscope (not eggs per gram of feces), but we lack the evidence to recommend any number of eggs that would allow a reliable assessment of drug efficacy. Other aspects that need further research are the methodology of calculating uncertainty intervals (UIs; confidence intervals in case of frequentist methods and credible intervals in case of Bayesian methods) and the criteria of classifying drug efficacy into 'normal', 'suspected' and 'reduced'. The aim of this study is to provide complementary insights into the current knowledge, and to ultimately provide guidance in the development of new standardized guidelines for the FECRT. First, data were generated using a simulation in which the 'true' drug efficacy (TDE) was evaluated by the FECRT under varying scenarios of sample size, analytic sensitivity of the diagnostic technique, and level of both intensity and aggregation of egg excretion. Second, the obtained data were analyzed with the aim (i) to verify which classification criteria allow for reliable detection of reduced drug efficacy, (ii) to identify the UI methodology that yields the most reliable assessment of drug efficacy (coverage of TDE) and detection of reduced drug efficacy, and (iii) to determine the required sample size and number of eggs counted under the microscope that optimizes the detection of reduced efficacy. Our results confirm that the currently recommended criteria for classifying drug efficacy are the most appropriate. Additionally, the UI methodologies we tested varied in coverage and ability to detect reduced drug efficacy, thus a combination of UI methodologies is recommended to assess the uncertainty across all scenarios of drug efficacy estimates. Finally, based on our model estimates we were able to determine the required number of eggs to count for each sample size, enabling investigators to optimize the probability of correctly classifying a theoretical TDE while minimizing both financial and technical resources. Copyright © 2018 Elsevier B.V. All rights reserved.
Ng, Raymond; Lee, Chun Fan; Wong, Nan Soon; Luo, Nan; Yap, Yoon Sim; Lo, Soo Kien; Chia, Whay Kuang; Yee, Alethea; Krishna, Lalit; Goh, Cynthia; Cheung, Yin Bun
2012-01-01
The objective of the study was to examine the measurement properties of and comparability between the English and Chinese versions of the Functional Assessment of Cancer Therapy-Breast (FACT-B) in breast cancer patients in Singapore. This is an observational study of 271 Singaporean breast cancer patients. The known-group validity of FACT-B total score and Trial Outcome Index (TOI) were assessed in relation to performance status, evidence of disease, and treatment status cross-sectionally; responsiveness to change was assessed in relation to change in performance status longitudinally. Internal consistency and test-retest reliability were evaluated by the Cronbach's alpha and intraclass correlation coefficient (ICC), respectively. Multiple regression analyses were performed to compare the scores on the two language versions, adjusting for covariates. The FACT-B total score and TOI demonstrated known-group validity in differentiating patients with different clinical status. They showed high internal consistency and test-retest reliability, with Cronbach's alpha ranging from 0.87 to 0.91 and ICC ranging from 0.82 to 0.89. The English version was responsive to the change in performance status. The Chinese version was shown to be responsive to decline in performance status but the sample size of Chinese-speaking patients who improved in performance status was too small (N = 6) for conclusive analysis about responsiveness to improvement. Two items concerning sexuality had a high item non-response rate (50.2 and 14.4%). No practically significant difference was found in the total score and TOI between the two language versions despite minor differences in two of the 37 items. The English and Chinese versions of the FACT-B are valid, responsive, and reliable instruments in assessing health-related quality of life in breast cancer patients in Singapore. Data collected from the English and Chinese versions can be pooled and either version could be used for bilingual patients.
Chaudhry, Aafia; Benson, Laura; Varshaver, Michael; Farber, Ori; Weinberg, Uri; Kirson, Eilon; Palti, Yoram
2015-11-11
Optune™, previously known as the NovoTTF-100A System™, generates Tumor Treating Fields (TTFields), an effective anti-mitotic therapy for glioblastoma. The system delivers intermediate frequency, alternating electric fields to the supratentorial brain. Patient therapy is personalized by configuring transducer array layout placement on the scalp to the tumor site using MRI measurements and the NovoTAL System. Transducer array layout mapping optimizes therapy by maximizing electric field intensity to the tumor site. This study evaluated physician performance in conducting transducer array layout mapping using the NovoTAL System compared with mapping performed by the Novocure in-house clinical team. Fourteen physicians (7 neuro-oncologists, 4 medical oncologists, and 3 neurosurgeons) evaluated five blinded cases of recurrent glioblastoma and performed head size and tumor location measurements using a standard Digital Imaging and Communications in Medicine reader. Concordance with Novocure measurement and intra- and inter-rater reliability were assessed using relevant correlation coefficients. The study criterion for success was a concordance correlation coefficient (CCC) >0.80. CCC for each physician versus Novocure on 20 MRI measurements was 0.96 (standard deviation, SD ± 0.03, range 0.90-1.00), indicating very high agreement between the two groups. Intra- and inter-rater reliability correlation coefficients were similarly high: 0.83 (SD ±0.15, range 0.54-1.00) and 0.80 (SD ±0.18, range 0.48-1.00), respectively. This user study demonstrated an excellent level of concordance between prescribing physicians and Novocure in-house clinical teams in performing transducer array layout planning. Intra-rater reliability was very high, indicating reproducible performance. Physicians prescribing TTFields, when trained on the NovoTAL System, can independently perform transducer array layout mapping required for the initiation and maintenance of patients on TTFields therapy.
Some Aspects of the Failure Mechanisms in BaTiO3-Based Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, David Donhang; Sampson, Michael J.
2012-01-01
The objective of this presentation is to gain insight into possible failure mechanisms in BaTiO3-based ceramic capacitors that may be associated with the reliability degradation that accompanies a reduction in dielectric thickness, as reported by Intel Corporation in 2010. The volumetric efficiency (microF/cm3) of a multilayer ceramic capacitor (MLCC) has been shown to not increase limitlessly due to the grain size effect on the dielectric constant of ferroelectric ceramic BaTiO3 material. The reliability of an MLCC has been discussed with respect to its structure. The MLCCs with higher numbers of dielectric layers will pose more challenges for the reliability of dielectric material, which is the case for most base-metal-electrode (BME) capacitors. A number of MLCCs manufactured using both precious-metal-electrode (PME) and BME technology, with 25 V rating and various chip sizes and capacitances, were tested at accelerated stress levels. Most of these MLCCs had a failure behavior with two mixed failure modes: the well-known rapid dielectric wearout, and so-called 'early failures." The two failure modes can be distinguished when the testing data were presented and normalized at use-level using a 2-parameter Weibull plot. The early failures had a slope parameter of Beta >1, indicating that the early failures are not infant mortalities. Early failures are triggered due to external electrical overstress and become dominant as dielectric layer thickness decreases, accompanied by a dramatic reduction in reliability. This indicates that early failures are the main cause of the reliability degradation in MLCCs as dielectric layer thickness decreases. All of the early failures are characterized by an avalanche-like breakdown leakage current. The failures have been attributed to the extrinsic minor construction defects introduced during fabrication of the capacitors. A reliability model including dielectric thickness and extrinsic defect feature size is proposed in this presentation. The model can be used to explain the Intel-reported reliability degradation in MLCCs with respect to the reduction of dielectric thickness. It can also be used to estimate the reliability of a MLCC based on its construction and microstructure parameters such as dielectric thickness, average grain size, and number of dielectric layers. Measures for preventing early failures are also discussed in this document.
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
NASA Technical Reports Server (NTRS)
Kimmel, William M. (Technical Monitor); Bradley, Kevin R.
2004-01-01
This paper describes the development of a methodology for sizing Blended-Wing-Body (BWB) transports and how the capabilities of the Flight Optimization System (FLOPS) have been expanded using that methodology. In this approach, BWB transports are sized based on the number of passengers in each class that must fit inside the centerbody or pressurized vessel. Weight estimation equations for this centerbody structure were developed using Finite Element Analysis (FEA). This paper shows how the sizing methodology has been incorporated into FLOPS to enable the design and analysis of BWB transports. Previous versions of FLOPS did not have the ability to accurately represent or analyze BWB configurations in any reliable, logical way. The expanded capabilities allow the design and analysis of a 200 to 450-passenger BWB transport or the analysis of a BWB transport for which the geometry is already known. The modifications to FLOPS resulted in differences of less than 4 percent for the ramp weight of a BWB transport in this range when compared to previous studies performed by NASA and Boeing.
Technological advances in suspended-sediment surrogate monitoring
NASA Astrophysics Data System (ADS)
Gray, John R.; Gartner, Jeffrey W.
2009-04-01
Surrogate technologies to continuously monitor suspended sediment show promise toward supplanting traditional data collection methods requiring routine collection and analysis of water samples. Commercially available instruments operating on bulk optic (turbidity), laser optic, pressure difference, and acoustic backscatter principles are evaluated based on cost, reliability, robustness, accuracy, sample volume, susceptibility to biological fouling, and suitable range of mass concentration and particle size distribution. In situ turbidimeters are widely used. They provide reliable data where the point measurements can be reliably correlated to the river's mean cross section concentration value, effects of biological fouling can be minimized, and concentrations remain below the sensor's upper measurement limit. In situ laser diffraction instruments have similar limitations and can cost 6 times the approximate $5000 purchase price of a turbidimeter. However, laser diffraction instruments provide volumetric-concentration data in 32 size classes. Pressure differential instruments measure mass density in a water column, thus integrating substantially more streamflow than a point measurement. They are designed for monitoring medium-to-large concentrations, are generally unaffected by biological fouling, and cost about the same as a turbidimeter. However, their performance has been marginal in field applications. Acoustic Doppler profilers use acoustic backscatter to measure suspended sediment concentrations in orders of magnitude more streamflow than do instruments that rely on point measurements. The technology is relatively robust and generally immune to effects of biological fouling. Cost of a single-frequency device is about double that of a turbidimeter. Multifrequency arrays also provide the potential to resolve concentrations by clay silt versus sand size fractions. Multifrequency hydroacoustics shows the most promise for revolutionizing collection of continuous suspended sediment data by instruments that require only periodic calibration for correlation to mean concentrations in river cross sections. Broad application of proven suspended sediment surrogate technologies has the potential to revolutionize fluvial sediment monitoring. Once applied, benefits could be enormous, providing for safer, more frequent and consistent, arguably more accurate, and ultimately less expensive sediment data for managing the world's sedimentary resources.
Technological advances in suspended‐sediment surrogate monitoring
Gray, John R.; Gartner, Jeffrey W.
2009-01-01
Surrogate technologies to continuously monitor suspended sediment show promise toward supplanting traditional data collection methods requiring routine collection and analysis of water samples. Commercially available instruments operating on bulk optic (turbidity), laser optic, pressure difference, and acoustic backscatter principles are evaluated based on cost, reliability, robustness, accuracy, sample volume, susceptibility to biological fouling, and suitable range of mass concentration and particle size distribution. In situ turbidimeters are widely used. They provide reliable data where the point measurements can be reliably correlated to the river's mean cross section concentration value, effects of biological fouling can be minimized, and concentrations remain below the sensor's upper measurement limit. In situ laser diffraction instruments have similar limitations and can cost 6 times the approximate $5000 purchase price of a turbidimeter. However, laser diffraction instruments provide volumetric‐concentration data in 32 size classes. Pressure differential instruments measure mass density in a water column, thus integrating substantially more streamflow than a point measurement. They are designed for monitoring medium‐to‐large concentrations, are generally unaffected by biological fouling, and cost about the same as a turbidimeter. However, their performance has been marginal in field applications. Acoustic Doppler profilers use acoustic backscatter to measure suspended sediment concentrations in orders of magnitude more streamflow than do instruments that rely on point measurements. The technology is relatively robust and generally immune to effects of biological fouling. Cost of a single‐frequency device is about double that of a turbidimeter. Multifrequency arrays also provide the potential to resolve concentrations by clay silt versus sand size fractions. Multifrequency hydroacoustics shows the most promise for revolutionizing collection of continuous suspended sediment data by instruments that require only periodic calibration for correlation to mean concentrations in river cross sections. Broad application of proven suspended sediment surrogate technologies has the potential to revolutionize fluvial sediment monitoring. Once applied, benefits could be enormous, providing for safer, more frequent and consistent, arguably more accurate, and ultimately less expensive sediment data for managing the world's sedimentary resources.
Kabadayi, Can; Taylor, Lucy A; von Bayern, Auguste M P; Osvath, Mathias
2016-04-01
Overriding motor impulses instigated by salient perceptual stimuli represent a fundamental inhibitory skill. Such motor self-regulation facilitates more rational behaviour, as it brings economy into the bodily interaction with the physical and social world. It also underlies certain complex cognitive processes including decision making. Recently, MacLean et al. (MacLean et al. 2014 Proc. Natl Acad. Sci. USA 111, 2140-2148. (doi:10.1073/pnas.1323533111)) conducted a large-scale study involving 36 species, comparing motor self-regulation across taxa. They concluded that absolute brain size predicts level of performance. The great apes were most successful. Only a few of the species tested were birds. Given birds' small brain size-in absolute terms-yet flexible behaviour, their motor self-regulation calls for closer study. Corvids exhibit some of the largest relative avian brain sizes-although small in absolute measure-as well as the most flexible cognition in the animal kingdom. We therefore tested ravens, New Caledonian crows and jackdaws in the so-called cylinder task. We found performance indistinguishable from that of great apes despite the much smaller brains. We found both absolute and relative brain volume to be a reliable predictor of performance within Aves. The complex cognition of corvids is often likened to that of great apes; our results show further that they share similar fundamental cognitive mechanisms.
Llorens, Roberto; Latorre, Jorge; Noé, Enrique; Keshner, Emily A
2016-01-01
Posturography systems that incorporate force platforms are considered to assess balance and postural control with greater sensitivity and objectivity than conventional clinical tests. The Wii Balance Board (WBB) system has been shown to have similar performance characteristics as other force platforms, but with lower cost and size. To determine the validity and reliability of a freely available WBB-based posturography system that combined the WBB with several traditional balance assessments, and to assess the performance of a cohort of stroke individuals with respect to healthy individuals. Healthy subjects and individuals with stroke were recruited. Both groups were assessed using the WBB-based posturography system. Individuals with stroke were also assessed using a laboratory grade posturography system and a battery of clinical tests to determine the concurrent validity of the system. A group of subjects were assessed twice with the WBB-based system to determine its reliability. A total of 144 healthy individuals and 53 individuals with stroke participated in the study. Concurrent validity with another posturography system was moderate to high. Correlations with clinical scales were consistent with previous research. The reliability of the system was excellent in almost all measures. In addition, the system successfully characterized individuals with stroke with respect to the healthy population. The WBB-based posturography system exhibited excellent psychometric properties and sensitivity for identifying balance performance of individuals with stroke in comparison with healthy subjects, which supports feasibility of the system as a clinical tool. Copyright © 2015 Elsevier B.V. All rights reserved.
Epistemic belief structures within introductory astronomy
NASA Astrophysics Data System (ADS)
Johnson, Keith; Willoughby, Shannon D.
2018-06-01
The reliability and validity of inventories should be verified in multiple ways. Although the epistemological beliefs about the physical science survey (EBAPS) has been deemed to be reliable and valid by the authors, the axes or factor structure proposed by the authors has not been independently checked. Using data from a study sample we discussed in previous publications, we performed exploratory factor analysis on 1,258 post-test EBAPS surveys. The students in the sample were from an introductory Astronomy course at a mid-sized western university. Inspection suggested the use of either a three-factor model or a five-factor model. Each of the factors is interpreted and discussed, and the factors are compared to the axes proposed by the authors of the EBAPS. We find that the five-factor model extrapolated from our data partially overlaps with the model put forth by the authors of the EBAPS, and that many of the questions did not load onto any factors.
A Study on the Thermomechanical Reliability Risks of Through-Silicon-Vias in Sensor Applications
Shao, Shuai; Liu, Dapeng; Niu, Yuling; O’Donnell, Kathy; Sengupta, Dipak; Park, Seungbae
2017-01-01
Reliability risks for two different types of through-silicon-vias (TSVs) are discussed in this paper. The first is a partially-filled copper TSV, if which the copper layer covers the side walls and bottom. A polymer is used to fill the rest of the cavity. Stresses in risk sites are studied and ranked for this TSV structure by FEA modeling. Parametric studies for material properties (modulus and thermal expansion) of TSV polymer are performed. The second type is a high aspect ratio TSV filled by polycrystalline silicon (poly Si). Potential risks of the voids in the poly Si due to filling defects are studied. Fracture mechanics methods are utilized to evaluate the risk for two different assembly conditions: package assembled to printed circuit board (PCB) and package assembled to flexible substrate. The effect of board/substrate/die thickness and the size and location of the void are discussed. PMID:28208758
NASA Astrophysics Data System (ADS)
Hirasawa, Kazunori; Shoji, Nobuyuki; Kasahara, Masayuki; Matsumura, Kazuhiro; Shimizu, Kimiya
2016-05-01
This prospective randomized study compared test results of size modulation standard automated perimetry (SM-SAP) performed with the Octopus 600 and conventional SAP (C-SAP) performed with the Humphrey Field Analyzer (HFA) in glaucoma patients. Eighty-eight eyes of 88 glaucoma patients underwent SM-SAP and C-SAP tests with the Octopus 600 24-2 Dynamic and HFA 24-2 SITA-Standard, respectively. Fovea threshold, mean defect, and square loss variance of SM-SAP were significantly correlated with the corresponding C-SAP indices (P < 0.001). The false-positive rate was slightly lower, and false-negative rate slightly higher, with SM-SAP than C-SAP (P = 0.002). Point-wise threshold values obtained with SM-SAP were moderately to strongly correlated with those obtained with C-SAP (P < 0.001). The correlation coefficients of the central zone were significantly lower than those of the middle to peripheral zone (P = 0.031). The size and depth of the visual field (VF) defect were smaller (P = 0.039) and greater (P = 0.043), respectively, on SM-SAP than on C-SAP. Although small differences were observed in VF sensitivity in the central zone, the defect size and depth and the reliability indices between SM-SAP and C-SAP, global indices of the two testing modalities were well correlated.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
Simpson, V; Hughes, M; Wilkinson, J; Herrick, A L; Dinsdale, G
2018-03-01
Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm 2 and 17.6 ± 19.3 mm 2 , respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. © 2017, American College of Rheumatology.
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
Early Change in Stroke Size Performs Best in Predicting Response to Therapy.
Simpkins, Alexis Nétis; Dias, Christian; Norato, Gina; Kim, Eunhee; Leigh, Richard
2017-01-01
Reliable imaging biomarkers of response to therapy in acute stroke are needed. The final infarct volume and percent of early reperfusion have been used for this purpose. Early fluctuation in stroke size is a recognized phenomenon, but its utility as a biomarker for response to therapy has not been established. This study examined the clinical relevance of early change in stroke volume and compared it with the final infarct volume and percent of early reperfusion in identifying early neurologic improvement (ENI). Acute stroke patients, enrolled between 2013 and 2014 with serial magnetic resonance imaging (MRI) scans (pretreatment baseline, 2 h post, and 24 h post), who received thrombolysis were included in the analysis. Early change in stroke volume, infarct volume at 24 h on diffusion, and percent of early reperfusion were calculated from the baseline and 2 h MRI scans were compared. ENI was defined as ≥4 point decrease in National Institutes of Health Stroke Scales within 24 h. Logistic regression models and receiver operator characteristics analysis were used to compare the efficacy of 3 imaging biomarkers. Serial MRIs of 58 acute stroke patients were analyzed. Early change in stroke volume was significantly associated with ENI by logistic regression analysis (OR 0.93, p = 0.048) and remained significant after controlling for stroke size and severity (OR 0.90, p = 0.032). Thus, for every 1 mL increase in stroke volume, there was a 10% decrease in the odds of ENI, while for every 1 mL decrease in stroke volume, there was a 10% increase in the odds of ENI. Neither infarct volume at 24 h nor percent of early reperfusion were significantly associated with ENI by logistic regression. Receiver-operator characteristic analysis identified early change in stroke volume as the only biomarker of the 3 that performed significantly different than chance (p = 0.03). Early fluctuations in stroke size may represent a more reliable biomarker for response to therapy than the more traditional measures of final infarct volume and percent of early reperfusion. © 2017 S. Karger AG, Basel.
The reliability and stability of visual working memory capacity.
Xu, Z; Adam, K C S; Fang, X; Vogel, E K
2018-04-01
Because of the central role of working memory capacity in cognition, many studies have used short measures of working memory capacity to examine its relationship to other domains. Here, we measured the reliability and stability of visual working memory capacity, measured using a single-probe change detection task. In Experiment 1, the participants (N = 135) completed a large number of trials of a change detection task (540 in total, 180 each of set sizes 4, 6, and 8). With large numbers of both trials and participants, reliability estimates were high (α > .9). We then used an iterative down-sampling procedure to create a look-up table for expected reliability in experiments with small sample sizes. In Experiment 2, the participants (N = 79) completed 31 sessions of single-probe change detection. The first 30 sessions took place over 30 consecutive days, and the last session took place 30 days later. This unprecedented number of sessions allowed us to examine the effects of practice on stability and internal reliability. Even after much practice, individual differences were stable over time (average between-session r = .76).
Solid Insulated Switchgear and Investigation of its Mechanical and Electrical Reliability
NASA Astrophysics Data System (ADS)
Sato, Junichi; Kinoshita, Susumu; Sakaguchi, Osamu; Miyagawa, Masaru; Shimizu, Toshio; Homma, Mitsutaka
SF6 gas is applied widely to medium voltage switchgear because of its high insulation reliability and down-sizing ability. However, SF6 gas was placed on the list of greenhouse gases under the Kyoto Protocol in 1997. Since then, the investigation and development concerning SF6-free or less has carried out activity. Therefore, we paid attention to the solid material which has higher dielectric strength than SF6, and we have newly developed solid insulated switchgear (SIS) achieved by molding all main circuit. A new epoxy casting material is applied, which contains a great deal of spherical silica and a small amount of rubber particles. This new material has the high mechanical strength, high thermal resistance, high toughness, and also high dielectric strength because of directly molding the vacuum bottle, down-sizing and reliability. This paper describes about the technology of a new epoxy casting material which achieves the SIS. In addition, the mechanical and electrical reliability test of SIS applied a new epoxy resin are carried out, and effectiveness of the development material and the mechanical and electrical reliability of SIS are verified.
NASA Astrophysics Data System (ADS)
Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward
2017-03-01
Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multitime scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability and pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end of period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea is useful to develop a real-world perspective and needs a suitable institutional environment.
Stitch-bond parallel-gap welding for IC circuits
NASA Technical Reports Server (NTRS)
Chvostal, P.; Tuttle, J.; Vanderpool, R.
1980-01-01
Stitch-bonded flatpacks are superior to soldered dual-in-lines where size, weight, and reliability are important. Results should interest designers of packaging for complex high-reliability electronics, such as that used in security systems, industrial process control, and vehicle electronics.
Do team processes really have an effect on clinical performance? A systematic literature review.
Schmutz, J; Manser, T
2013-04-01
There is a growing literature on the relationship between team processes and clinical performance. The purpose of this review is to summarize these articles and examine the impact of team process behaviours on clinical performance. We conducted a literature search in five major databases. Inclusion criteria were: English peer-reviewed papers published between January 2001 and May 2012, which showed or tried to show (i) a statistical relationship of a team process variable and clinical performance or (ii) an improvement of a performance variable through a team process intervention. Study quality was assessed using predefined quality indicators. For every study, we calculated the relevant effect sizes. We included 28 studies in the review, seven of which were intervention studies. Every study reported at least one significant relationship between team processes or an intervention and performance. Also, some non-significant effects were reported. Most of the reported effect sizes were large or medium. The study quality ranged from medium to high. The studies are highly diverse regarding the specific team process behaviours investigated and also regarding the methods used. However, they suggest that team process behaviours do influence clinical performance and that training results in increased performance. Future research should rely on existing theoretical frameworks, valid, and reliable methods to assess processes such as teamwork or coordination and focus on the development of adequate tools to assess process performance, linking them with outcomes in the clinical setting.
High-reliability computing for the smarter planet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Graham, Paul; Manuzzato, Andrea
2010-01-01
The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities ofmore » inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary. Already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.« less
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.
Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
Application of a novel new multispectral nanoparticle tracking technique
NASA Astrophysics Data System (ADS)
McElfresh, Cameron; Harrington, Tyler; Vecchio, Kenneth S.
2018-06-01
Fast, reliable, and accurate particle size analysis techniques must meet the demands of evolving industrial and academic research in areas of functionalized nanoparticle synthesis, advanced materials development, and other nanoscale enabled technologies. In this study a new multispectral particle tracking analysis (m-PTA) technique enabled by the ViewSizer™ 3000 (MANTA Instruments, USA) was evaluated using solutions of monomodal and multimodal gold and polystyrene latex nanoparticles, as well as a spark eroded polydisperse 316L stainless steel nanopowder, and large (non-Brownian) borosilicate particles. It was found that m-PTA performed comparably to the DLS in evaluation of monomodal particle size distributions. When measuring bimodal, trimodal and polydisperse solutions, the m-PTA technique overwhelmingly outperformed traditional dynamic light scattering (DLS) in both peak detection and relative particle concentration analysis. It was also observed that the m-PTA technique is less susceptible to large particle overexpression errors. The ViewSizer™ 3000 was also found to be successful in accurately evaluating sizes and concentrations of monomodal and bimodal sinking borosilicate particles.
Microfluidic model experiments on the injectability of monoclonal antibody solutions
NASA Astrophysics Data System (ADS)
Duchene, Charles; Filipe, Vasco; Nakach, Mostafa; Huille, Sylvain; Lindner, Anke
2017-11-01
Autoinjection devices that allow patients to self-administer medicine are becoming used more frequently; however, this advance comes with an increased need for precision in the injection process. The rare occurrence of protein aggregates in solutions of monoclonal antibodies constitutes a threat to the reliability of such devices. Here we study the flow of protein solutions containing aggregates in microfluidic model systems, mimicking injection devices, to gain fundamental understanding of the catastrophic clogging of constrictions of given size. We form aggregates by mechanically shaking or heating antibody solutions and then inject these solutions into microfluidic channels with varying types of constrictions. Geometrical clogging occurs when aggregates reach the size of the constriction and can in some cases be undone by increasing the applied pressure. We perform systematic experiments varying the relative aggregate size and the flow rate or applied pressure. The mechanical deformation of aggregates during their passage through constrictions is investigated to gain a better understanding of the clogging and unclogging mechanisms.
Wallwork, Tracy L; Hides, Julie A; Stanton, Warren R
2007-10-01
Within-session intrarater and interrater reliability study. To establish the intrarater and interrater reliability of thickness measurements of the multifidus muscle in a parasagittal plane, conducted by an experienced ultrasound operator and a novice assessor. There is considerable evidence for the important role of the multifidus muscle in segmental stabilization of the lumbar spine. The cross-sectional area of the multifidus muscle has been assessed in healthy subjects and patients with low back pain using real-time ultrasound imaging. However, few studies have measured the thickness of the multifidus muscle using a parasagittal view. The thickness of the multifidus muscle was measured at rest, using real-time ultrasound imaging, in 10 subjects without a history of low back pain, at the levels of the L2-3 and L4-5 zygapophyseal joints. The measure was carried out 3 times at each level by 2 assessors (1 experienced, 1 novice). Intrarater (model 3) and interrater (model 2) reliability was assessed by calculation of an F statistic (analysis of variance), the intraclass correlation coefficient (ICC), and the standard error of measurement (SEM). On the basis of an average of 3 trials, the 2 operators showed very high interrater agreement on the measurement of thicknesses at the L2-3 level (ICC2,3 = 0.96; 95% CI: 0.84 to 0.99) and the L4-5 vertebral level (ICC2,3 = 0.97; 95% CI: 0.87 to 0.99), with no systematic differences in muscle size across operators (P > .05). Interrater reliability was relatively lower for the L2-3 level (ICC2,1 = 0.85; 95% CI: 0.51 to 0.96) than the L4-5 level (ICC2,1 = 0.87; 95% CI: 0.52 to 0.97) when a single trial per rater was used, but these values still indicated a high level of agreement. In addition, the novice and experienced operator produced reliable intrarater measurements at L2-3 (ICC3,1 = 0.89; 95% CI: 0.72 to 0.97 and 0.94; 95% CI: 0.86 to 0.99) and at L4-5 (ICC3,1 = 0.88; 95% CI: 0.68 to 0.97 and 0.95; 95% CI: 0.86 to 0.99), with no systematic differences in muscle size across trials (P > .05). The consistently low SEM values also indicate low measurement error. A novice and an experienced assessor were both able to reliably perform this measure at rest for 2 vertebral levels using real-time ultrasound imaging. An average of 3 trials produced higher interrater reliability scores, though using a single trial per rater was also reliable.
Do Multiple-Choice Options Inflate Estimates of Vocabulary Size on the VST?
ERIC Educational Resources Information Center
Stewart, Jeffrey
2014-01-01
Validated under a Rasch framework (Beglar, 2010), the Vocabulary Size Test (VST) (Nation & Beglar, 2007) is an increasingly popular measure of decontextualized written receptive vocabulary size in the field of second language acquisition. However, although the validation indicates that the test has high internal reliability, still unaddressed…
Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems
NASA Technical Reports Server (NTRS)
Shu, Wei Wennie; Porter, John
2000-01-01
The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.
Elsebaie, H B; Dannawi, Z; Altaf, F; Zaidan, A; Al Mukhtar, M; Shaw, M J; Gibson, A; Noordeen, H
2016-02-01
The achievement of shoulder balance is an important measure of successful scoliosis surgery. No previously described classification system has taken shoulder balance into account. We propose a simple classification system for AIS based on two components which include the curve type and shoulder level. Altogether, three curve types have been defined according to the size and location of the curves, each curve pattern is subdivided into type A or B depending on the shoulder level. This classification was tested for interobserver reproducibility and intraobserver reliability. A retrospective analysis of the radiographs of 232 consecutive cases of AIS patients treated surgically between 2005 and 2009 was also performed. Three major types and six subtypes were identified. Type I accounted for 30 %, type II 28 % and type III 42 %. The retrospective analysis showed three patients developed a decompensation that required extension of the fusion. One case developed worsening of shoulder balance requiring further surgery. This classification was tested for interobserver and intraobserver reliability. The mean kappa coefficients for interobserver reproducibility ranged from 0.89 to 0.952, while the mean kappa value for intraobserver reliability was 0.964 indicating a good-to-excellent reliability. The treatment algorithm guides the spinal surgeon to achieve optimal curve correction and postoperative shoulder balance whilst fusing the smallest number of spinal segments. The high interobserver reproducibility and intraobserver reliability makes it an invaluable tool to describe scoliosis curves in everyday clinical practice.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
Huang, Emily; Chern, Hueylan; O'Sullivan, Patricia; Cook, Brian; McDonald, Erik; Palmer, Barnard; Liu, Terrence; Kim, Edward
2014-10-01
Knot tying is a fundamental and crucial surgical skill. We developed a kinesthetic pedagogical approach that increases precision and economy of motion by explicitly teaching suture-handling maneuvers and studied its effects on novice performance. Seventy-four first-year medical students were randomized to learn knot tying via either the traditional or the novel "kinesthetic" method. After 1 week of independent practice, students were videotaped performing 4 tying tasks. Three raters scored deidentified videos using a validated visual analog scale. The groups were compared using analysis of covariance with practice knots as a covariate and visual analog scale score (range, 0 to 100) as the dependent variable. Partial eta-square was calculated to indicate effect size. Overall rater reliability was .92. The kinesthetic group scored significantly higher than the traditional group for individual tasks and overall, controlling for practice (all P < .004). The kinesthetic overall mean was 64.15 (standard deviation = 16.72) vs traditional 46.31 (standard deviation = 16.20; P < .001; effect size = .28). For novices, emphasizing kinesthetic suture handling substantively improved performance on knot tying. We believe this effect can be extrapolated to more complex surgical skills. Copyright © 2014 Elsevier Inc. All rights reserved.
New perspective on single-radiator multiple-port antennas for adaptive beamforming applications
Choo, Hosung
2017-01-01
One of the most challenging problems in recent antenna engineering fields is to achieve highly reliable beamforming capabilities in an extremely restricted space of small handheld devices. In this paper, we introduce a new perspective on single-radiator multiple-port (SRMP) antenna to alter the traditional approach of multiple-antenna arrays for improving beamforming performances with reduced aperture sizes. The major contribution of this paper is to demonstrate the beamforming capability of the SRMP antenna for use as an extremely miniaturized front-end component in more sophisticated beamforming applications. To examine the beamforming capability, the radiation properties and the array factor of the SRMP antenna are theoretically formulated for electromagnetic characterization and are used as complex weights to form adaptive array patterns. Then, its fundamental performance limits are rigorously explored through enumerative studies by varying the dielectric constant of the substrate, and field tests are conducted using a beamforming hardware to confirm the feasibility. The results demonstrate that the new perspective of the SRMP antenna allows for improved beamforming performances with the ability of maintaining consistently smaller aperture sizes compared to the traditional multiple-antenna arrays. PMID:29023493
A standard for test reliability in group research.
Ellis, Jules L
2013-03-01
Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7 % of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.
Griessenauer, Christoph J; Foreman, Paul; Shoja, Mohammadali M; Kicielinski, Kimberly P; Deveikis, John P; Walters, Beverly C; Harrigan, Mark R
2015-04-01
Traumatic aneurysms occur in up to 20% of blunt traumatic extracranial carotid artery injuries. Currently there is no standardized method for characterization of traumatic aneurysms. For the carotid and vertebral injury study (CAVIS), a prospective study of traumatic cerebrovascular injury, we established a method for aneurysm characterization and tested its reliability. Saccular aneurysm size was defined as the greatest linear distance between the expected location of the normal artery wall and the outer edge of the aneurysm lumen ("depth"). Fusiform aneurysm size was defined as the "depth" and longitudinal distance ("length") paralleling the normal artery. The size of the aneurysm relative to the normal artery was also assessed. Reliability measurements were made using four raters who independently reviewed 15 computed tomographic angiograms (CTAs) and 13 digital subtraction angiograms (DSAs) demonstrating a traumatic aneurysm of the internal carotid artery. Raters categorized the aneurysms as either "saccular" or "fusiform" and made measurements. Five scans of each imaging modality were repeated to evaluate intra-rater reliability. Fleiss's free-marginal multi-rater kappa (κ), Cohen's kappa (κ), and interclass correlation coefficient (ICC) determined inter- and intra-rater reliability. Inter-rater agreement as to the aneurysm "shape" was almost perfect for CTA (κ = 0.82) and DSA (κ = 0.897). Agreements on aneurysm "depth," "length," "aneurysm plus parent artery," and "parent artery" for CTA and DSA were excellent (ICC > 0.75). Intra-rater agreement as to aneurysm "shape" was substantial to almost perfect (κ > 0.60). The CAVIS method of traumatic aneurysm characterization has remarkable inter- and intra-rater reliability and will facilitate further studies of the natural history and management of extracranial cerebrovascular traumatic aneurysms. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Nelson, Lindsay D; LaRoche, Ashley A; Pfaller, Adam Y; Lerner, E Brooke; Hammeke, Thomas A; Randolph, Christopher; Barr, William B; Guskiewicz, Kevin; McCrea, Michael A
2016-01-01
Limited data exist comparing the performance of computerized neurocognitive tests (CNTs) for assessing sport-related concussion. We evaluated the reliability and validity of three CNTs-ANAM, Axon Sports/Cogstate Sport, and ImPACT-in a common sample. High school and collegiate athletes completed two CNTs each at baseline. Concussed (n=165) and matched non-injured control (n=166) subjects repeated testing within 24 hr and at 8, 15, and 45 days post-injury. Roughly a quarter of each CNT's indices had stability coefficients (M=198 day interval) over .70. Group differences in performance were mostly moderate to large at 24 hr and small by day 8. The sensitivity of reliable change indices (RCIs) was best at 24 hr (67.8%, 60.3%, and 47.6% with one or more significant RCIs for ImPACT, Axon, and ANAM, respectively) but diminished to near the false positive rates thereafter. Across time, the CNTs' sensitivities were highest in those athletes who became asymptomatic within 1 day before neurocognitive testing but was similar to the tests' false positive rates when including athletes who became asymptomatic several days earlier. Test-retest reliability was similar among these three CNTs and below optimal standards for clinical use on many subtests. Analyses of group effect sizes, discrimination, and sensitivity and specificity suggested that the CNTs may add incrementally (beyond symptom scores) to the identification of clinical impairment within 24 hr of injury or within a short time period after symptom resolution but do not add significant value over symptom assessment later. The rapid clinical recovery course from concussion and modest stability probably jointly contribute to limited signal detection capabilities of neurocognitive tests outside a brief post-injury window. (JINS, 2016, 22, 24-37).
Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.
2012-01-01
Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm2 areas and ≥2% in ∼20 mm2 areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified. PMID:22894421
Nelson, Lindsay D.; LaRoche, Ashley A.; Pfaller, Adam Y.; Lerner, E. Brooke; Hammeke, Thomas A.; Randolph, Christopher; Barr, William B.; Guskiewicz, Kevin; McCrea, Michael A.
2016-01-01
Limited data exist comparing the performance of computerized neurocognitive tests (CNTs) for assessing sport-related concussion. We evaluated the reliability and validity of three CNTs—ANAM, Axon Sports/Cogstate Sport, and ImPACT—in a common sample. High school and collegiate athletes completed two CNTs each at baseline. Concussed (n = 165) and matched non-injured control (n = 166) subjects repeated testing within 24 hr and at 8, 15, and 45 days post-injury. Roughly a quarter of each CNT's indices had stability coefficients (M = 198 day interval) over .70. Group differences in performance were mostly moderate to large at 24 hr and small by day 8. The sensitivity of reliable change indices (RCIs) was best at 24 hr (67.8%, 60.3%, and 47.6% with one or more significant RCIs for ImPACT, Axon, and ANAM, respectively) but diminished to near the false positive rates thereafter. Across time, the CNTs' sensitivities were highest in those athletes who became asymptomatic within 1 day before neurocognitive testing but was similar to the tests' false positive rates when including athletes who became asymptomatic several days earlier. Test–retest reliability was similar among these three CNTs and below optimal standards for clinical use on many subtests. Analyses of group effect sizes, discrimination, and sensitivity and specificity suggested that the CNTs may add incrementally (beyond symptom scores) to the identification of clinical impairment within 24 hr of injury or within a short time period after symptom resolution but do not add significant value over symptom assessment later. The rapid clinical recovery course from concussion and modest stability probably jointly contribute to limited signal detection capabilities of neurocognitive tests outside a brief post-injury window. PMID:26714883
Overview of the Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Dillman, Robert; Corliss, James
2008-01-01
NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.
A Taxonomy of Accountable Care Organizations for Policy and Practice
Shortell, Stephen M; Wu, Frances M; Lewis, Valerie A; Colla, Carrie H; Fisher, Elliott S
2014-01-01
Objective To develop an exploratory taxonomy of Accountable Care Organizations (ACOs) to describe and understand early ACO development and to provide a basis for technical assistance and future evaluation of performance. Data Sources/Study Setting Data from the National Survey of Accountable Care Organizations, fielded between October 2012 and May 2013, of 173 Medicare, Medicaid, and commercial payer ACOs. Study Design Drawing on resource dependence and institutional theory, we develop measures of eight attributes of ACOs such as size, scope of services offered, and the use of performance accountability mechanisms. Data are analyzed using a two-step cluster analysis approach that accounts for both continuous and categorical data. Principal Findings We identified a reliable and internally valid three-cluster solution: larger, integrated systems that offer a broad scope of services and frequently include one or more postacute facilities; smaller, physician-led practices, centered in primary care, and that possess a relatively high degree of physician performance management; and moderately sized, joint hospital–physician and coalition-led groups that offer a moderately broad scope of services with some involvement of postacute facilities. Conclusions ACOs can be characterized into three distinct clusters. The taxonomy provides a framework for assessing performance, for targeting technical assistance, and for diagnosing potential antitrust violations. PMID:25251146
Determining chewing efficiency using a solid test food and considering all phases of mastication.
Liu, Ting; Wang, Xinmiao; Chen, Jianshe; van der Glas, Hilbert W
2018-07-01
Following chewing a solid food, the median particle size, X 50 , is determined after N chewing cycles, by curve-fitting of the particle size distribution. Reduction of X 50 with N is traditionally followed from N ≥ 15-20 cycles when using the artificial test food Optosil ® , because of initially unreliable values of X 50 . The aims of the study were (i) to enable testing at small N-values by using initial particles of appropriate size, shape and amount, and (ii) to compare measures of chewing ability, i.e. chewing efficiency (N needed to halve the initial particle size, N(1/2-Xo)) and chewing performance (X 50 at a particular N-value, X 50,N ). 8 subjects with a natural dentition chewed 4 types of samples of Optosil particles: (1) 8 cubes of 8 mm, border size relative to bin size (traditional test), (2) 9 half-cubes of 9.6 mm, mid-size; similar sample volume, (3) 4 half-cubes of 9.6 mm, and 2 half-cubes of 9.6 mm; reduced particle number and sample volume. All samples were tested with 4 N-values. Curve-fitting with a 2nd order polynomial function yielded log(X 50 )-log(N) relationships, after which N(1/2-Xo) and X 50,N were obtained. Reliable X 50 -values are obtained for all N-values when using half-cubes with a mid-size relative to bin sizes. By using 2 or 4 half-cubes, determination of N(1/2-Xo) or X 50,N needs less chewing cycles than traditionally. Chewing efficiency is preferable over chewing performance because of a comparison of inter-subject chewing ability at the same stage of food comminution and constant intra-subject and inter-subject ratios between and within samples respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.
The reliability of in-training assessment when performance improvement is taken into account.
van Lohuizen, Mirjam T; Kuks, Jan B M; van Hell, Elisabeth A; Raat, A N; Stewart, Roy E; Cohen-Schotanus, Janke
2010-12-01
During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When taking performance improvement into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying reliability of in-training assessment, performance improvement should be considered.
Sharenko, Alexander; Toney, Michael F
2016-01-20
Solution-processed lead halide perovskite thin-film solar cells have achieved power conversion efficiencies comparable to those obtained with several commercial photovoltaic technologies in a remarkably short period of time. This rapid rise in device efficiency is largely the result of the development of fabrication protocols capable of producing continuous, smooth perovskite films with micrometer-sized grains. Further developments in film fabrication and morphological control are necessary, however, in order for perovskite solar cells to reliably and reproducibly approach their thermodynamic efficiency limit. This Perspective discusses the fabrication of lead halide perovskite thin films, while highlighting the processing-property-performance relationships that have emerged from the literature, and from this knowledge, suggests future research directions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orendorff, Christopher J.; Nagasubramanian, Ganesan; Fenton, Kyle R.
As lithium-ion battery technologies mature, the size and energy of these systems continues to increase (> 50 kWh for EVs); making safety and reliability of these high energy systems increasingly important. While most material advances for lithium-ion chemistries are directed toward improving cell performance (capacity, energy, cycle life, etc.), there are a variety of materials advancements that can be made to improve lithium-ion battery safety. Issues including energetic thermal runaway, electrolyte decomposition and flammability, anode SEI stability, and cell-level abuse tolerance continue to be critical safety concerns. This report highlights work with our collaborators to develop advanced materials to improvemore » lithium-ion battery safety and abuse tolerance and to perform cell-level characterization of new materials.« less
Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S
2016-04-01
Psoriasis is an autoimmune skin disease with red and scaly plaques on skin and affecting about 125 million people worldwide. Currently, dermatologist use visual and haptic methods for diagnosis the disease severity. This does not help them in stratification and risk assessment of the lesion stage and grade. Further, current methods add complexity during monitoring and follow-up phase. The current diagnostic tools lead to subjectivity in decision making and are unreliable and laborious. This paper presents a first comparative performance study of its kind using principal component analysis (PCA) based CADx system for psoriasis risk stratification and image classification utilizing: (i) 11 higher order spectra (HOS) features, (ii) 60 texture features, and (iii) 86 color feature sets and their seven combinations. Aggregate 540 image samples (270 healthy and 270 diseased) from 30 psoriasis patients of Indian ethnic origin are used in our database. Machine learning using PCA is used for dominant feature selection which is then fed to support vector machine classifier (SVM) to obtain optimized performance. Three different protocols are implemented using three kinds of feature sets. Reliability index of the CADx is computed. Among all feature combinations, the CADx system shows optimal performance of 100% accuracy, 100% sensitivity and specificity, when all three sets of feature are combined. Further, our experimental result with increasing data size shows that all feature combinations yield high reliability index throughout the PCA-cutoffs except color feature set and combination of color and texture feature sets. HOS features are powerful in psoriasis disease classification and stratification. Even though, independently, all three set of features HOS, texture, and color perform competitively, but when combined, the machine learning system performs the best. The system is fully automated, reliable and accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
An assessment of alternative fuel cell designs for residential and commercial cogeneration
NASA Technical Reports Server (NTRS)
Wakefield, R. A.
1980-01-01
A comparative assessment of three fuel cell systems for application in different buildings and geographic locations is presented. The study was performed at the NASA Lewis Center and comprised the fuel cell design, performance in different conditions, and the economic parameters. Applications in multifamily housing, stores and hospitals were considered, with a load of 10kW-1 MW. Designs were traced through system sizing, simulation/evaluation, and reliability analysis, and a computer simulation based on a fourth-order representation of a generalized system was performed. The cells were all phosphoric acid type cells, and were found to be incompatible with gas/electric systems and more favorable economically than the gas/electric systems in hospital uses. The methodology used provided an optimized energy-use pattern and minimized back-up system turn-on.
Qin, Feng; Huang, Jun; Qiu, Xinjian; Hu, Sihang; Huang, Xi
2011-01-01
A simple, sensitive, and reliable ultra-performance liquid chromatography (UPLC) method has been developed for simultaneous determination of 22 major constituents in modified xiaoyao san (MXS), a multiherbal formula. The chromatographic separation was performed on an ACQUITY UPLC BEH C18 column (150 x 2.1 mm, 1.7 microm, particle size), with an aqueous 0.5% acetic acid and acetonitrile mobile phase gradient. The method was validated for linearity (r2 >0.9937), intraday and interday precision (RSD <8.51%), recovery (91.18-107.73%), LOD (0.02-4.17 ng/mL), and LOQ (0.05-12.50 ng/mL). The established method was successfully applied to quantify the 22 marker compounds in MXS, which provided a useful basis of overall evaluation of the quality of MXS.
Improved Protocols for Illumina Sequencing
Bronner, Iraad F.; Quail, Michael A.; Turner, Daniel J.; Swerdlow, Harold
2013-01-01
In this unit, we describe a set of improvements we have made to the standard Illumina protocols to make the sequencing process more reliable in a high-throughput environment, reduce amplification bias, narrow the distribution of insert sizes, and reliably obtain high yields of data. PMID:19582764
A stochastic method for stand-alone photovoltaic system sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabral, Claudia Valeria Tavora; Filho, Delly Oliveira; Martins, Jose Helvecio
Photovoltaic systems utilize solar energy to generate electrical energy to meet load demands. Optimal sizing of these systems includes the characterization of solar radiation. Solar radiation at the Earth's surface has random characteristics and has been the focus of various academic studies. The objective of this study was to stochastically analyze parameters involved in the sizing of photovoltaic generators and develop a methodology for sizing of stand-alone photovoltaic systems. Energy storage for isolated systems and solar radiation were analyzed stochastically due to their random behavior. For the development of the methodology proposed stochastic analysis were studied including the Markov chainmore » and beta probability density function. The obtained results were compared with those for sizing of stand-alone using from the Sandia method (deterministic), in which the stochastic model presented more reliable values. Both models present advantages and disadvantages; however, the stochastic one is more complex and provides more reliable and realistic results. (author)« less
Simulation analyses of space use: Home range estimates, variability, and sample size
Bekoff, Marc; Mech, L. David
1984-01-01
Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should ascertain between 100 and 200 locations in order to estimate reliably home range area. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow for a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also have to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Maness, Michael; Dykes, Katherine
Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation andmore » maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator. Preliminary analysis of leveled costs of energy indicate that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents for building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident, yet, variability in magnet prices and solutions to address reliability issues associated with gearing and brushes can change this outlook. This suggests the need to potentially pursue fundamentally new innovations in generator designs that help avoid high capital costs but still have significant reliability related to performance.« less
Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Maness, Michael; Dykes, Katherine
Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation andmore » maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator. Preliminary analysis of leveled costs of energy indicate that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents for building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident, yet, variability in magnet prices and solutions to address reliability issues associated with gearing and brushes can change this outlook. This suggests the need to potentially pursue fundamentally new innovations in generator designs that help avoid high capital costs but still have significant reliability related to performance.« less
Exergy analysis of large-scale helium liquefiers: Evaluating design trade-offs
NASA Astrophysics Data System (ADS)
Thomas, Rijo Jacob; Ghosh, Parthasarathi; Chowdhury, Kanchan
2014-01-01
It is known that higher heat exchanger area, more number of expanders with higher efficiency and more involved configuration with multi-pressure compression system increase the plant efficiency of a helium liquefier. However, they involve higher capital investment and larger size. Using simulation software Aspen Hysys v 7.0 and exergy analysis as the tool of analysis, authors have attempted to identify various trade-offs while selecting the number of stages, the pressure levels in compressor, the cold-end configuration, the heat exchanger surface area, the maximum allowable pressure drop in heat exchangers, the efficiency of expanders, the parallel/series connection of expanders etc. Use of more efficient cold ends reduces the number of refrigeration stages and the size of the plant. For achieving reliability along with performance, a configuration with a combination of expander and Joule-Thomson valve is found to be a better choice for cold end. Use of multi-pressure system is relevant only when the number of refrigeration stages is more than 5. Arrangement of expanders in series reduces the number of expanders as well as the heat exchanger size with slight expense of plant efficiency. Superior heat exchanger (having less pressure drop per unit heat transfer area) results in only 5% increase of plant performance even when it has 100% higher heat exchanger surface area.
NASA Astrophysics Data System (ADS)
Babick, Frank; Mielke, Johannes; Wohlleben, Wendel; Weigel, Stefan; Hodoroaba, Vasile-Dan
2016-06-01
Currently established and projected regulatory frameworks require the classification of materials (whether nano or non-nano) as specified by respective definitions, most of which are based on the size of the constituent particles. This brings up the question if currently available techniques for particle size determination are capable of reliably classifying materials that potentially fall under these definitions. In this study, a wide variety of characterisation techniques, including counting, fractionating, and spectroscopic techniques, has been applied to the same set of materials under harmonised conditions. The selected materials comprised well-defined quality control materials (spherical, monodisperse) as well as industrial materials of complex shapes and considerable polydispersity. As a result, each technique could be evaluated with respect to the determination of the number-weighted median size. Recommendations on the most appropriate and efficient use of techniques for different types of material are given.
Body size shifts and early warning signals precede the historic collapse of whale stocks.
Clements, Christopher F; Blanchard, Julia L; Nash, Kirsty L; Hindell, Mark A; Ozgul, Arpat
2017-06-22
Predicting population declines is a key challenge in the face of global environmental change. Abundance-based early warning signals have been shown to precede population collapses; however, such signals are sensitive to the low reliability of abundance estimates. Here, using historical data on whales harvested during the 20th century, we demonstrate that early warning signals can be present not only in the abundance data, but also in the more reliable body size data of wild populations. We show that during the period of commercial whaling, the mean body size of caught whales declined dramatically (by up to 4 m over a 70-year period), leading to early warning signals being detectable up to 40 years before the global collapse of whale stocks. Combining abundance and body size data can reduce the length of the time series required to predict collapse, and decrease the chances of false positive early warning signals.
NASA Astrophysics Data System (ADS)
Taylor, John R.; Stolz, Christopher J.
1993-08-01
Laser system performance and reliability depends on the related performance and reliability of the optical components which define the cavity and transport subsystems. High-average-power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.
NASA Astrophysics Data System (ADS)
Taylor, J. R.; Stolz, C. J.
1992-12-01
Laser system performance and reliability depends on the related performance and reliability of the optical components which define the cavity and transport subsystems. High-average-power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.
Test-retest reliability of evoked BOLD signals from a cognitive-emotive fMRI test battery.
Plichta, Michael M; Schwarz, Adam J; Grimm, Oliver; Morgen, Katrin; Mier, Daniela; Haddad, Leila; Gerdes, Antje B M; Sauer, Carina; Tost, Heike; Esslinger, Christine; Colman, Peter; Wilson, Frederick; Kirsch, Peter; Meyer-Lindenberg, Andreas
2012-04-15
Even more than in cognitive research applications, moving fMRI to the clinic and the drug development process requires the generation of stable and reliable signal changes. The performance characteristics of the fMRI paradigm constrain experimental power and may require different study designs (e.g., crossover vs. parallel groups), yet fMRI reliability characteristics can be strongly dependent on the nature of the fMRI task. The present study investigated both within-subject and group-level reliability of a combined three-task fMRI battery targeting three systems of wide applicability in clinical and cognitive neuroscience: an emotional (face matching), a motivational (monetary reward anticipation) and a cognitive (n-back working memory) task. A group of 25 young, healthy volunteers were scanned twice on a 3T MRI scanner with a mean test-retest interval of 14.6 days. FMRI reliability was quantified using the intraclass correlation coefficient (ICC) applied at three different levels ranging from a global to a localized and fine spatial scale: (1) reliability of group-level activation maps over the whole brain and within targeted regions of interest (ROIs); (2) within-subject reliability of ROI-mean amplitudes and (3) within-subject reliability of individual voxels in the target ROIs. Results showed robust evoked activation of all three tasks in their respective target regions (emotional task=amygdala; motivational task=ventral striatum; cognitive task=right dorsolateral prefrontal cortex and parietal cortices) with high effect sizes (ES) of ROI-mean summary values (ES=1.11-1.44 for the faces task, 0.96-1.43 for the reward task, 0.83-2.58 for the n-back task). Reliability of group level activation was excellent for all three tasks with ICCs of 0.89-0.98 at the whole brain level and 0.66-0.97 within target ROIs. Within-subject reliability of ROI-mean amplitudes across sessions was fair to good for the reward task (ICCs=0.56-0.62) and, dependent on the particular ROI, also fair-to-good for the n-back task (ICCs=0.44-0.57) but lower for the faces task (ICC=-0.02-0.16). In conclusion, all three tasks are well suited to between-subject designs, including imaging genetics. When specific recommendations are followed, the n-back and reward task are also suited for within-subject designs, including pharmaco-fMRI. The present study provides task-specific fMRI reliability performance measures that will inform the optimal use, powering and design of fMRI studies using comparable tasks. Copyright © 2012 Elsevier Inc. All rights reserved.
Keilmann, Annerose; Konerding, Uwe; Oberherr, Constantin; Nawka, Tadeus
2016-12-01
Structural, neurological and muscular diseases can lead to impairments of articulation. These impairments can severely impact social life. To judge health status comprehensively, this impact must be adequately quantified. For this purpose, the articulation handicap index (AHI) has been developed. Psychometric analyses referring to this index are presented here. The AHI was completed by 113 patients who had undergone treatment of tumours of the head or neck. The patients also gave a general self-assessment of their impairments due to articulation problems. Furthermore, tumour size, tumour location and kind of therapy were recorded. Missing data were analysed and replaced by multiple imputation. Internal structure was investigated using principal component analysis (PCA); reliability using Cronbach's alpha. Validity was investigated by analysing the relationship between AHI and general self-assessment of impairments. Moreover, the relationships with tumour size, tumour location and kind of therapy were analysed. Only 0.12 % of the answers to the AHI were missing. The Scree test performed with the PCA results suggested one-dimensionality with the first component explaining 49.6 % of the item variance. Cronbach's alpha was 0.96. Kendall's tau between the AHI sum score and the general self-assessment was 0.69. The intervals of AHI sum scores for the self-assessment categories were determined with 0-13 for no, 14-44 for mild, 46-76 for moderate, and 77-120 for severe impairment. The AHI sum score did not systematically relate to tumour size, tumour location or kind of therapy. The results are evidence for high acceptance, reliability and validity.
Huang, Jianyan; Maram, Jyotsna; Tepelus, Tudor C; Modak, Cristina; Marion, Ken; Sadda, SriniVas R; Chopra, Vikas; Lee, Olivia L
2017-08-07
To determine the reliability of corneal endothelial cell density (ECD) obtained by automated specular microscopy versus that of validated manual methods and factors that predict such reliability. Sharp central images from 94 control and 106 glaucomatous eyes were captured with Konan specular microscope NSP-9900. All images were analyzed by trained graders using Konan CellChek Software, employing the fully- and semi-automated methods as well as Center Method. Images with low cell count (input cells number <100) and/or guttata were compared with the Center and Flex-Center Methods. ECDs were compared and absolute error was used to assess variation. The effect on ECD of age, cell count, cell size, and cell size variation was evaluated. No significant difference was observed between the Center and Flex-Center Methods in corneas with guttata (p=0.48) or low ECD (p=0.11). No difference (p=0.32) was observed in ECD of normal controls <40 yrs old between the fully-automated method and manual Center Method. However, in older controls and glaucomatous eyes, ECD was overestimated by the fully-automated method (p=0.034) and semi-automated method (p=0.025) as compared to manual method. Our findings show that automated analysis significantly overestimates ECD in the eyes with high polymegathism and/or large cell size, compared to the manual method. Therefore, we discourage reliance upon the fully-automated method alone to perform specular microscopy analysis, particularly if an accurate ECD value is imperative. Copyright © 2017. Published by Elsevier España, S.L.U.
Wang, Yutang; Liu, Yuanyuan; Xiao, Chunxia; Liu, Laping; Hao, Miao; Wang, Jianguo; Liu, Xuebo
2014-06-01
This study established a new method for quantitative and qualitative determination of certain components in black rice wine, a traditional Chinese brewed wine. Specifically, we combined solid-phase extraction and high-performance liquid chromatography (HPLC) with triple quadrupole mass spectrometry (MS/MS) to determine 8 phenolic acids, 3 flavonols, and 4 anthocyanins in black rice wine. First, we clean samples with OASIS HLB cartridges and optimized extraction parameters. Next, we performed separation on a SHIM-PACK XR-ODS column (I.D. 3.0 mm × 75 mm, 2.2 μm particle size) with a gradient elution of 50% aqueous acetonitrile (V/V) and water, both containing 0.2% formic acid. We used multiple-reaction monitoring scanning for quantification, with switching electrospray ion source polarity between positive and negative modes in a single chromatographic run. We detected 15 phenolic compounds properly within 38 min under optimized conditions. Limits of detection ranged from 0.008 to 0.030 mg/L, and average recoveries ranged from 60.8 to 103.1% with relative standard deviation ≤8.6%. We validated the method and found it to be sensitive and reliable for quantifying phenolic compounds in rice wine matrices. This study developed a new, reliable HPLC-MS/MS method for simultaneous determination of 15 bioactive components in black rice wine. This method was validated and found to be sensitive and reliable for quantifying phenolic compounds in rice wine. © 2014 Institute of Food Technologists®
Analyzing Responses of Chemical Sensor Arrays
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
NASA is developing a third-generation electronic nose (ENose) capable of continuous monitoring of the International Space Station s cabin atmosphere for specific, harmful airborne contaminants. Previous generations of the ENose have been described in prior NASA Tech Briefs issues. Sensor selection is critical in both (prefabrication) sensor material selection and (post-fabrication) data analysis of the ENose, which detects several analytes that are difficult to detect, or that are at very low concentration ranges. Existing sensor selection approaches usually include limited statistical measures, where selectivity is more important but reliability and sensitivity are not of concern. When reliability and sensitivity can be major limiting factors in detecting target compounds reliably, the existing approach is not able to provide meaningful selection that will actually improve data analysis results. The approach and software reported here consider more statistical measures (factors) than existing approaches for a similar purpose. The result is a more balanced and robust sensor selection from a less than ideal sensor array. The software offers quick, flexible, optimal sensor selection and weighting for a variety of purposes without a time-consuming, iterative search by performing sensor calibrations to a known linear or nonlinear model, evaluating the individual sensor s statistics, scoring the individual sensor s overall performance, finding the best sensor array size to maximize class separation, finding optimal weights for the remaining sensor array, estimating limits of detection for the target compounds, evaluating fingerprint distance between group pairs, and finding the best event-detecting sensors.
NASA Technical Reports Server (NTRS)
Clune, E.; Segall, Z.; Siewiorek, D.
1984-01-01
A program of experiments has been conducted at NASA-Langley to test the fault-free performance of a Fault-Tolerant Multiprocessor (FTMP) avionics system for next-generation aircraft. Baseline measurements of an operating FTMP system were obtained with respect to the following parameters: instruction execution time, frame size, and the variation of clock ticks. The mechanisms of frame stretching were also investigated. The experimental results are summarized in a table. Areas of interest for future tests are identified, with emphasis given to the implementation of a synthetic workload generation mechanism on FTMP.
NASA Technical Reports Server (NTRS)
Stone, M. S.; Mcadam, P. L.; Saunders, O. W.
1977-01-01
The results are presented of a 4 month study to design a hybrid analog/digital receiver for outer planet mission probe communication links. The scope of this study includes functional design of the receiver; comparisons between analog and digital processing; hardware tradeoffs for key components including frequency generators, A/D converters, and digital processors; development and simulation of the processing algorithms for acquisition, tracking, and demodulation; and detailed design of the receiver in order to determine its size, weight, power, reliability, and radiation hardness. In addition, an evaluation was made of the receiver's capabilities to perform accurate measurement of signal strength and frequency for radio science missions.
Design of the Space Station Freedom power system
NASA Technical Reports Server (NTRS)
Thomas, Ronald L.; Hallinan, George J.
1989-01-01
The design of Space Station Freedom's electric power system (EPS) is reviewed, highlighting the key design goals of performance, low cost, reliability and safety. Tradeoff study results that illustrate the competing factors responsible for many of the more important design decisions are discussed. When Freedom's EPS is compared with previous space power designs, two major differences stand out. The first is the size of the EPS, which is larger than any prior system. The second major difference between the EPS and other space power designs is the indefinite expected life of Freedom; 30 years has been used for life-cycle-cost calculations.
Tolerancing aspheres based on manufacturing knowledge
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Kokot, S.; Fuchs, U.
2017-10-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although, different weightings and distributions are assumed they are all counting on statistics, which usually means several hundreds or thousands of systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed aiming for a robust optical tolerancing process.
Tolerancing aspheres based on manufacturing statistics
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Möhl, A.; Fuchs, U.
2017-11-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although, different weightings and distributions are assumed they are all counting on statistics, which usually means several hundreds or thousands of systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed aiming for a robust optical tolerancing process.
Design of a setup for 252Cf neutron source for storage and analysis purpose
NASA Astrophysics Data System (ADS)
Hei, Daqian; Zhuang, Haocheng; Jia, Wenbao; Cheng, Can; Jiang, Zhou; Wang, Hongtao; Chen, Da
2016-11-01
252Cf is a reliable isotopic neutron source and widely used in the prompt gamma ray neutron activation analysis (PGNAA) technique. A cylindrical barrel made by polymethyl methacrylate contained with the boric acid solution was designed for storage and application of a 5 μg 252Cf neutron source. The size of the setup was optimized with Monte Carlo code. The experiments were performed and the results showed the doses were reduced with the setup and less than the allowable limit. The intensity and collimating radius of the neutron beam could also be adjusted through different collimator.
Chemical Microthruster Options
NASA Technical Reports Server (NTRS)
DeGroot, Wim; Oleson, Steve
1996-01-01
Chemical propulsion systems with potential application to microsatellites are classified by propellant phase, i.e. gas, liquid, or solid. Four promising concepts are selected based on performance, weight, size, cost, and reliability. The selected concepts, in varying stages of development, are advanced monopropellants, tridyne(TM), electrolysis, and solid gas generator propulsion. Tridyne(TM) and electrolysis propulsion are compared vs. existing cold gas and monopropellant systems for selected microsatellite missions. Electrolysis is shown to provide a significant weight advantage over monopropellant propulsion for an orbit transfer and plane change mission. Tridyne(TM) is shown to provide a significant advantage over cold gas thrusters for orbit trimming and spacecraft separation.
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2018-03-01
Like other NDE methods, eddy current surface crack detectability is determined using probability of detection (POD) demonstration. The POD demonstration involves eddy current testing of surface crack specimens with known crack sizes. Reliably detectable flaw size, denoted by, a90/95 is determined by statistical analysis of POD test data. The surface crack specimens shall be made from a similar material with electrical conductivity close to the part conductivity. A calibration standard with electro-discharged machined (EDM) notches is typically used in eddy current testing for surface crack detection. The calibration standard conductivity shall be within +/- 15% of the part conductivity. This condition is also applicable to the POD demonstration crack set. Here, a case is considered, where conductivity of the crack specimens available for POD testing differs by more than 15% from that of the part to be inspected. Therefore, a direct POD demonstration of reliably detectable flaw size is not applicable. Additional testing is necessary to use the demonstrated POD test data. An approach to estimate the reliably detectable flaw size in eddy current testing for part made from material A using POD crack specimens made from material B with different conductivity is provided. The approach uses additional test data obtained on EDM notch specimens made from materials A and B. EDM notch test data from the two materials is used to create a transfer function between the demonstrated a90/95 size on crack specimens made of material B and the estimated a90/95 size for part made of material A. Two methods are given. For method A, a90/95 crack size for material B is given and POD data is available. Objective of method A is to determine a90/95 crack size for material A using the same relative decision threshold that was used for material B. For method B, target crack size a90/95 for material A is known. Objective is to determine decision threshold for inspecting material A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizuno, Hideyuki, E-mail: h-mizuno@nirs.go.jp; Fukumura, Akifumi; Fukahori, Mai
Purpose: The purpose of this study was to obtain a set of correction factors of the radiophotoluminescent glass dosimeter (RGD) output for field size changes and wedge insertions. Methods: Several linear accelerators were used for irradiation of the RGDs. The field sizes were changed from 5 × 5 cm to 25 × 25 cm for 4, 6, 10, and 15 MV x-ray beams. The wedge angles were 15°, 30°, 45°, and 60°. In addition to physical wedge irradiation, nonphysical (dynamic/virtual) wedge irradiations were performed. Results: The obtained data were fitted with a single line for each energy, and correction factorsmore » were determined. Compared with ionization chamber outputs, the RGD outputs gradually increased with increasing field size, because of the higher RGD response to scattered low-energy photons. The output increase was about 1% per 10 cm increase in field size, with a slight difference dependent on the beam energy. For both physical and nonphysical wedged beam irradiation, there were no systematic trends in the RGD outputs, such as monotonic increase or decrease depending on the wedge angle change if the authors consider the uncertainty, which is approximately 0.6% for each set of measured points. Therefore, no correction factor was needed for all inserted wedges. Based on this work, postal dose audits using RGDs for the nonreference condition were initiated in 2010. The postal dose audit results between 2010 and 2012 were analyzed. The mean difference between the measured and stated doses was within 0.5% for all fields with field sizes between 5 × 5 cm and 25 × 25 cm and with wedge angles from 15° to 60°. The standard deviations (SDs) of the difference distribution were within the estimated uncertainty (1SD) except for the 25 × 25 cm field size data, which were not reliable because of poor statistics (n = 16). Conclusions: A set of RGD output correction factors was determined for field size changes and wedge insertions. The results obtained from recent postal dose audits were analyzed, and the mean differences between the measured and stated doses were within 0.5% for every field size and wedge angle. The SDs of the distribution were within the estimated uncertainty, except for one condition that was not reliable because of poor statistics.« less
Passive Two-Phase Cooling for Automotive Power Electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, G.; Jeffers, J. R.; Narumanchi, S.
2014-01-01
Experiments were conducted to evaluate the use of a passive two-phase cooling strategy as a means of cooling automotive power electronics. The proposed cooling approach utilizes an indirect cooling configuration to alleviate some reliability concerns and to allow the use of conventional power modules. An inverter-scale proof-of-concept cooling system was fabricated and tested using the refrigerants hydrofluoroolefin HFO-1234yf and hydrofluorocarbon HFC-245 fa. Results demonstrated that the system can dissipate at least 3.5 kW of heat with 250 cm3 of HFC-245fa. An advanced evaporator concept that incorporates features to improve performance and reduce its size was designed. Simulation results indicate themore » concept's thermal resistance can be 58% to 65% lower than automotive dual-side-cooled power modules. Tests were also conducted to measure the thermal performance of two air-cooled condensers-plain and rifled finned tube designs. The results combined with some analysis were then used to estimate the required condenser size per operating conditions and maximum allowable system (i.e., vapor and liquid) temperatures.« less
Passive Two-Phase Cooling of Automotive Power Electronics: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, G.; Jeffers, J. R.; Narumanchi, S.
2014-08-01
Experiments were conducted to evaluate the use of a passive two-phase cooling strategy as a means of cooling automotive power electronics. The proposed cooling approach utilizes an indirect cooling configuration to alleviate some reliability concerns and to allow the use of conventional power modules. An inverter-scale proof-of-concept cooling system was fabricated, and tests were conducted using the refrigerants hydrofluoroolefin HFO-1234yf and hydrofluorocarbon HFC-245fa. Results demonstrated that the system can dissipate at least 3.5 kW of heat with 250 cm3 of HFC-245fa. An advanced evaporator design that incorporates features to improve performance and reduce size was conceived. Simulation results indicate itsmore » thermal resistance can be 37% to 48% lower than automotive dual side cooled power modules. Tests were also conducted to measure the thermal performance of two air-cooled condensers--plain and rifled finned tube designs. The results combined with some analysis were then used to estimate the required condenser size per operating conditions and maximum allowable system (i.e., vapor and liquid) temperatures.« less
Modification of Hazen's equation in coarse grained soils by soft computing techniques
NASA Astrophysics Data System (ADS)
Kaynar, Oguz; Yilmaz, Isik; Marschalko, Marian; Bednarik, Martin; Fojtova, Lucie
2013-04-01
Hazen first proposed a Relationship between coefficient of permeability (k) and effective grain size (d10) was first proposed by Hazen, and it was then extended by some other researchers. However many attempts were done for estimation of k, correlation coefficients (R2) of the models were generally lower than ~0.80 and whole grain size distribution curves were not included in the assessments. Soft computing techniques such as; artificial neural networks, fuzzy inference systems, genetic algorithms, etc. and their hybrids are now being successfully used as an alternative tool. In this study, use of some soft computing techniques such as Artificial Neural Networks (ANNs) (MLP, RBF, etc.) and Adaptive Neuro-Fuzzy Inference System (ANFIS) for prediction of permeability of coarse grained soils was described, and Hazen's equation was then modificated. It was found that the soft computing models exhibited high performance in prediction of permeability coefficient. However four different kinds of ANN algorithms showed similar prediction performance, results of MLP was found to be relatively more accurate than RBF models. The most reliable prediction was obtained from ANFIS model.
Lamont, Ruth A; Swift, Hannah J; Abrams, Dominic
2015-03-01
Stereotype threat effects arise when an individual feels at risk of confirming a negative stereotype about their group and consequently underperforms on stereotype relevant tasks (Steele, 2010). Among older people, underperformance across cognitive and physical tasks is hypothesized to result from age-based stereotype threat (ABST) because of negative age-stereotypes regarding older adults' competence. The present review and meta-analyses examine 22 published and 10 unpublished articles, including 82 effect sizes (N = 3882) investigating ABST on older people's (Mage = 69.5) performance. The analysis revealed a significant small-to-medium effect of ABST (d = .28) and important moderators of the effect size. Specifically, older adults are more vulnerable to ABST when (a) stereotype-based rather than fact-based manipulations are used (d = .52); (b) when performance is tested using cognitive measures (d = .36); and (c) occurs reliably when the dependent variable is measured proximally to the manipulation. The review raises important theoretical and methodological issues, and areas for future research. (c) 2015 APA, all rights reserved).
Yang, Xiao-Li; Song, Hai-Liang; Lu, Ji-Lai; Fu, Da-Fang; Cheng, Bing
2010-12-01
This paper examined the effect of diatomite addition on membrane fouling and process performance in an anoxic/oxic submerged membrane bioreactor (A/O MBR). Particle size distribution, molecular weight distribution and microbial activity have been investigated to characterize the sludge mixed liquor. Results show that diatomite addition is a reliable and effective approach in terms of both membrane fouling mitigation and pollutants removal improvement. The MBR system with diatomite addition of 50 mg/L enhanced the removal of COD, TN and TP by 0.9%, 6.9% and 31.2%, respectively, as compared to the control MBR (without diatomite addition). The NH(4)-N removal always maintained at a high level of over 98% irrespective of diatomite addition. Due to the hybrid effect of adsorption and co-precipitation on fine colloids and dissolved organic matter (DOM) from the addition of diatomite, a reduction in foulants amount, an increase in microbial floc size and an improvement in sludge settleability have been achieved simultaneously. As a result, the membrane fouling rate was mitigated successfully. 2010 Elsevier Ltd. All rights reserved.
Enabling technologies for fiber optic sensing
NASA Astrophysics Data System (ADS)
Ibrahim, Selwan K.; Farnan, Martin; Karabacak, Devrez M.; Singer, Johannes M.
2016-04-01
In order for fiber optic sensors to compete with electrical sensors, several critical parameters need to be addressed such as performance, cost, size, reliability, etc. Relying on technologies developed in different industrial sectors helps to achieve this goal in a more efficient and cost effective way. FAZ Technology has developed a tunable laser based optical interrogator based on technologies developed in the telecommunication sector and optical transducer/sensors based on components sourced from the automotive market. Combining Fiber Bragg Grating (FBG) sensing technology with the above, high speed, high precision, reliable quasi distributed optical sensing systems for temperature, pressure, acoustics, acceleration, etc. has been developed. Careful design needs to be considered to filter out any sources of measurement drifts/errors due to different effects e.g. polarization and birefringence, coating imperfections, sensor packaging etc. Also to achieve high speed and high performance optical sensing systems, combining and synchronizing multiple optical interrogators similar to what has been used with computer/processors to deliver super computing power is an attractive solution. This path can be achieved by using photonic integrated circuit (PIC) technology which opens the doors to scaling up and delivering powerful optical sensing systems in an efficient and cost effective way.
Dust-concentration measurement based on Mie scattering of a laser beam
Yu, Xiaoyu; Shi, Yunbo; Wang, Tian; Sun, Xu
2017-01-01
To realize automatic measurement of the concentration of dust particles in the air, a theory for dust concentration measurement was developed, and a system was designed to implement the dust concentration measurement method based on laser scattering. In the study, the principle of dust concentration detection using laser scattering is studied, and the detection basis of Mie scattering theory is determined. Through simulation, the influence of the incident laser wavelength, dust particle diameter, and refractive index of dust particles on the scattered light intensity distribution are obtained for determining the scattered light intensity curves of single suspended dust particles under different characteristic parameters. A genetic algorithm was used to study the inverse particle size distribution, and the reliability of the measurement system design is proven theoretically. The dust concentration detection system, which includes a laser system, computer circuitry, air flow system, and control system, was then implemented according to the parameters obtained from the theoretical analysis. The performance of the designed system was evaluated. Experimental results show that the system performance was stable and reliable, resulting in high-precision automatic dust concentration measurement with strong anti-interference ability. PMID:28767662
Minnig, Shawn; Bragg, Robert M; Tiwana, Hardeep S; Solem, Wes T; Hovander, William S; Vik, Eva-Mari S; Hamilton, Madeline; Legg, Samuel R W; Shuttleworth, Dominic D; Coffey, Sydney R; Cantle, Jeffrey P; Carroll, Jeffrey B
2018-02-02
Apathy is one of the most prevalent and progressive psychiatric symptoms in Huntington's disease (HD) patients. However, preclinical work in HD mouse models tends to focus on molecular and motor, rather than affective, phenotypes. Measuring behavior in mice often produces noisy data and requires large cohorts to detect phenotypic rescue with appropriate power. The operant equipment necessary for measuring affective phenotypes is typically expensive, proprietary to commercial entities, and bulky which can render adequately sized mouse cohorts as cost-prohibitive. Thus, we describe here a home-built, open-source alternative to commercial hardware that is reliable, scalable, and reproducible. Using off-the-shelf hardware, we adapted and built several of the rodent operant buckets (ROBucket) to test Htt Q111/+ mice for attention deficits in fixed ratio (FR) and progressive ratio (PR) tasks. We find that, despite normal performance in reward attainment in the FR task, Htt Q111/+ mice exhibit reduced PR performance at 9-11 months of age, suggesting motivational deficits. We replicated this in two independent cohorts, demonstrating the reliability and utility of both the apathetic phenotype, and these ROBuckets, for preclinical HD studies.
Performance of MEMS Silicon Oscillator, ASFLM1, under Wide Operating Temperature Range
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad
2008-01-01
Over the last few years, MEMS (Micro-Electro-Mechanical Systems) resonator-based oscillators began to be offered as commercial-off-the-shelf (COTS) parts by a few companies [1-2]. These quartz-free, miniature silicon devices could compete with the traditional crystal oscillators in providing the timing (clock function) for many digital and analog electronic circuits. They provide stable output frequency, offer great tolerance to shock and vibration, and are immune to electro-static discharge [1-2]. In addition, they are encapsulated in compact lead-free packages, cover a wide frequency range (1 MHz to 125 MHz), and are specified, depending on the grade, for extended temperature operation from -40 C to +85 C. The small size of the MEMS oscillators along with their reliability and thermal stability make them candidates for use in space exploration missions. Limited data, however, exist on the performance and reliability of these devices under operation in applications where extreme temperatures or thermal cycling swings, which are typical of space missions, are encountered. This report presents the results of the work obtained on the evaluation of an ABRACON Corporation MEMS silicon oscillator chip, type ASFLM1, under extreme temperatures.
Estimating search engine index size variability: a 9-year longitudinal study.
van den Bosch, Antal; Bogers, Toine; de Kunder, Maurice
One of the determining factors of the quality of Web search engines is the size of their index. In addition to its influence on search result quality, the size of the indexed Web can also tell us something about which parts of the WWW are directly accessible to the everyday user. We propose a novel method of estimating the size of a Web search engine's index by extrapolating from document frequencies of words observed in a large static corpus of Web pages. In addition, we provide a unique longitudinal perspective on the size of Google and Bing's indices over a nine-year period, from March 2006 until January 2015. We find that index size estimates of these two search engines tend to vary dramatically over time, with Google generally possessing a larger index than Bing. This result raises doubts about the reliability of previous one-off estimates of the size of the indexed Web. We find that much, if not all of this variability can be explained by changes in the indexing and ranking infrastructure of Google and Bing. This casts further doubt on whether Web search engines can be used reliably for cross-sectional webometric studies.
Salido-Vallejo, R; Ruano, J; Garnacho-Saucedo, G; Godoy-Gijón, E; Llorca, D; Gómez-Fernández, C; Moreno-Giménez, J C
2014-12-01
Tuberous sclerosis complex (TSC) is an autosomal dominant neurocutaneous disorder characterized by the development of multisystem hamartomatous tumours. Topical sirolimus has recently been suggested as a potential treatment for TSC-associated facial angiofibroma (FA). To validate a reproducible scale created for the assessment of clinical severity and treatment response in these patients. We developed a new tool, the Facial Angiofibroma Severity Index (FASI) to evaluate the grade of erythema and the size and extent of FAs. In total, 30 different photographs of patients with TSC were shown to 56 dermatologists at each evaluation. Three evaluations using the same photographs but in a different random order were performed 1 week apart. Test and retest reliability and interobserver reproducibility were determined. There was good agreement between the investigators. Inter-rater reliability showed strong correlations (> 0.98; range 0.97-0.99) with inter-rater correlation coefficients (ICCs) for the FASI. The global estimated kappa coefficient for the degree of intra-rater agreement (test-retest) was 0.94 (range 0.91-0.97). The FASI is a valid and reliable tool for measuring the clinical severity of TSC-associated FAs, which can be applied in clinical practice to evaluate the response to treatment in these patients. © 2014 British Association of Dermatologists.
Improved reliability of wind turbine towers with active tuned mass dampers (ATMDs)
NASA Astrophysics Data System (ADS)
Fitzgerald, Breiffni; Sarkar, Saptarshi; Staino, Andrea
2018-04-01
Modern multi-megawatt wind turbines are composed of slender, flexible, and lightly damped blades and towers. These components exhibit high susceptibility to wind-induced vibrations. As the size, flexibility and cost of the towers have increased in recent years, the need to protect these structures against damage induced by turbulent aerodynamic loading has become apparent. This paper combines structural dynamic models and probabilistic assessment tools to demonstrate improvements in structural reliability when modern wind turbine towers are equipped with active tuned mass dampers (ATMDs). This study proposes a multi-modal wind turbine model for wind turbine control design and analysis. This study incorporates an ATMD into the tower of this model. The model is subjected to stochastically generated wind loads of varying speeds to develop wind-induced probabilistic demand models for towers of modern multi-megawatt wind turbines under structural uncertainty. Numerical simulations have been carried out to ascertain the effectiveness of the active control system to improve the structural performance of the wind turbine and its reliability. The study constructs fragility curves, which illustrate reductions in the vulnerability of towers to wind loading owing to the inclusion of the damper. Results show that the active controller is successful in increasing the reliability of the tower responses. According to the analysis carried out in this paper, a strong reduction of the probability of exceeding a given displacement at the rated wind speed has been observed.
Robotic-Assisted Knee Arthroplasty: An Overview.
van der List, Jelle P; Chawla, Harshvardhan; Pearle, Andrew D
2016-01-01
Unicompartmental knee arthroplasty and total knee arthroplasty are reliable treatment options for osteoarthritis. In order to improve survivorship rates, variables that are intraoperatively controlled by the orthopedic surgeon are being evaluated. These variables include lower leg alignment, soft tissue balance, joint line maintenance, and tibial and femoral component alignment, size, and fixation methods. Since tighter control of these factors is associated with improved outcomes of knee arthroplasty, several computer-assisted surgery systems have been developed. These systems differ in the number and type of variables they control. Robotic-assisted systems control these aforementioned variables and, in addition, aim to improve the surgical precision of the procedure. Robotic-assisted systems are active, semi-active, or passive, depending on how independently the systems perform maneuvers. Reviewing the robotic-assisted knee arthroplasty systems, it becomes clear that these systems can accurately and reliably control the aforementioned variables. Moreover, these systems are more accurate and reliable in controlling these variables when compared to the current gold standard of conventional manual surgery. At present, few studies have assessed the survivorship and functional outcomes of robotic-assisted surgery, and no sufficiently powered studies were identified that compared survivorship or functional outcomes between robotic-assisted and conventional knee arthroplasty. Although preliminary outcomes of robotic-assisted surgery look promising, more studies are necessary to assess if the increased accuracy and reliability in controlling the surgical variables leads to better outcomes of robotic-assisted knee arthroplasty.
Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin
2016-10-01
Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered as a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and are less reliable in the cases of up stair and down stair terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real-time based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses.
Zhou, Hui; Ji, Ning; Samuel, Oluwarotimi Williams; Cao, Yafei; Zhao, Zheyi; Chen, Shixiong; Li, Guanglin
2016-01-01
Real-time detection of gait events can be applied as a reliable input to control drop foot correction devices and lower-limb prostheses. Among the different sensors used to acquire the signals associated with walking for gait event detection, the accelerometer is considered as a preferable sensor due to its convenience of use, small size, low cost, reliability, and low power consumption. Based on the acceleration signals, different algorithms have been proposed to detect toe off (TO) and heel strike (HS) gait events in previous studies. While these algorithms could achieve a relatively reasonable performance in gait event detection, they suffer from limitations such as poor real-time performance and are less reliable in the cases of up stair and down stair terrains. In this study, a new algorithm is proposed to detect the gait events on three walking terrains in real-time based on the analysis of acceleration jerk signals with a time-frequency method to obtain gait parameters, and then the determination of the peaks of jerk signals using peak heuristics. The performance of the newly proposed algorithm was evaluated with eight healthy subjects when they were walking on level ground, up stairs, and down stairs. Our experimental results showed that the mean F1 scores of the proposed algorithm were above 0.98 for HS event detection and 0.95 for TO event detection on the three terrains. This indicates that the current algorithm would be robust and accurate for gait event detection on different terrains. Findings from the current study suggest that the proposed method may be a preferable option in some applications such as drop foot correction devices and leg prostheses. PMID:27706086
Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R
2010-01-01
Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via exchange of parameters and result between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS, that will ultimately encompass an integrated set of analytic and simulated based techniques for performance, reliability, and combined design assessment.
Low-Frequency Fluctuations of the Resting Brain: High Magnitude Does Not Equal High Reliability
Jia, Wenbin; Liao, Wei; Li, Xun; Huang, Huiyuan; Yuan, Jianhua; Zang, Yu-Feng; Zhang, Han
2015-01-01
The amplitude of low-frequency fluctuation (ALFF) measures low-frequency oscillations of the blood-oxygen-level-dependent signal, characterizing local spontaneous activity during the resting state. ALFF is a commonly used measure for resting-state functional magnetic resonance imaging (rs-fMRI) in numerous basic and clinical neuroscience studies. Using a test-retest rs-fMRI dataset consisting of 21 healthy subjects and three repetitive scans, we found that several key brain regions with high ALFF intensities (or magnitude) had poor reliability. Such regions included the posterior cingulate cortex, the medial prefrontal cortex in the default mode network, parts of the right and left thalami, and the primary visual and motor cortices. The above finding was robust with regard to different sample sizes (number of subjects), different scanning parameters (repetition time) and variations of test-retest intervals (i.e., intra-scan, intra-session, and inter-session reliability), as well as with different scanners. Moreover, the qualitative, map-wise results were validated further with a region-of-interest-based quantitative analysis using “canonical” coordinates as reported previously. Therefore, we suggest that the reliability assessments be incorporated in future ALFF studies, especially for the brain regions with a large ALFF magnitude as listed in our paper. Splitting single data into several segments and assessing within-scan “test-retest” reliability is an acceptable alternative if no “real” test-retest datasets are available. Such evaluations might become more necessary if the data are collected with clinical scanners whose performance is not as good as those that are used for scientific research purposes and are better maintained because the lower signal-to-noise ratio may further dampen ALFF reliability. PMID:26053265
Towgood, Karren; Barker, Gareth J; Caceres, Alejandro; Crum, William R; Elwes, Robert D C; Costafreda, Sergi G; Mehta, Mitul A; Morris, Robin G; von Oertzen, Tim J; Richardson, Mark P
2015-04-01
fMRI is increasingly implemented in the clinic to assess memory function. There are multiple approaches to memory fMRI, but limited data on advantages and reliability of different methods. Here, we compared effect size, activation lateralisation, and between-sessions reliability of seven memory fMRI protocols: Hometown Walking (block design), Scene encoding (block design and event-related design), Picture encoding (block and event-related), and Word encoding (block and event-related). All protocols were performed on three occasions in 16 patients with temporal lobe epilepsy (TLE). Group T-maps showed activity bilaterally in medial temporal lobe for all protocols. Using ANOVA, there was an interaction between hemisphere and seizure-onset lateralisation (P = 0.009) and between hemisphere, protocol and seizure-onset lateralisation (P = 0.002), showing that the distribution of memory-related activity between left and right temporal lobes differed between protocols and between patients with left-onset and right-onset seizures. Using voxelwise intraclass Correlation Coefficient, between-sessions reliability was best for Hometown and Scenes (block and event). The between-sessions spatial overlap of activated voxels was also greatest for Hometown and Scenes. Lateralisation of activity between hemispheres was most reliable for Scenes (block and event) and Words (event). Using receiver operating characteristic analysis to explore the ability of each fMRI protocol to classify patients as left-onset or right-onset TLE, only the Words (event) protocol achieved a significantly above-chance classification of patients at all three sessions. We conclude that Words (event) protocol shows the best combination of between-sessions reliability of the distribution of activity between hemispheres and reliable ability to distinguish between left-onset and right-onset patients. © 2015 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Effect of grinding with diamond-disc and -bur on the mechanical behavior of a Y-TZP ceramic.
Pereira, G K R; Amaral, M; Simoneti, R; Rocha, G C; Cesar, P F; Valandro, L F
2014-09-01
This study compared the effects of grinding on the surface micromorphology, phase transformation (t→m), biaxial flexural strength and structural reliability (Weibull analysis) of a Y-TZP (Lava) ceramic using diamond-discs and -burs. 170 discs (15×1.2mm) were produced and divided into 5 groups: without treatment (Ctrl, as-sintered), and ground with 4 different systems: extra-fine (25µm, Xfine) and coarse diamond-bur (181µm, Coarse), 600-grit (25µm, D600) and 120-grit diamond-disc (160µm, D120). Grinding with burs was performed using a contra-angle handpiece (T2-Revo R170, Sirona), while for discs (Allied) a Polishing Machine (Ecomet, Buehler) was employed, both under water-cooling. Micromorphological analysis showed distinct patterns generated by grinding with discs and burs, independent of grit size. There was no statistical difference for characteristic strength values (MPa) between smaller grit sizes (D600 - 1050.08 and Xfine - 1171.33), although they presented higher values compared to Ctrl (917.58). For bigger grit sizes, a significant difference was observed (Coarse - 1136.32>D120 - 727.47). Weibull Modules were statistically similar between the tested groups. Within the limits of this study, from a micromorphological point-of-view, the treatments performed did not generate similar effects, so from a methodological point-of-view, diamond-discs should not be employed to simulate clinical abrasion performed with diamond-burs on Y-TZP ceramics. Copyright © 2014 Elsevier Ltd. All rights reserved.
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
NASA Technical Reports Server (NTRS)
Ghaffarian, R.
2000-01-01
A JPL-led chip scale package (CSP) Consortium, composed of team members representing government agencies and private companies, recently joined together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects.
Nagayama, T.; Mancini, R. C.; Mayes, D.; ...
2015-11-18
Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. In this paper, we synthetically quantify the accuracymore » of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ~6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ~10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. Finally, it is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.« less
La Padula, Simone; Hersant, Barbara; Meningaud, Jean Paul
2018-03-30
Anatomical variability of anterolateral thigh flap (ALT) perforators has been reported. The aim of this study is to assess if the use of intraoperative indocyanine green angiography (iICGA) can help surgeons to choose the ALT flap best perforator to be preserved. A retrospective study was conducted in 28 patients with open tibial fracture, following a road traffic crash, who had undergone ALT flap. Patients were classified into two groups: ICGA group (iICGA was used to select the more reliable perforator) and control group. The mean tissue loss size of the ICGA group (n = 13, 11 men and 2 women, mean age: 52 ± 6 years) was of 16.6 cm × 12.2 cm. The mean defect size of the control group (n = 15, 14 men and 1 women, mean age: 50 ± 5.52 years) was of 15.3 cm × 11.1 cm. Statistical analysis was performed to analyze and compare the results. ICGA allowed preserving only the most functional perforator, that provided the best ALT flap perfusion in 10 out of the 13 cases (77%). ICGA allowed a significant operative time reduction (160 ± 23 vs. 202 ± 48 minutes; P < .001). One case of distal necrosis was observed in the ICGA group (mean follow-up 12.3 months), while partial skin necrosis occurred in three cases of the control group (mean follow-up 13.1 months); P = .35. No additional coverage was required and a successful bone healing was observed in both groups. These findings suggest that iICGA is an effective method that allows to select the most reliable ALT flap perforators and to reduce operative time. © 2018 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagayama, T.; Mancini, R. C.; Mayes, D.
2015-11-15
Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of imagesmore » and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.« less
Nagayama, T; Mancini, R C; Mayes, D; Tommasini, R; Florido, R
2015-11-01
Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. Here, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ∼6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ∼10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. It is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.
Osmani, Feroz A; Thakkar, Savyasachi; Ramme, Austin; Elbuluk, Ameer; Wojack, Paul; Vigdorchik, Jonathan M
2017-12-01
Preoperative total hip arthroplasty templating can be performed with radiographs using acetate prints, digital viewing software, or with computed tomography (CT) images. Our hypothesis is that 3D templating is more precise and accurate with cup size prediction as compared to 2D templating with acetate prints and digital templating software. Data collected from 45 patients undergoing robotic-assisted total hip arthroplasty compared cup sizes templated on acetate prints and OrthoView software to MAKOplasty software that uses CT scan. Kappa analysis determined strength of agreement between each templating modality and the final size used. t tests compared mean cup-size variance from the final size for each templating technique. Interclass correlation coefficient (ICC) determined reliability of digital and acetate planning by comparing predictions of the operating surgeon and a blinded adult reconstructive fellow. The Kappa values for CT-guided, digital, and acetate templating with the final size was 0.974, 0.233, and 0.262, respectively. Both digital and acetate templating significantly overpredicted cup size, compared to CT-guided methods ( P < .001). There was no significant difference between digital and acetate templating ( P = .117). Interclass correlation coefficient value for digital and acetate templating was 0.928 and 0.931, respectively. CT-guided planning more accurately predicts hip implant cup size when compared to the significant overpredictions of digital and acetate templating. CT-guided templating may also lead to better outcomes due to bone stock preservation from a smaller and more accurate cup size predicted than that of digital and acetate predictions.
Hotchkiss, David R; Aqil, Anwer; Lippeveld, Theo; Mukooyo, Edward
2010-07-03
Sound policy, resource allocation and day-to-day management decisions in the health sector require timely information from routine health information systems (RHIS). In most low- and middle-income countries, the RHIS is viewed as being inadequate in providing quality data and continuous information that can be used to help improve health system performance. In addition, there is limited evidence on the effectiveness of RHIS strengthening interventions in improving data quality and use. The purpose of this study is to evaluate the usefulness of the newly developed Performance of Routine Information System Management (PRISM) framework, which consists of a conceptual framework and associated data collection and analysis tools to assess, design, strengthen and evaluate RHIS. The specific objectives of the study are: a) to assess the reliability and validity of the PRISM instruments and b) to assess the validity of the PRISM conceptual framework. Facility- and worker-level data were collected from 110 health care facilities in twelve districts in Uganda in 2004 and 2007 using records reviews, structured interviews and self-administered questionnaires. The analysis procedures include Cronbach's alpha to assess internal consistency of selected instruments, test-retest analysis to assess the reliability and sensitivity of the instruments, and bivariate and multivariate statistical techniques to assess validity of the PRISM instruments and conceptual framework. Cronbach's alpha analysis suggests high reliability (0.7 or greater) for the indices measuring a promotion of a culture of information, RHIS tasks self-efficacy and motivation. The study results also suggest that a promotion of a culture of information influences RHIS tasks self-efficacy, RHIS tasks competence and motivation, and that self-efficacy and the presence of RHIS staff have a direct influence on the use of RHIS information, a key aspect of RHIS performance. The study results provide some empirical support for the reliability and validity of the PRISM instruments and the validity of the PRISM conceptual framework, suggesting that the PRISM approach can be effectively used by RHIS policy makers and practitioners to assess the RHIS and evaluate RHIS strengthening interventions. However, additional studies with larger sample sizes are needed to further investigate the value of the PRISM instruments in exploring the linkages between RHIS data quality and use, and health systems performance.
Effects of Age and Size on Xylem Phenology in Two Conifers of Northwestern China.
Zeng, Qiao; Rossi, Sergio; Yang, Bao
2017-01-01
The climatic signals that directly affect the trees can be registered by xylem during its growth. If the timings and duration of xylem formation change, xylogenesis can occur under different environmental conditions and subsequently be subject to different climatic signals. An experimental design was applied in the field to disentangle the effects of age and size on xylem phenology, and it challenges the hypothesis that the timings and dynamics of xylem growth are size-dependent. Intra-annual dynamics of xylem formation were monitored weekly during the growing seasons 2013 and 2014 in Chinese pine ( Pinus tabulaeformis ) and Qilian juniper ( Juniperus przewalskii ) with different sizes and ages in a semi-arid region of northwestern China. Cell differentiation started 3 weeks earlier in 2013 and terminated 1 week later in 2014 in small-young pines than in big-old pines. However, differences in the timings of growth reactivation disappeared when comparing the junipers with different sizes but similar age. Overall, 77 days were required for xylem differentiation to take place, but timings were shorter for older trees, which also exhibited smaller cell production. Results from this study suggest that tree age does play an important role in timings and duration of growth. The effect of age should also be considered to perform reliable responses of trees to climate.
Butera, Gianfranco; Lovin, Nicusor; Basile, Domenica Paola
2017-01-01
Secundum atrial septum defect (ASD) is the most common congenital heart disease. It is usually treated by a transcatheter approach using a femoral venous access. In case of bilateral femoral vein occlusion, the internal jugular venous approach for ASD closure is an option, in particular in cases where ASD balloon occlusion test and sizing is needed. Here, we report on a new technique for ASD closure using a venous-arterial circuit from the right internal jugular vein to the femoral artery. Two patients (females, 4 and 10 years of age) had occlusion of both femoral veins because of a previous history of pulmonary atresia and intact ventricular septum, for which they underwent percutaneous radiofrequency perforation and balloon angioplasty. These subjects needed balloon occlusion test of a residual ASD to size the hole and to check for hemodynamic suitability to ASD closure. After performing a venous-arterial circuit, a 24 mm St Jude ASD sizing balloon catheter was advanced over the circuit and the defect closed for 15 min to check hemodynamics and size the defect. ASD was closed is hemodinamically suitable. This technique was safe and reliable. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Can blind persons accurately assess body size from the voice?
Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka
2016-04-01
Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).
Can blind persons accurately assess body size from the voice?
Oleszkiewicz, Anna; Sorokowska, Agnieszka
2016-01-01
Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20–65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264
Javanshir, Khodabakhsh; Mohseni-Bandpei, Mohammad Ali; Rezasoltani, Asghar; Amiri, Mohsen; Rahgozar, Mehdi
2011-01-01
In this study, the reliability of the longus colli muscle (LCM) size was assessed in a relaxed state by a real time ultrasonography (US) device in a group of healthy subjects and a group of patients with chronic neck pain. Fifteen healthy subjects (19-41 years old) and 10 patients with chronic neck pain (27-44 years old) were recruited for the purpose of this study. LCM size was measured at the level of thyroid cartilage. Two images were taken on the same day with an hour interval to assess the within day reliability and the third image was taken 1 week later to determine between days reliability. Cross sectional area (CSA), anterior posterior dimension (APD), and lateral dimension (LD) were measured each time. The shape ratio was calculated as LD/APD. Intraclass correlation coefficients (ICC) and standard error of measurement (SEM) were computed for data analysis. The ICC of left and right CSA for within day and between days reliability in healthy subjects were (0.90, 0.93) and (0.85, 0.82), respectively. The ICC of left and right CSA for within day and between days reliability in patients with neck pain were (0.86, 0.82) and (0.76, 0.81), respectively. The results indicated that US could be used as a reliable tool to measure the LCM dimensions in healthy subjects and patients with chronic neck pain. Copyright © 2009 Elsevier Ltd. All rights reserved.
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematic expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.), (2) an acceleration function that describes how a capacitors reliability responds to external stresses such as applied voltage and temperature (All units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units), and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used in the description of a BME capacitors reliability as a function of time. The acceleration function that relates a capacitors reliability to external stresses is dependent on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), or the extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The two identified failure modes follow different acceleration functions. Catastrophic failures follow the traditional power-law relationship to the applied voltage. Slow degradation failures fit well to an exponential law relationship to the applied electrical field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
Using theta and alpha band power to assess cognitive workload in multitasking environments.
Puma, Sébastien; Matton, Nadine; Paubel, Pierre-V; Raufaste, Éric; El-Yagoubi, Radouane
2018-01-01
Cognitive workload is of central importance in the fields of human factors and ergonomics. A reliable measurement of cognitive workload could allow for improvements in human machine interface designs and increase safety in several domains. At present, numerous studies have used electroencephalography (EEG) to assess cognitive workload, reporting the rise in cognitive workload to be associated with increases in theta band power and decreases in alpha band power. However, results have been inconsistent with some failing to reach the required level of significance. We hypothesized that the lack of consistency could be related to individual differences in task performance and/or to the small sample sizes in most EEG studies. In the present study we used EEG to assess the increase in cognitive workload occurring in a multitasking environment while taking into account differences in performance. Twenty participants completed a task commonly used in airline pilot recruitment, which included an increasing number of concurrent sub-tasks to be processed from one to four. Subjective ratings, performances scores, pupil size and EEG signals were recorded. Results showed that increases in EEG alpha and theta band power reflected increases in the involvement of cognitive resources for the completion of one to three subtasks in a multitasking environment. These values reached a ceiling when performances dropped. Consistent differences in levels of alpha and theta band power were associated to levels of task performance: highest performance was related to lowest band power. Copyright © 2017 Elsevier B.V. All rights reserved.
Nine, seven, five, or three: how many figures do we need for assessing body image?
Ambrosi-Randić, Neala; Pokrajac-Bulian, Alessandra; Taksić, Vladimir
2005-04-01
320 Croatian female students (M=20.4 yr.) were recruited to examine the validity and reliability of figural scales using different numbers of stimuli (3, 5, 7, and 9) and different serial presentation (serial and nonserial order). A two-way analysis of variance (4 numbers x 2 orders of stimuli) was performed on ratings of current self-size and ideal size as dependent variables. Analysis indicated a significant main effect of number of stimuli. This, together with post hoc tests indicated that ratings were significantly different for a scale of three figures from scales of more figures, which in turn did not differ among themselves. Main effects of order of stimuli, as well as the interaction, were not significant. The results support the hypothesis that the optimal number of figures on a scale is seven plus (or minus) two.
Multiple Image Arrangement for Subjective Quality Assessment
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhai, Guangtao
2017-12-01
Subjective quality assessment serves as the foundation for almost all visual quality related researches. Size of the image quality databases has expanded from dozens to thousands in the last decades. Since each subjective rating therein has to be averaged over quite a few participants, the ever-increasing overall size of those databases calls for an evolution of existing subjective test methods. Traditional single/double stimulus based approaches are being replaced by multiple image tests, where several distorted versions of the original one are displayed and rated at once. And this naturally brings upon the question of how to arrange those multiple images on screen during the test. In this paper, we answer this question by performing subjective viewing test with eye tracker for different types arrangements. Our research indicates that isometric arrangement imposes less duress on participants and has more uniform distribution of eye fixations and movements and therefore is expected to generate more reliable subjective ratings.
a High Frequency Thermoacoustically-Driven Pulse Tube Cryocooler with Coaxial Resonator
NASA Astrophysics Data System (ADS)
Yu, G. Y.; Wang, X. T.; Dai, W.; Luo, E. C.
2010-04-01
High frequency thermoacoustically-driven pulse tube cryocoolers are quite promising due to their compact size and high reliability, which can find applications in space use. With continuous effort, a lowest cold head temperature of 68.3 K has been obtained on a 300 Hz pulse tube cryocooler driven by a standing-wave thermoacoustic heat engine with 4.0 MPa helium gas and 750 W heat input. To further reduce the size of the system, a coaxial resonator was designed and the two sub-systems, i.e., the pulse tube cryocooler and the standing-wave thermoacoustic heat engine were properly coupled through an acoustic amplifier tube, which leads to a system axial length of only about 0.7 m. The performance of the system with the coaxial resonator was tested, and shows moderate degradation compared to that with the in-line resonator, which might be attributed to the large flow loss of the 180 degree corner.
Multiphysics Simulations of Hot-Spot Initiation in Shocked Insensitive High-Explosive
NASA Astrophysics Data System (ADS)
Najjar, Fady; Howard, W. M.; Fried, L. E.
2010-11-01
Solid plastic-bonded high-explosive materials consist of crystals with micron-sized pores embedded. Under mechanical or thermal insults, these voids increase the ease of shock initiation by generating high-temperature regions during their collapse that might lead to ignition. Understanding the mechanisms of hot-spot initiation has significant research interest due to safety, reliability and development of new insensitive munitions. Multi-dimensional high-resolution meso-scale simulations are performed using the multiphysics software, ALE3D, to understand the hot-spot initiation. The Cheetah code is coupled to ALE3D, creating multi-dimensional sparse tables for the HE properties. The reaction rates were obtained from MD Quantum computations. Our current predictions showcase several interesting features regarding hot spot dynamics including the formation of a "secondary" jet. We will discuss the results obtained with hydro-thermo-chemical processes leading to ignition growth for various pore sizes and different shock pressures.
NASA Astrophysics Data System (ADS)
Van Oyen, Tomas; Blondeaux, Paolo; Van den Eynde, Dries
2013-07-01
A site-by-site comparison between field observations and theoretical predictions of sediment sorting patterns along tidal sand waves is performed for ten locations in the North Sea. At each site, the observed grain size distribution along the bottom topography and the geometry of the bed forms is described in detail and the procedure used to obtain the model parameters is summarized. The model appears to accurately describe the wavelength of the observed sand waves for the majority of the locations; still providing a reliable estimate for the other sites. In addition, it is found that for seven out of the ten locations, the qualitative sorting process provided by the model agrees with the observed grain size distribution. A discussion of the site-by-site comparison is provided which, taking into account uncertainties in the field data, indicates that the model grasps the major part of the key processes controlling the phenomenon.
Design studies of continuously variable transmissions for electric vehicles
NASA Technical Reports Server (NTRS)
Parker, R. J.; Loewenthal, S. H.; Fischer, G. K.
1981-01-01
Preliminary design studies were performed on four continuously variable transmission (CVT) concepts for use with a flywheel equipped electric vehicle of 1700 kg gross weight. Requirements of the CVT's were a maximum torque of 450 N-m (330 lb-ft), a maximum output power of 75 kW (100 hp), and a flywheel speed range of 28,000 to 14,000 rpm. Efficiency, size, weight, cost, reliability, maintainability, and controls were evaluated for each of the four concepts which included a steel V-belt type, a flat rubber belt type, a toroidal traction type, and a cone roller traction type. All CVT's exhibited relatively high calculated efficiencies (68 percent to 97 percent) over a broad range of vehicle operating conditions. Estimated weight and size of these transmissions were comparable to or less than equivalent automatic transmission. The design of each concept was carried through the design layout stage.
De, Abhishek; Hasanoor Reja, Abu Hena; Aggarwal, Ishad; Sen, Sumit; Sil, Amrita; Bhattacharya, Basudev; Sharma, Nidhi; Ansari, Asad; Sarda, Aarti; Chatterjee, Gobinda; Das, Sudip
2017-01-01
Pure neural leprosy (PNL) still remains a diagnostic challenge because of the absence of sine qua non skin lesions of leprosy and a confirmatory diagnostic method. The authors had earlier described a simple yet objective technique of combining fine needle aspiration cytology (FNAC) coupled with a multiplex polymerase chain reaction (PCR) in a pilot study, wherein the technique showed promise of a reliable diagnostic tool. In the pursuit of further evidence, the authors carried out a 4-year study with PNL cases to find the efficacy and reliability of the said method in a larger sample size. This study was conducted to find the efficacy, reliability, and reproducibility of FNAC coupled with multiplex PCR and Ziehl-Neelsen (ZN) staining in identifying the cases of PNL. All cases that were suspected to be suffering from PNL, following evaluation by two independent observers were included in the study and were subjected to FNAC from the affected nerve, and the aspirates were evaluated for cytology, ZN staining, and multiplex PCR for Mycobacterium leprae genome. In addition, serum anti-PGL1 levels were also performed in all the study subjects. Fifteen non-PNL cases were also included in the control arm. A total of 47 cases were included in the test arm and subjected to FNAC. Conventional ZN staining could demonstrate acid-fast bacilli (AFB) in only 15 out of 47 cases (31.91%) while M. leprae DNA could be elicited in 37 (78.72%) cases by the multiplex PCR. Only 13 (27.65%) out of 47 cases showed anti-PGLI-1 antibody positivity. On cytological examination of the nerve aspirates, only 11 (23.40%) cases showed epithelioid cells whereas nonspecific inflammation was seen in 26 (75.60%) cases. The results of this study conducted over a larger sample size corroborate with the findings of our pilot study. In a resource poor set up, FNAC in combination with ZN staining and multiplex PCR is a rapid, simple, and easily performed test, which can give a reproducible and objective diagnosis in cases of PNL.
Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.
2012-01-01
The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data. PMID:22518107
Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs
2018-01-01
Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima with possibly the associated effect sizes to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combine these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344
Steam bottoming cycle for an adiabatic diesel engine
NASA Technical Reports Server (NTRS)
Poulin, E.; Demier, R.; Krepchin, I.; Walker, D.
1984-01-01
Steam bottoming cycles using adiabatic diesel engine exhaust heat which projected substantial performance and economic benefits for long haul trucks were studied. Steam cycle and system component variables, system cost, size and performance were analyzed. An 811 K/6.90 MPa state of the art reciprocating expander steam system with a monotube boiler and radiator core condenser was selected for preliminary design. The costs of the diesel with bottoming system (TC/B) and a NASA specified turbocompound adiabatic diesel with aftercooling with the same total output were compared, the annual fuel savings less the added maintenance cost was determined to cover the increase initial cost of the TC/B system in a payback period of 2.3 years. Steam bottoming system freeze protection strategies were developed, technological advances required for improved system reliability are considered and the cost and performance of advanced systes are evaluated.
Brush Seals for Improved Steam Turbine Performance
NASA Technical Reports Server (NTRS)
Turnquist, Norman; Chupp, Ray; Baily, Fred; Burnett, Mark; Rivas, Flor; Bowsher, Aaron; Crudgington, Peter
2006-01-01
GE Energy has retrofitted brush seals into more than 19 operating steam turbines. Brush seals offer superior leakage control compared to labyrinth seals, owing to their compliant nature and ability to maintain very tight clearances to the rotating shaft. Seal designs have been established for steam turbines ranging in size from 12 MW to over 1200 MW, including fossil, nuclear, combined-cycle and industrial applications. Steam turbines present unique design challenges that must be addressed to ensure that the potential performance benefits of brush seals are realized. Brush seals can have important effects on the overall turbine system that must be taken into account to assure reliable operation. Subscale rig tests are instrumental to understanding seal behavior under simulated steam-turbine operating conditions, prior to installing brush seals in the field. This presentation discusses the technical challenges of designing brush seals for steam turbines; subscale testing; performance benefits of brush seals; overall system effects; and field applications.
NASCOM network: Ground communications reliability report
NASA Technical Reports Server (NTRS)
1973-01-01
A reliability performance analysis of the NASCOM Network circuits is reported. Network performance narrative summary is presented to include significant changes in circuit configurations, current figures, and trends in each trouble category with notable circuit totals specified. Lost time and interruption tables listing circuits which were affected by outages showing their totals category are submitted. A special analysis of circuits with low reliabilities is developed with tables depicting the performance and graphs for individual reliabilities.
Videotape Reliability: A Method of Evaluation of a Clinical Performance Examination.
ERIC Educational Resources Information Center
And Others; Liu, Philip
1980-01-01
A method of statistically analyzing clinical performance examinations for reliability and the application of this method in determining the reliability of two examinations of skill in administering anesthesia are described. Videotaped performances for the Spinal Anesthesia Skill Examination and the Anesthesia Setup and Machine Checkout Examination…
Krishnamoorthy, Vignesh P; Perumal, Rajamani; Daniel, Alfred J; Poonnoose, Pradeep M
2015-12-01
Templating of the acetabular cup size in Total Hip Replacement (THR) is normally done using conventional radiographs. As these are being replaced by digital radiographs, it has become essential to create a technique of templating using digital films. We describe a technique that involves templating the digital films using the universally available acetate templates for THR without the use of special software. Preoperative digital radiographs of the pelvis were taken with a 30 mm diameter spherical metal ball strapped over the greater trochanter. Using standard acetate templates provided by the implant company on magnified digital radiographs, the size of the metal ball (X mm) and acetabular cup (Y mm) were determined. The size of the acetabular cup to be implanted was estimated using the formula 30*Y/X. The estimated size was compared with the actual size of the cup used at surgery. Using this technique, it was possible to accurately predict the acetabular cup size in 28/40 (70%) of the hips. When the accuracy to within one size was considered, templating was correct in 90% (36/40). When assessed by two independent observers, there was good intra-observer and inter-observer reliability with intra-class correlation coefficient values greater than 0.8. It was possible to accurately and reliably predict the size of the acetabular cup, using acetate templates on digital films, without any digital templates.
Development and validation of a toddler silhouette scale.
Hager, Erin R; McGill, Adrienne E; Black, Maureen M
2010-02-01
The purpose of this study is to develop and validate a toddler silhouette scale. A seven-point scale was developed by an artist based on photographs of 15 toddlers (6 males, 9 females) varying in race/ethnicity and body size, and a list of phenotypic descriptions of toddlers of varying body sizes. Content validity, age-appropriateness, and gender and race/ethnicity neutrality were assessed among 180 pediatric health professionals and 129 parents of toddlers. Inter- and intrarater reliability and concurrent validity were assessed by having 138 pediatric health professionals match the silhouettes with photographs of toddlers. Assessments of content validity revealed that most health professionals (74.6%) and parents of toddlers (63.6%) ordered all seven silhouettes correctly, and interobserver agreement for weight status classification was high (kappa = 0.710, r = 0.827, P < 0.001). Most respondents reported that the scale represented toddlers aged 12-36 months (89%) and was gender (68.5%) and race/ethnicity (77.3%) neutral. The inter-rater reliability, based on matching silhouettes with photographs, was 0.787 (Cronbach's alpha) and the intrarater reliability was 0.855 (P < 0.001). The concurrent validity, based on the correlation between silhouette choice and the weight-for-length percentile of each toddler's photograph, was 0.633 (P < 0.001). In conclusion, a valid and reliable toddler silhouette scale that may be used for male or female toddlers, aged 12-36 months, of varying race/ethnicity was developed and evaluated. This scale may be used clinically or in research settings to assess parents' perception of and satisfaction with their toddler's body size. Interventions can be targeted toward parents who have inaccurate perceptions of or are dissatisfied with their toddler's body size.
Reliability and Validity of the Turkish Version of the Job Performance Scale Instrument.
Harmanci Seren, Arzu Kader; Tuna, Rujnan; Eskin Bacaksiz, Feride
2018-02-01
Objective measurement of the job performance of nursing staff using valid and reliable instruments is important in the evaluation of healthcare quality. A current, valid, and reliable instrument that specifically measures the performance of nurses is required for this purpose. The aim of this study was to determine the validity and reliability of the Turkish version of the Job Performance Instrument. This study used a methodological design and a sample of 240 nurses working at different units in four hospitals in Istanbul, Turkey. A descriptive data form, the Job Performance Scale, and the Employee Performance Scale were used to collect data. Data were analyzed using IBM SPSS Statistics Version 21.0 and LISREL Version 8.51. On the basis of the data analysis, the instrument was revised. Some items were deleted, and subscales were combined. The Turkish version of the Job Performance Instrument was determined to be valid and reliable to measure the performance of nurses. The instrument is suitable for evaluating current nursing roles.
Large Engine Technology Program. Task 22: Variable Geometry Concepts for Rich-Quench-Lean Combustors
NASA Technical Reports Server (NTRS)
Tacina, Robert R. (Technical Monitor); Cohen, J. M.; Padget, F. C.; Kwoka, D.; Wang, Q.; Lohmann, R. P.
2005-01-01
The objective of the task reported herein was to define, evaluate, and optimize variable geometry concepts suitable for use with a Rich-Quench-Lean (RQL) combustor. The specific intent was to identify approaches that would satisfy High Speed Civil Transport (HSCT) cycle operational requirements with regard to fuel-air ratio turndown capability, ignition, and stability margin without compromising the stringent emissions, performance, and reliability goals that this combustor would have to achieve. Four potential configurations were identified and three of these were refined and tested in a high-pressure modular RQL combustor rig. The tools used in the evolution of these concepts included models built with rapid fabrication techniques that were tested for airflow characteristics to confirm sizing and airflow management capability, spray patternation, and atomization characterization tests of these models and studies that were supported by Computational Fluid Dynamics analyses. Combustion tests were performed with each of the concepts at supersonic cruise conditions and at other critical conditions in the flight envelope, including the transition points of the variable geometry system, to identify performance, emissions, and operability impacts. Based upon the cold flow characterization, emissions results, acoustic behavior observed during the tests and consideration of mechanical, reliability, and implementation issues, the tri-swirler configuration was selected as the best variable geometry concept for incorporation in the RQL combustor evolution efforts for the HSCT.
Sample Size for Estimation of G and Phi Coefficients in Generalizability Theory
ERIC Educational Resources Information Center
Atilgan, Hakan
2013-01-01
Problem Statement: Reliability, which refers to the degree to which measurement results are free from measurement errors, as well as its estimation, is an important issue in psychometrics. Several methods for estimating reliability have been suggested by various theories in the field of psychometrics. One of these theories is the generalizability…
NASA Astrophysics Data System (ADS)
Capps, Gregory
Semiconductor products are manufactured and consumed across the world. The semiconductor industry is constantly striving to manufacture products with greater performance, improved efficiency, less energy consumption, smaller feature sizes, thinner gate oxides, and faster speeds. Customers have pushed towards zero defects and require a more reliable, higher quality product than ever before. Manufacturers are required to improve yields, reduce operating costs, and increase revenue to maintain a competitive advantage. Opportunities exist for integrated circuit (IC) customers and manufacturers to work together and independently to reduce costs, eliminate waste, reduce defects, reduce warranty returns, and improve quality. This project focuses on electrical over-stress (EOS) and re-test okay (RTOK), two top failure return mechanisms, which both make great defect reduction opportunities in customer-manufacturer relationship. Proactive continuous improvement initiatives and methodologies are addressed with emphasis on product life cycle, manufacturing processes, test, statistical process control (SPC), industry best practices, customer education, and customer-manufacturer interaction.
Performance Enhancement of a High Speed Jet Impingement System for Nonvolatile Residue Removal
NASA Technical Reports Server (NTRS)
Klausner, James F.; Mei, Renwei; Near, Steve; Stith, Rex
1996-01-01
A high speed jet impingement cleaning facility has been developed to study the effectiveness of the nonvolatile residue removal. The facility includes a high pressure air compressor which charges the k-bottles to supply high pressure air, an air heating section to vary the temperature of the high pressure air, an air-water mixing chamber to meter the water flow and generate small size droplets, and a converging- diverging nozzle to deliver the supersonic air-droplet mixture flow to the cleaning surface. To reliably quantify the cleanliness of the surface, a simple procedure for measurement and calibration is developed to relate the amount of the residue on the surface to the relative change in the reflectivity between a clean surface and the greased surface. This calibration procedure is economical, simple, reliable, and robust. a theoretical framework is developed to provide qualitative guidance for the design of the test and interpretation of the experimental results. The result documented in this report support the theoretical considerations.
CSP Manufacturing Challenges and Assembly Reliability
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2000-01-01
Although the expression of CSP is widely used by industry from suppliers to users, its implied definition had evolved as the technology has matured. There are "expert definition"- package that is up to 1.5 time die- or "interim definition". CSPs are miniature new packages that industry is starting to implement and there are many unresolved technical issues associated with their implementation. For example, in early 1997, packages with 1 mm pitch and lower were the dominant CSPs, whereas in early 1998 packages with 0.8 mm and lower became the norm for CSPs. Other changes included the use of flip chip die rather than wire bond in CSP. Nonetheless the emerging CSPs are competing with bare die assemblies and are becoming the package of choice for size reduction applications. These packages provide the benefits of small size and performance of the bare die or flip chip, with the advantage of standard die packages. The JPL-led MicrotypeBGA Consortium of enterprises representing government agencies and private companies have jointed together to pool in-kind resources for developing the quality and reliability of chip scale packages (CSPs) for a variety of projects. This talk will cover specifically the experience of our consortium on technology implementation challenges, including design and build of both standard and microvia boards, assembly of two types of test vehicles, and the most current environmental thermal cycling test results.
Paiva, Eduardo M; da Silva, Vitor H; Poppi, Ronei J; Pereira, Claudete F; Rohwedder, Jarbas J R
2018-05-12
This work reports on the use of micro- and macro-Raman measurements for quantification of mebendazole (MBZ) polymorphs A, B, and C in mixtures. Three Raman spectrophotometers were studied with a laser spot size of 3, 80 and 100 μm and spectral resolutions of 3.9, 9 and 4 cm -1 , respectively. The samples studied were ternary mixtures varying the MBZ polymorphs A and C from 0 to 100% and polymorph B from 0 to 30%. Partial Least Squares (PLS) regression models were developed using the pre-processing spectra (2nd derivative) of the ternary mixtures. The best performance was obtained when the macro-Raman configuration was applied, obtaining RMSEP values of 1.68%, 1.24% and 2.03% w/w for polymorphs A, B, and C, respectively. In general, micro-Raman presented worst results for MBZ polymorphs prediction because the spectra obtained with this configuration does not represent the bulk proportion of mixtures, which have different particle morphologies and sizes. In addition, the influence of these particle features on micro-Raman measurements was also studied. Finally, the results demonstrated that reliable analytical quantifying of MBZ polymorphs can be reached using a laser with wider area illuminated, thus enabling acquisition of more reproductive and representative spectra of the mixtures. Copyright © 2018 Elsevier B.V. All rights reserved.
Investigation of the Specht density estimator
NASA Technical Reports Server (NTRS)
Speed, F. M.; Rydl, L. M.
1971-01-01
The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
Deterministic Ethernet for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Wolff, B.
2015-09-01
Typical spacecraft systems are distributed to be able to achieve the required reliability and availability targets of the mission. However the requirements on these systems are different for launchers, satellites, human space flight and exploration missions. Launchers require typically high reliability with very short mission times whereas satellites or space exploration missions require very high availability at very long mission times. Comparing a distributed system of launchers with satellites it shows very fast reaction times in launchers versus much slower once in satellite applications. Human space flight missions are maybe most challenging concerning reliability and availability since human lives are involved and the mission times can be very long e.g. ISS. Also the reaction times of these vehicles can get challenging during mission scenarios like landing or re-entry leading to very fast control loops. In these different applications more and more autonomous functions are required to fulfil the needs of current and future missions. This autonomously leads to new requirements with respect to increase performance, determinism, reliability and availability. On the other hand side the pressure on reducing costs of electronic components in space applications is increasing, leading to the use of more and more COTS components especially for launchers and LEO satellites. This requires a technology which is able to provide a cost competitive solution for both the high reliable and available deep-space as well as the low cost “new space” markets. Future spacecraft communication standards therefore have to be much more flexible, scalable and modular to be able to deal with these upcoming challenges. The only way to fulfill these requirements is, if they are based on open standards which are used cross industry leading to a reduction of the lifecycle costs and an increase in performance. The use of a communication network that fulfills these requirements will be essential for such spacecraft’s to allow the use in launcher, satellite, human space flight and exploration missions. Using one technology and the related infrastructure for these different applications will lead to a significant reduction of complexity and would moreover lead to significant savings in size weight and power while increasing the performance of the overall system. The paper focuses on the use of the TTEthernet technology for launchers, satellites and human spaceflight and will demonstrate the scalability of the technology for the different applications. The data used is derived from the ESA TRP 7594 on “Reliable High-Speed Data Bus/Network for Safety-Oriented Missions”.
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
Eddy Current for Sizing Cracks in Canisters for Dry Storage of Used Nuclear Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Ryan M.; Jones, Anthony M.; Pardini, Allan F.
2014-01-01
The storage of used nuclear fuel (UNF) in dry canister storage systems (DCSSs) at Independent Spent Fuel Storage Installations (ISFSI) sites is a temporary measure to accommodate UNF inventory until it can be reprocessed or transferred to a repository for permanent disposal. Policy uncertainty surrounding the long-term management of UNF indicates that DCSSs will need to store UNF for much longer periods than originally envisioned. Meanwhile, the structural and leak-tight integrity of DCSSs must not be compromised. The eddy current technique is presented as a potential tool for inspecting the outer surfaces of DCSS canisters for degradation, particularly atmospheric stressmore » corrosion cracking (SCC). Results are presented that demonstrate that eddy current can detect flaws that cannot be detected reliably using standard visual techniques. In addition, simulations are performed to explore the best parameters of a pancake coil probe for sizing of SCC flaws in DCSS canisters and to identify features in frequency sweep curves that may potentially be useful for facilitating accurate depth sizing of atmospheric SCC flaws from eddy current measurements.« less
A Step Made Toward Designing Microelectromechanical System (MEMS) Structures With High Reliability
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2003-01-01
The mechanical design of microelectromechanical systems-particularly for micropower generation applications-requires the ability to predict the strength capacity of load-carrying components over the service life of the device. These microdevices, which typically are made of brittle materials such as polysilicon, show wide scatter (stochastic behavior) in strength as well as a different average strength for different sized structures (size effect). These behaviors necessitate either costly and time-consuming trial-and-error designs or, more efficiently, the development of a probabilistic design methodology for MEMS. Over the years, the NASA Glenn Research Center s Life Prediction Branch has developed the CARES/Life probabilistic design methodology to predict the reliability of advanced ceramic components. In this study, done in collaboration with Johns Hopkins University, the ability of the CARES/Life code to predict the reliability of polysilicon microsized structures with stress concentrations is successfully demonstrated.
Variable pixel size ionospheric tomography
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei
2017-06-01
A novel ionospheric tomography technique based on variable pixel size was developed for the tomographic reconstruction of the ionospheric electron density (IED) distribution. In variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by a decomposition of the lower and upper ionosphere with different pixel sizes. Thus, the lower and upper IED distribution may be very differently determined by the available data. The variable pixel size ionospheric tomography and constant pixel size tomography are similar in most other aspects. There are some differences between two kinds of models with constant and variable pixel size respectively, one is that the segments of GPS signal pay should be assigned to the different kinds of pixel in inversion; the other is smoothness constraint factor need to make the appropriate modified where the pixel change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method. Furthermore, it can be non-chided that when the effort is spent to identify the regions in a model with best data coverage. The variable pixel size method can not only greatly improve the efficiency of inversion, but also produce IED images with high fidelity which are the same as a used uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermined problem in an ill-posed inverse problem when the data coverage is irregular or less by adjusting quantitative proportion of pixels with different sizes. In comparison with constant pixel size tomography models, the variable pixel size ionospheric tomography technique achieved relatively good results in a numerical simulation. A careful validation of the reliability and superiority of variable pixel size ionospheric tomography was performed. Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% compared with conventional constant pixel size tomography models in the forward modeling.
A TEOM (tm) particulate monitor for comet dust, near Earth space, and planetary atmospheres
NASA Astrophysics Data System (ADS)
1988-04-01
Scientific missions to comets, near earth space, and planetary atmospheres require particulate and mass accumulation instrumentation for both scientific and navigation purposes. The Rupprecht & Patashnick tapered element oscillating microbalance can accurately measure both mass flux and mass distribution of particulates over a wide range of particle sizes and loadings. Individual particles of milligram size down to a few picograms can be resolved and counted, and the accumulation of smaller particles or molecular deposition can be accurately measured using the sensors perfected and toughened under this contract. No other sensor has the dynamic range or sensitivity attained by these picogram direct mass measurement sensors. The purpose of this contract was to develop and implement reliable and repeatable manufacturing methods; build and test prototype sensors; and outline a quality control program. A dust 'thrower' was to be designed and built, and used to verify performance. Characterization and improvement of the optical motion detection system and drive feedback circuitry was to be undertaken, with emphasis on reliability, low noise, and low power consumption. All the goals of the contract were met or exceeded. An automated glass puller was built and used to make repeatable tapered elements. Materials and assembly methods were standardized, and controllers and calibrated fixtures were developed and used in all phases of preparing, coating and assembling the sensors. Quality control and reliability resulted from the use of calibrated manufacturing equipment with measurable working parameters. Thermal and vibration testing of completed prototypes showed low temperature sensitivity and high vibration tolerance. An electrostatic dust thrower was used in vacuum to throw particles from 2 x 106 g to 7 x 10-12 g in size. Using long averaging times, particles as small as 0.7 to 4 x 1011 g were weighted to resolutions in the 5 to 9 x 10-13 g range. The drive circuit and optics systems were developed beyond what was anticipated in the contract, and are now virtually flight prototypes. There is already commercial interest in the developed capability of measuring picogram mass losses and gains. One area is contamination and outgassing research, both measuring picogram losses from samples and collecting products of outgassing.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance system. The Bayesian analyses are more beneficial than the classical one in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an apriori distribution with life test data to make inferences of the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed by using Bayesian and the maximum likelihood analyses. The simulation results show that the change of the true of parameter relatively to another will change the value of standard deviation in an opposite direction. For a perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over the shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimated value lines.
Steidle-Kloc, E; Wirth, W; Ruhdorfer, A; Dannhauer, T; Eckstein, F
2016-03-01
The infra-patellar fat pad (IPFP), as intra-articular adipose tissue represents a potential source of pro-inflammatory cytokines and its size has been suggested to be associated with osteoarthritis (OA) of the knee. This study examines inter- and intra-observer reliability of fat-suppressed (fs) and non-fat-suppressed (nfs) MR imaging for determination of IPFP morphological measurements as novel biomarkers. The IPFP of nine right knees of healthy Osteoarthritis Initiative participants was segmented by five readers, using fs and nfs baseline sagittal MRIs. The intra-observer reliability was determined from baseline and 1-year follow-up images. All segmentations were quality controlled (QC) by an expert reader. Reliability was expressed as root mean square coefficient of variation (RMS CV%). After QC, the inter-observer reliability for fs (nfs) imaging was 2.0% (1.1%) for IPFP volume, 2.1%/2.5% (1.6%/1.8%) for anterior/posterior surface areas, 1.8% (1.8%) for depth, and 2.1% (2.4%) for maximum sagittal area. The intra-observer reliability was 3.1% (5.0%) for volume, 2.3%/2.8% (2.5%/2.9%) for anterior/posterior surfaces, 1.9% (3.5%) for depth, and 3.3% (4.5%) for maximum sagittal area. IPFP volume from nfs images was systematically greater (+7.3%) than from fs images, but highly correlated (r=0.98). The results suggest that quantitative measurements of IPFP morphology can be performed with satisfactory reliability when expert QC is implemented. The IPFP is more clearly depicted in nfs images, and there is a small systematic off-set versus analysis from fs images. However, the high linear relationship between fs and nfs imaging suggests that fs images can be used to analyze IPFP morphology, when nfs images are not available. Copyright © 2015 Elsevier GmbH. All rights reserved.
Steidle-Kloc, E.; Wirth, W.; Ruhdorfer, A.; Dannhauer, T.; Eckstein, F.
2015-01-01
The infra-patellar fat pad (IPFP), as intra-articular adipose tissue represents a potential source of pro-inflammatory cytokines and its size has been suggested to be associated with osteoarthritis (OA) of the knee. This study examines inter- and intra-observer reliability of fat-suppressed (fs) and non-fat-suppressed (nfs) MR imaging for determination of IPFP morphological measurements as novel biomarkers. The IPFP of nine right knees of healthy Osteoarthritis Initiative participants was segmented by five readers, using fs and nfs baseline sagittal MRIs. The intra-observer reliability was determined from baseline and 1-year follow-up images. All segmentations were quality controlled (QC) by an expert reader. Reliability was expressed as root mean square coefficient of variation (RMS CV%). After QC, the inter-observer reliability for fs (nfs) imaging was 2.0% (1.1%) for IPFP volume, 2.1%/2.5% (1.6%/1.8%) for anterior/posterior surface areas, 1.8% (1.8%) for depth, and 2.1% (2.4%) for maximum sagittal area. The intra-observer reliability was 3.1% (5.0%) for volume, 2.3%/2.8% (2.5%/2.9%) for anterior/posterior surfaces, 1.9% (3.5%) for depth, and 3.3% (4.5%) for maximum sagittal area. IPFP volume from nfs images was systematically greater (+7.3%) than from fs images, but highly correlated (r = 0.98). The results suggest that quantitative measurements of IPFP morphology can be performed with satisfactory reliability when expert QC is implemented. The IPFP is more clearly depicted in nfs images, and there is a small systematic off-set versus analysis from fs images. However, the high linear relationship between fs and nfs imaging suggests that fs images can be used to analyze IPFP morphology, when nfs images are not available. PMID:26569532
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs
NASA Astrophysics Data System (ADS)
Basalamah, Anas; Sato, Takuro
For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, a reliable transmission of packets within short delivery delay is needed. Moreover, reliability is crucial to the performance of error intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request forming what is knows as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a Multicast MAC protocol that enhances WLAN reliability by using Adaptive FEC and study it's performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
NASA Astrophysics Data System (ADS)
Lee, Tae-Kyu; Chen, Zhiqiang; Guirguis, Cherif; Akinade, Kola
2017-10-01
The stability of solder interconnects in a mechanical shock environment is crucial for large body size flip-chip ball grid array (FCBGA) electronic packages. Additionally, the junction temperature increases with higher electric power condition, which brings the component into an elevated temperature environment, thus introducing another consideration factor for mechanical stability of interconnection joints. Since most of the shock performance data available were produced at room temperature, the effect of elevated temperature is of interest to ensure the reliability of the device in a mechanical shock environment. To achieve a stable␣interconnect in a dynamic shock environment, the interconnections must tolerate mechanical strain, which is induced by the shock wave input and reaches the particular component interconnect joint. In this study, large body size (52.5 × 52.5 mm2) FCBGA components assembled on 2.4-mm-thick boards were tested with various isothermal pre-conditions and testing conditions. With a heating element embedded in the test board, a test temperature range from room temperature to 100°C was established. The effects of elevated temperature on mechanical shock performance were investigated. Failure and degradation mechanisms are identified and discussed based on the microstructure evolution and grain structure transformations.
Hybrid Propulsion Technology Program
NASA Technical Reports Server (NTRS)
Jensen, G. E.; Holzman, A. L.
1990-01-01
Future launch systems of the United States will require improvements in booster safety, reliability, and cost. In order to increase payload capabilities, performance improvements are also desirable. The hybrid rocket motor (HRM) offers the potential for improvements in all of these areas. The designs are presented for two sizes of hybrid boosters, a large 4.57 m (180 in.) diameter booster duplicating the Advanced Solid Rocket Motor (ASRM) vacuum thrust-time profile and smaller 2.44 m (96 in.), one-quater thrust level booster. The large booster would be used in tandem, while eight small boosters would be used to achieve the same total thrust. These preliminary designs were generated as part of the NASA Hybrid Propulsion Technology Program. This program is the first phase of an eventual three-phaes program culminating in the demonstration of a large subscale engine. The initial trade and sizing studies resulted in preferred motor diameters, operating pressures, nozzle geometry, and fuel grain systems for both the large and small boosters. The data were then used for specific performance predictions in terms of payload and the definition and selection of the requirements for the major components: the oxidizer feed system, nozzle, and thrust vector system. All of the parametric studies were performed using realistic fuel regression models based upon specific experimental data.
Reliability of Radioisotope Stirling Convertor Linear Alternator
NASA Technical Reports Server (NTRS)
Shah, Ashwin; Korovaichuk, Igor; Geng, Steven M.; Schreiber, Jeffrey G.
2006-01-01
Onboard radioisotope power systems being developed and planned for NASA s deep-space missions would require reliable design lifetimes of up to 14 years. Critical components and materials of Stirling convertors have been undergoing extensive testing and evaluation in support of a reliable performance for the specified life span. Of significant importance to the successful development of the Stirling convertor is the design of a lightweight and highly efficient linear alternator. Alternator performance could vary due to small deviations in the permanent magnet properties, operating temperature, and component geometries. Durability prediction and reliability of the alternator may be affected by these deviations from nominal design conditions. Therefore, it is important to evaluate the effect of these uncertainties in predicting the reliability of the linear alternator performance. This paper presents a study in which a reliability-based methodology is used to assess alternator performance. The response surface characterizing the induced open-circuit voltage performance is constructed using 3-D finite element magnetic analysis. Fast probability integration method is used to determine the probability of the desired performance and its sensitivity to the alternator design parameters.
de Jong, Lex D; van Meeteren, Annemiek; Emmelot, Cornelis H; Land, Nanne E; Dijkstra, Pieter U
2018-03-01
To determine reliability of the ABILHAND-Kids, explore sources of variation associated with these measurement results, and generate repeatability coefficients. A reliability study with a repeated measures design was performed in an ambulatory rehabilitation care department from a rehabilitation center, and a center for special education. A physician, an occupational therapist, and parents of 27 children with spastic cerebral palsy independently rated the children's manual capacity when performing 21 standardized tasks of the ABILHAND-Kids from video recordings twice with a three week time interval (27 first-, and 25 second video recordings available). Parents additionally rated their children's performance based on their own perception of their child's ability to perform manual activities in everyday life, resulting in eight ratings per child. ABILHAND-Kids ratings were systematically different between observers, sessions, and rating method. Participant × observer interaction (66%) and residual variance (20%) contributed the most to error variance (9%). Test-retest reliability was 0.92. Repeatability coefficients (between 0.81 and 1.82 logit points) were largest for the parents' performance-based ratings. ABILHAND-Kids scores can be reliably used as a performance- and capacity-based rating method across different raters. Parents' performance-based ratings are less reliable than their capacity-based ratings. Resulting repeatability coefficients can be used to interpret ABILHAND-Kids ratings with more confidence. Implications for Rehabilitation The ABILHAND-Kids is a valuable tool to assess a child's unimanual and bimanual upper limb activities. The reliability of the ABILHANDS-Kids is good across different observers as a performance- and capacity-based rating method. Parents' performance-based ratings are less reliable than their capacity-based ones. This study has generated repeatability coefficients for clinical decision making.
Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.
Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel
2016-09-08
We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the road toward a theoretical protocol to systematically compute the solvation energies of complex organic ions.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. A R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models that approximate RMAT, can then be utilized for fast analysis of operational requirements, for lifecycle cost estimating and for multidisciplinary sign optimization.
Evaluating success of curettage in the surgical treatment of endometrial polyps.
Hafizi, Leili; Mousavifar, Nezhat; Zirak, Nahid; Khadem, Nayereh; Davarpanah, Sousan; Akhondi, Mohsen
2015-02-01
To determine treatment efficacy of curettage on endometrial polyp. The quasi-experimental pre-and-post study was conducted in 2011-12 at the gynaecology department of Imam Reza Hospital, Mashhad, Iran, and comprised patients who underwent hysteroscopy for endometrial polyp. Location, size, number and base condition of the polyps were recorded before the patient underwent curettage. Hysteroscopy was then performed and the condition of the remaining polyps was compared with initial findings. Also, the remaining polyps were resected. SPSS 13 was used for statistical analysis. There were 51 patients in the study with a mean age of 33.14 ± 8.19 years (range: 23-59 years)Besides, there were 82 polyps; 38(46.3%) having a narrow base, and 44(53.7%) having a wide base. The mean polyp size was 2.39 ± 2.63cm.After performing curettage, 23 (28.0%) polyps were removed completely, 39(47.6%) had size reduction, and 20(24.4%) had no change in size. Curettage could not significantly remove polyps (p < 0.001). Polyps smaller than 2cm were more likely to have been removed compared to the bigger ones (p = 0.003).Polyps with wide base were more significantly removed than those with narrow base (p < 0.001).Further, those with wide base and also smaller than 2 cm were removed more significantly than others (p < 0.001).The location of polyps had no effect on removal probability by curettage (p = 0.114). Curettage was not found to be a reliable method for endometrial polyp removal. If hysteroscopy is not accessible, the size of the polyp should be determined by vaginal sonograghy to estimate the probability of its removal by curettage.
Fatigue crack sizing in rail steel using crack closure-induced acoustic emission waves
NASA Astrophysics Data System (ADS)
Li, Dan; Kuang, Kevin Sze Chiang; Ghee Koh, Chan
2017-06-01
The acoustic emission (AE) technique is a promising approach for detecting and locating fatigue cracks in metallic structures such as rail tracks. However, it is still a challenge to quantify the crack size accurately using this technique. AE waves can be generated by either crack propagation (CP) or crack closure (CC) processes and classification of these two types of AE waves is necessary to obtain more reliable crack sizing results. As the pre-processing step, an index based on wavelet power (WP) of AE signal is initially established in this paper in order to distinguish between the CC-induced AE waves and their CP-induced counterparts. Here, information embedded within the AE signal was used to perform the AE wave classification, which is preferred to the use of real-time load information, typically adopted in other studies. With the proposed approach, it renders the AE technique more amenable to practical implementation. Following the AE wave classification, a novel method to quantify the fatigue crack length was developed by taking advantage of the CC-induced AE waves, the count rate of which was observed to be positively correlated with the crack length. The crack length was subsequently determined using an empirical model derived from the AE data acquired during the fatigue tests of the rail steel specimens. The performance of the proposed method was validated by experimental data and compared with that of the traditional crack sizing method, which is based on CP-induced AE waves. As a significant advantage over other AE crack sizing methods, the proposed novel method is able to estimate the crack length without prior knowledge of the initial crack length, integration of AE data or real-time load amplitude. It is thus applicable to the health monitoring of both new and existing structures.
Garcia, Tiago Severo; Rech, Tatiana Helena; Leitão, Cristiane Bauermann
2017-01-01
Imaging studies are expected to produce reliable information regarding the size and fat content of the pancreas. However, the available studies have produced inconclusive results. The aim of this study was to perform a systematic review and meta-analysis of imaging studies assessing pancreas size and fat content in patients with type 1 diabetes (T1DM) and type 2 diabetes (T2DM). Medline and Embase databases were performed. Studies evaluating pancreatic size (diameter, area or volume) and/or fat content by ultrasound, computed tomography, or magnetic resonance imaging in patients with T1DM and/or T2DM as compared to healthy controls were selected. Seventeen studies including 3,403 subjects (284 T1DM patients, 1,139 T2DM patients, and 1,980 control subjects) were selected for meta-analyses. Pancreas diameter, area, volume, density, and fat percentage were evaluated. Pancreatic volume was reduced in T1DM and T2DM vs. controls (T1DM vs. controls: -38.72 cm3, 95%CI: -52.25 to -25.19, I2 = 70.2%, p for heterogeneity = 0.018; and T2DM vs. controls: -12.18 cm3, 95%CI: -19.1 to -5.25, I2 = 79.3%, p for heterogeneity = 0.001). Fat content was higher in T2DM vs. controls (+2.73%, 95%CI 0.55 to 4.91, I2 = 82.0%, p for heterogeneity<0.001). Individuals with T1DM and T2DM have reduced pancreas size in comparison with control subjects. Patients with T2DM have increased pancreatic fat content.
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
Development and Reliability Testing of a Fast-Food Restaurant Observation Form.
Rimkus, Leah; Ohri-Vachaspati, Punam; Powell, Lisa M; Zenk, Shannon N; Quinn, Christopher M; Barker, Dianne C; Pugach, Oksana; Resnick, Elissa A; Chaloupka, Frank J
2015-01-01
To develop a reliable observational data collection instrument to measure characteristics of the fast-food restaurant environment likely to influence consumer behaviors, including product availability, pricing, and promotion. The study used observational data collection. Restaurants were in the Chicago Metropolitan Statistical Area. A total of 131 chain fast-food restaurant outlets were included. Interrater reliability was measured for product availability, pricing, and promotion measures on a fast-food restaurant observational data collection instrument. Analysis was done with Cohen's κ coefficient and proportion of overall agreement for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. Interrater reliability, as measured by average κ coefficient, was .79 for menu characteristics, .84 for kids' menu characteristics, .92 for food availability and sizes, .85 for beverage availability and sizes, .78 for measures on the availability of nutrition information,.75 for characteristics of exterior advertisements, and .62 and .90 for exterior and interior characteristics measures, respectively. For continuous measures, average ICC was .88 for food pricing measures, .83 for beverage prices, and .65 for counts of exterior advertisements. Over 85% of measures demonstrated substantial or almost perfect agreement. Although some measures required revision or protocol clarification, results from this study suggest that the instrument may be used to reliably measure the fast-food restaurant environment.
Jamil, Muhammad; Ng, E Y K
2015-07-01
Radiofrequency ablation (RFA) has been increasingly used in treating cancer for multitude of situations in various tissue types. To perform the therapy safely and reliably, the effect of critical parameters needs to be known beforehand. Temperature plays an important role in the outcome of the therapy and any uncertainties in temperature assessment can be lethal. This study presents the RFA case of fixed tip temperature where we've analysed the effect of electrical conductivity, thermal conductivity and blood perfusion rate of the tumour and surrounding normal tissue on the radiofrequency ablation. Ablation volume was chosen as the characteristic to be optimised and temperature control was achieved via PID controller. The effect of all 6 parameters each having 3 levels was quantified with minimum number of experiments harnessing the fractional factorial characteristic of Taguchi's orthogonal arrays. It was observed that as the blood perfusion increases the ablation volume decreases. Increasing electrical conductivity of the tumour results in increase of ablation volume whereas increase in normal tissue conductivity tends to decrease the ablation volume and vice versa. Likewise, increasing thermal conductivity of the tumour results in enhanced ablation volume whereas an increase in thermal conductivity of the surrounding normal tissue has a debilitating effect on the ablation volume and vice versa. With increase in the size of the tumour (i.e., 2-3cm) the effect of each parameter is not linear. The parameter effect varies with change in size of the tumour that is manifested by the different gradient observed in ablation volume. Most important is the relative insensitivity of ablation volume to blood perfusion rate for smaller tumour size (2cm) that is also in accordance with the previous results presented in literature. These findings will provide initial insight for safe, reliable and improved treatment planning perceptively. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
Yamato, Tie Parma; Maher, Chris; Koes, Bart; Moseley, Anne
2017-06-01
The Physiotherapy Evidence Database (PEDro) scale has been widely used to investigate methodological quality in physiotherapy randomized controlled trials; however, its validity has not been tested for pharmaceutical trials. The aim of this study was to investigate the validity and interrater reliability of the PEDro scale for pharmaceutical trials. The reliability was also examined for the Cochrane Back and Neck (CBN) Group risk of bias tool. This is a secondary analysis of data from a previous study. We considered randomized placebo controlled trials evaluating any pain medication for chronic spinal pain or osteoarthritis. Convergent validity was evaluated by correlating the PEDro score with the summary score of the CBN risk of bias tool. The construct validity was tested using a linear regression analysis to determine the degree to which the total PEDro score is associated with treatment effect sizes, journal impact factor, and the summary score for the CBN risk of bias tool. The interrater reliability was estimated using the Prevalence and Bias Adjusted Kappa coefficient and 95% confidence interval (CI) for the PEDro scale and CBN risk of bias tool. Fifty-three trials were included, with 91 treatment effect sizes included in the analyses. The correlation between PEDro scale and CBN risk of bias tool was 0.83 (95% CI 0.76-0.88) after adjusting for reliability, indicating strong convergence. The PEDro score was inversely associated with effect sizes, significantly associated with the summary score for the CBN risk of bias tool, and not associated with the journal impact factor. The interrater reliability for each item of the PEDro scale and CBN risk of bias tool was at least substantial for most items (>0.60). The intraclass correlation coefficient for the PEDro score was 0.80 (95% CI 0.68-0.88), and for the CBN, risk of bias tool was 0.81 (95% CI 0.69-0.88). There was evidence for the convergent and construct validity for the PEDro scale when used to evaluate methodological quality of pharmacological trials. Both risk of bias tools have acceptably high interrater reliability. Copyright © 2017 Elsevier Inc. All rights reserved.
A proton irradiation test facility for space research in Ankara, Turkey
NASA Astrophysics Data System (ADS)
Gencer, Ayşenur; Yiǧitoǧlu, Merve; Bilge Demirköz, Melahat; Efthymiopoulos, Ilias
2016-07-01
Space radiation often affects the electronic components' performance during the mission duration. In order to ensure reliable performance, the components must be tested to at least the expected dose that will be received in space, before the mission. Accelerator facilities are widely used for such irradiation tests around the world. Turkish Atomic Energy Authority (TAEA) has a 15MeV to 30MeV variable proton cyclotron in Ankara and the facility's main purpose is to produce radioisotopes in three different rooms for different target systems. There is also an R&D room which can be used for research purposes. This paper will detail the design and current state of the construction of a beamline to perform Single Event Effect (SEE) tests in Ankara for the first time. ESA ESCC No.25100 Standard Single Event Effect Test Method and Guidelines is being considered for these SEE tests. The proton beam kinetic energy must be between 20MeV and 200MeV according to the standard. While the proton energy is suitable for SEE tests, the beam size must be 15.40cm x 21.55cm and the flux must be between 10 ^{5} p/cm ^{2}/s to at least 10 ^{8} p/cm ^{2}/s according to the standard. The beam size at the entrance of the R&D room is mm-sized and the current is variable between 10μA and 1.2mA. Therefore, a defocusing beam line has been designed to enlarge the beam size and reduce the flux value. The beam line has quadrupole magnets to enlarge the beam size and the collimators and scattering foils are used for flux reduction. This facility will provide proton fluxes between 10 ^{7} p/cm ^{2}/s and 10 ^{10} p/cm ^{2}/s for the area defined in the standard when completed. Also for testing solar cells developed for space, the proton beam energy will be lowered below 10MeV. This project has been funded by Ministry of Development in Turkey and the beam line construction will finish in two years and SEE tests will be performed for the first time in Turkey.
Performance of an X-Ray Microcalorimeter with a 240 Micron Absorber and a 50 Micron TES Bilayer
NASA Technical Reports Server (NTRS)
Miniussi, Antoine R.; Adams, Joseph S.; Bandler, Simon R.; Chervenak, James A.; Datesman, Aaron M.; Eckart, Megan E.; Ewin, Audrey J.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.;
2017-01-01
We have been developing superconducting transition-edge sensor (TES) microcalorimeters for a variety of potential astrophysics missions, including Athena. The X-ray Integral Field Unit (X-IFU) instrument on this mission requires close-packed pixels on a 0.25 mm pitch, and high quantum efficiency between 0.2 and 12 keV. The traditional approach within our group has been to use square TES bilayers on molybdenum and gold that are between 100 and 140 microns in size, deposited on silicon nitride membranes to provide a weak thermal conductance to a 50 mK heat bath temperature. It has been shown that normal metal stripes on top of the bilayer are needed to keep the unexplained noise at a level consistent with the expected based upon estimates for the non-equilibrium non-linear Johnson noise.In this work we describe a new approach in which we use a square TES bilayer that is 50 microns in size. While the weak link effect is much stronger in this size of TES, we have found that excellent spectral performance can be achieved without the need for any normal metal strips on top of the TES. A spectral performance of 1.58 eV at 6 KeV has been achieved, the best resolution seen in any of our devices with this pixel size. The absence of normal metal stripes has led to more uniform transition shapes, and more reliable excellent spectral performance. The smaller TES size has meant that that the thermal conductance to the heat bath, determined by the perimeter length of the TES and the membrane thickness, is lower than on previous devices, and thus has a lower count rate capability. This is an advantage for low count-rate applications where the slower speed enables easier multiplexing in the read-out, thus potential higher multiplexing factors. In order to recover the higher count rate capabilities, a potential path exits using thicker silicon nitride membranes to increase the thermal conductance to the heat bath.
Faletti, Riccardo; Gatti, Marco; Cosentino, Aurelio; Bergamasco, Laura; Cura Stura, Erik; Garabello, Domenica; Pennisi, Giovanni; Salizzoni, Stefano; Veglia, Simona; Ottavio, Davini; Rinaldi, Mauro; Fonio, Paolo
2018-05-26
to determine reliability and reproducibility of measurements of aortic annulus in 3D models printed from cardiovascular computed tomography (CCT) images. Retrospective study on the records of 20 patients who underwent aortic valve replacement (AVR) with pre-surgery annulus assessment by CCT and intra-operative sizing by Hegar dilators (IOS). 3D models were fabricated by fused deposition modelling of thermoplastic polyurethane filaments. For each patient, two 3D models were independently segmented, modelled and printed by two blinded "manufacturers": a radiologist and a radiology technician. Two blinded cardiac surgeons performed the annulus diameter measurements by Hegar dilators on the two sets of models. Matched data from different measurements were analyzed with Wilcoxon test, Bland-Altmann plot and within-subject ANOVA. No significant differences were found among the measurements made by each cardiac surgeon on the same 3D model (p = 0.48) or on the 3D models printed by different manufacturers (p = 0.25); also, no intraobserver variability (p = 0.46). The annulus diameter measured on 3D models showed good agreement with the reference CCT measurement (p = 0.68) and IOH sizing (p = 0.11). Time and cost per model were: model creation ∼10-15 min; printing time ∼60 min; post-processing ∼5min; material cost ∼1€. CONCLUSION: 3D printing of aortic annulus can offer reliable, not expensive patient-specific information to be used in the pre-operative planning of AVR or transcatheter aortic valve implantation (TAVI). Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Eyes wide open: Pupil size as a proxy for inhibition in the masked-priming paradigm.
Geller, Jason; Still, Mary L; Morris, Alison L
2016-05-01
A core assumption underlying competitive-network models of word recognition is that in order for a word to be recognized, the representations of competing orthographically similar words must be inhibited. This inhibitory mechanism is revealed in the masked-priming lexical-decision task (LDT) when responses to orthographically similar word prime-target pairs are slower than orthographically different word prime-target pairs (i.e., inhibitory priming). In English, however, behavioral evidence for inhibitory priming has been mixed. In the present study, we utilized a physiological correlate of cognitive effort never before used in the masked-priming LDT, pupil size, to replicate and extend behavioral demonstrations of inhibitory effects (i.e., Nakayama, Sears, & Lupker, Journal of Experimental Psychology: Human Perception and Performance, 34, 1236-1260, 2008, Exp. 1). Previous research had suggested that pupil size is a reliable indicator of cognitive load, making it a promising index of lexical inhibition. Our pupillometric data replicated and extended previous behavioral findings, in that inhibition was obtained for orthographically similar word prime-target pairs. However, our response time data provided only a partial replication of Nakayama et al. Journal of Experimental Psychology: Human Perception and Performance, 34, 1236-1260, 2008. These results provide converging lines of evidence that inhibition operates in word recognition and that pupillometry is a useful addition to word recognition researchers' toolbox.
Ab initio molecular dynamics in a finite homogeneous electric field.
Umari, P; Pasquarello, Alfredo
2002-10-07
We treat homogeneous electric fields within density functional calculations with periodic boundary conditions. A nonlocal energy functional depending on the applied field is used within an ab initio molecular dynamics scheme. The reliability of the method is demonstrated in the case of bulk MgO for the Born effective charges, and the high- and low-frequency dielectric constants. We evaluate the static dielectric constant by performing a damped molecular dynamics in an electric field and avoiding the calculation of the dynamical matrix. Application of this method to vitreous silica shows good agreement with experiment and illustrates its potential for systems of large size.
Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco
NASA Astrophysics Data System (ADS)
Bounoua, Z.; Mechaqrane, A.
2018-05-01
An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performances of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we have first applied a set of check procedures to test the quality of hourly GHI measurements. We then eliminate the erroneous values which are generally due to measurement or the cosine effect errors. Statistical analysis show that the annual mean daily values of GHI is of approximately 5 kWh/m²/day. Daily monthly mean values and other parameter are also calculated.
Design Considerations of a Solid State Thermal Energy Storage
NASA Astrophysics Data System (ADS)
Janbozorgi, Mohammad; Houssainy, Sammy; Thacker, Ariana; Ip, Peggy; Ismail, Walid; Kavehpour, Pirouz
2016-11-01
With the growing governmental restrictions on carbon emission, renewable energies are becoming more prevalent. A reliable use of a renewable source however requires a built-in storage to overcome the inherent intermittent nature of the available energy. Thermal design of a solid state energy storage has been investigated for optimal performance. The impact of flow regime, laminar vs. turbulent, on the design and sizing of the system is also studied. The implications of low thermal conductivity of the storage material are discussed and a design that maximizes the round trip efficiency is presented. This study was supported by Award No. EPC-14-027 Granted by California Energy Commission (CEC).
Fiber-optic interconnection networks for spacecraft
NASA Technical Reports Server (NTRS)
Powers, Robert S.
1992-01-01
The overall goal of this effort was to perform the detailed design, development, and construction of a prototype 8x8 all-optical fiber optic crossbar switch using low power liquid crystal shutters capable of operation in a network with suitable fiber optic transmitters and receivers at a data rate of 1 Gb/s. During the earlier Phase 1 feasibility study, it was determined that the all-optical crossbar system had significant advantages compared to electronic crossbars in terms of power consumption, weight, size, and reliability. The result is primarily due to the fact that no optical transmitters and receivers are required for electro-optic conversion within the crossbar switch itself.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher- order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
Cyclic Spin Testing of Superalloy Disks With a Dual Grain Microstructure
NASA Technical Reports Server (NTRS)
Gayda, John; Kantzos, Pete
2005-01-01
An aggressive cyclic spin test program was run to verify the reliability of superalloy disks with a dual grain structure, fine grain bore and coarse grain rim, utilizing a disk design with web holes bisecting the grain size transition zone. Results of these tests were compared with conventional disks with uniform grain structures. Analysis of the test results indicated the cyclic performance of disks with a dual grain structure could be estimated to a level of accuracy which does not appear to prohibit the use of this technology in advanced gas turbine engines, although further refinement of lifing methodology is clearly warranted.
Critical factors to achieve low voltage- and capacitance-based organic field-effect transistors.
Jang, Mi; Park, Ji Hoon; Im, Seongil; Kim, Se Hyun; Yang, Hoichang
2014-01-15
Hydrophobic organo-compatible but low-capacitance dielectrics (10.5 nFcm(-2) ), polystyrene-grafted SiO2 could induce surface-mediated large crystal grains of face-to-face stacked triethylsilylethynyl anthradithiophene (TES-ADT), producing more efficient charge-carrier transport, in comparison to μm-sized pentacene crystals containing a face-to-edge packing. Low-voltage operating TES-ADT OFETs showed good device performance (μFET ≈ 1.3 cm(2) V(-1) s(-1) , Vth ≈ 0.5 V, SS ≈ 0.2 V), as well as excellent device reliability. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Validation of an instrument to evaluate health promotion at schools
Pinto, Raquel Oliveira; Pattussi, Marcos Pascoal; Fontoura, Larissa do Prado; Poletto, Simone; Grapiglia, Valenca Lemes; Balbinot, Alexandre Didó; Teixeira, Vanessa Andina; Horta, Rogério Lessa
2016-01-01
ABSTRACT OBJECTIVE To validate an instrument designed to assess health promotion in the school environment. METHODS A questionnaire, based on guidelines from the World Health Organization and in line with the Brazilian school health context, was developed to validate the research instrument. There were 60 items in the instrument that included 40 questions for the school manager and 20 items with direct observations made by the interviewer. The items’ content validation was performed using the Delphi technique, with the instrument being applied in 53 schools from two medium-sized cities in the South region of Brazil. Reliability (Cronbach’s alpha and split-half) and validity (principal component analysis) analyses were performed. RESULTS The final instrument remained composed of 28 items, distributed into three dimensions: pedagogical, structural and relational. The resulting components showed good factorial loads (> 0.4) and acceptable reliability (> 0.6) for most items. The pedagogical dimension identifies educational activities regarding drugs and sexuality, violence and prejudice, auto care and peace and quality of life. The structural dimension is comprised of access, sanitary structure, and conservation and equipment. The relational dimension includes relationships within the school and with the community. CONCLUSIONS The proposed instrument presents satisfactory validity and reliability values, which include aspects relevant to promote health in schools. Its use allows the description of the health promotion conditions to which students from each educational institution are exposed. Because this instrument includes items directly observed by the investigator, it should only be used during periods when there are full and regular activities at the school in question. PMID:26982958
Hamrin, Tova Hannegård; Radell, Peter J; Fläring, Urban; Berner, Jonas; Eksborg, Staffan
2017-12-28
The aim of the present study was to evaluate the performance of regional oxygen saturation (rSO 2 ) monitoring with near infrared spectroscopy (NIRS) during pediatric inter-hospital transports and to optimize processing of the electronically stored data. Cerebral (rSO 2 -C) and abdominal (rSO 2 -A) NIRS sensors were used during transport in air ambulance and connecting ground ambulance. Data were electronically stored by the monitor during transport, extracted and analyzed off-line after the transport. After removal of all zero and floor effect values, the Savitzky-Golay algorithm of data smoothing was applied on the NIRS-signal. The second order of smoothing polynomial was used and the optimal number of neighboring points for the smoothing procedure was evaluated. NIRS-data from 38 pediatric patients was examined. Reliability, defined as measurements without values of 0 or 15%, was acceptable during transport (> 90% of all measurements). There were, however, individual patients with < 90% reliable measurements during transport, while no patient was found to have < 90% reliable measurements in hospital. Satisfactory noise reduction of the signal, without distortion of the underlying information, was achieved when 20-50 neighbors ("window-size") were used. The use of NIRS for measuring rSO 2 in clinical studies during pediatric transport in ground and air-ambulance is feasible but hampered by unreliable values and signal interference. By applying the Savitzky-Golay algorithm, the signal-to-noise ratio was improved and enabled better post-hoc signal evaluation.
Psychometric properties of the Mayo Elbow Performance Score.
Celik, Derya
2015-06-01
To translate and culturally adapt the Mayo Elbow Performance Score (MEPS), a widely used instrument for evaluating disability associated with elbow injuries, into Turkish (MEPS-T) and to determine psychometric properties of the translated version. The MEPS was translated into Turkish using published methodological guidelines. The measurement properties of the MEPS-T (construct validity and floor and ceiling effects) were tested in 91 patients with elbow pathology. The reproducibility of the MEPS-T was tested in 59 patients over 7-14 days. The responsiveness of the MEPS-T was tested in a subgroup of 46 patients diagnosed with lateral epicondylitis and who received conservative treatment for 6 weeks. The interclass correlation coefficient (ICC) was used to estimate the test-retest reliability. The construct validity was analyzed with the disabilities of the arm, shoulder and hand (DASH), Visual Analog Scale (VAS) and the Short Form 36 (SF-36). Effect size (ES) was used to assess the responsiveness. The distribution of floor and ceiling effects was determined. The MEPS-T showed very good test-retest reliability (ICC 0.89). The correlation coefficients between the MEPS-T and DASH and VAS were -0.61 and -0.53, respectively (p < 0.001). The highest correlations were between the MEPS-T and the mental component summary (r = 0.47, p = 0.001) and role emotional (r = 0.45, p = 0.001). The MEPS-T ES, 0.50, was moderate (95% CI 0.33-0.62). We observed no ceiling or floor effects. The MEPS-T represents a valid, reliable and moderately responsive instrument for evaluating patients with elbow disease.
Boileau, C; Martel-Pelletier, J; Abram, F; Raynauld, J-P; Troncy, E; D'Anjou, M-A; Moreau, M; Pelletier, J-P
2008-07-01
Osteoarthritis (OA) structural changes take place over decades in humans. MRI can provide precise and reliable information on the joint structure and changes over time. In this study, we investigated the reliability of quantitative MRI in assessing knee OA structural changes in the experimental anterior cruciate ligament (ACL) dog model of OA. OA was surgically induced by transection of the ACL of the right knee in five dogs. High resolution three dimensional MRI using a 1.5 T magnet was performed at baseline, 4, 8 and 26 weeks post surgery. Cartilage volume/thickness, cartilage defects, trochlear osteophyte formation and subchondral bone lesion (hypersignal) were assessed on MRI images. Animals were killed 26 weeks post surgery and macroscopic evaluation was performed. There was a progressive and significant increase over time in the loss of knee cartilage volume, the cartilage defect and subchondral bone hypersignal. The trochlear osteophyte size also progressed over time. The greatest cartilage loss at 26 weeks was found on the tibial plateaus and in the medial compartment. There was a highly significant correlation between total knee cartilage volume loss or defect and subchondral bone hypersignal, and also a good correlation between the macroscopic and the MRI findings. This study demonstrated that MRI is a useful technology to provide a non-invasive and reliable assessment of the joint structural changes during the development of OA in the ACL dog model. The combination of this OA model with MRI evaluation provides a promising tool for the evaluation of new disease-modifying osteoarthritis drugs (DMOADs).
Nuechterlein, Keith H.; Green, Michael F.; Calkins, Monica E.; Greenwood, Tiffany A.; Gur, Raquel E.; Gur, Ruben C.; Lazzeroni, Laura C.; Light, Gregory A.; Radant, Allen D.; Seidman, Larry J.; Siever, Larry J.; Silverman, Jeremy M.; Sprock, Joyce; Stone, William S.; Sugar, Catherine A.; Swerdlow, Neal R.; Tsuang, Debby W.; Tsuang, Ming T.; Turetsky, Bruce I.; Braff, David L.
2015-01-01
Attention/vigilance impairments are present in individuals with schizophrenia across psychotic and remitted states and in their first-degree relatives. An important question is whether deficits in attention/vigilance can be consistently and reliably measured across sites varying in many participant demographic, clinical, and functional characteristics, as needed for large-scale genetic studies of endophenotypes. We examined Continuous Performance Test (CPT) data from Phase 2 of the Consortium on the Genetics of Schizophrenia (COGS-2), the largest-scale assessment of cognitive and psychophysiological endophenotypes relevant to schizophrenia. CPT data from 2251 participants from five sites were examined. A perceptual-load vigilance task (the Degraded Stimulus CPT or DS-CPT) and a memory-load vigilance task (CPT - Identical Pairs or CPT-IP) were utilized. Schizophrenia patients performed more poorly than healthy comparison subjects (HCS) across sites, despite significant site differences in participant age, sex, education, and racial distribution. Patient-HCS differences in signal/noise discrimination (d’) in the DS-CPT varied significantly across sites, but averaged a medium effect size. CPT-IP performance showed large patient-HCS differences across sites. Poor CPT performance was independent of or weakly correlated with symptom severity, but was significantly associated with lower educational achievement and functional capacity. Current smoking was associated with poorer CPT-IP d’. Patients taking both atypical and typical antipsychotic medication performed more poorly than those on no or atypical antipsychotic medications, likely reflecting their greater severity of illness. We conclude that CPT deficits in schizophrenia can be reliably detected across sites, are relatively independent of current symptom severity, and are related to functional capacity. PMID:25749017
Nuechterlein, Keith H; Green, Michael F; Calkins, Monica E; Greenwood, Tiffany A; Gur, Raquel E; Gur, Ruben C; Lazzeroni, Laura C; Light, Gregory A; Radant, Allen D; Seidman, Larry J; Siever, Larry J; Silverman, Jeremy M; Sprock, Joyce; Stone, William S; Sugar, Catherine A; Swerdlow, Neal R; Tsuang, Debby W; Tsuang, Ming T; Turetsky, Bruce I; Braff, David L
2015-04-01
Attention/vigilance impairments are present in individuals with schizophrenia across psychotic and remitted states and in their first-degree relatives. An important question is whether deficits in attention/vigilance can be consistently and reliably measured across sites varying in many participant demographic, clinical, and functional characteristics, as needed for large-scale genetic studies of endophenotypes. We examined Continuous Performance Test (CPT) data from phase 2 of the Consortium on the Genetics of Schizophrenia (COGS-2), the largest-scale assessment of cognitive and psychophysiological endophenotypes relevant to schizophrenia. The CPT data from 2251 participants from five sites were examined. A perceptual-load vigilance task (the Degraded Stimulus CPT or DS-CPT) and a memory-load vigilance task (CPT-Identical Pairs or CPT-IP) were utilized. Schizophrenia patients performed more poorly than healthy comparison subjects (HCS) across sites, despite significant site differences in participant age, sex, education, and racial distribution. Patient-HCS differences in signal/noise discrimination (d') in the DS-CPT varied significantly across sites, but averaged a medium effect size. CPT-IP performance showed large patient-HCS differences across sites. Poor CPT performance was independent of or weakly correlated with symptom severity, but was significantly associated with lower educational achievement and functional capacity. Current smoking was associated with poorer CPT-IP d'. Patients taking both atypical and typical antipsychotic medication performed more poorly than those on no or atypical antipsychotic medications, likely reflecting their greater severity of illness. We conclude that CPT deficits in schizophrenia can be reliably detected across sites, are relatively independent of current symptom severity, and are related to functional capacity. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Davis, Scott; Lichter, Michael; Raible, Daniel
2016-01-01
Emergent data-intensive missions coupled with dramatic reductions in spacecraft size plus an increasing number of space-based missions necessitates new high performance, compact and low cost communications technology. Free space optical communications offer advantages including orders of magnitude increase for data rate performance, increased security, immunity to jamming and lack of frequency allocation requirements when compared with conventional radio frequency (RF) means. The spatial coherence and low divergence associated with the optical frequencies of laser communications lends themselves to superior performance, but this increased directionality also creates one of the primary technical challenges in establishing a laser communications link by repeatedly and reliably pointing the beam onto the receive aperture. Several solutions have emerged from wide angle (slow) mechanical articulation systems, fine (fast) steering mirrors and rotating prisms, inertial compensation gyros and vibration isolation cancellation systems, but each requires moving components and imparts a measured amount of burden on the host platform. The complexity, cost and size of current mechanically scanned solutions limits their platform applicability, and restricts the feasibility of deploying optical communications payloads on very compact spacecraft employing critical systems. A high speed, wide angle, non-mechanical solution is therefore desirable. The purpose of this work is to share the development, testing, and demonstration of a breadboard prototype electro-optic (EO) scanned laser-communication link (see Figure 1). This demonstration is a step toward realizing ultra-low Size, Weight and Power (SWaP) SmallSat/MicroSat EO non-mechanical laser beam steering modules for high bandwidth ( greater than Gbps) free-space data links operating in the 1550 nm wavelength bands. The elimination of all moving parts will dramatically reduce SWaP and cost, increase component lifetime and reliability, and simplify the system design of laser communication modules. This paper describes the target mission architectures and requirements (few cubic centimeters of volume, 10's of grams of weight with milliwatts of power) and design of the beam steering module. Laboratory metrology is used to determine the component performance including horizontal and vertical resolution (20urad) as a function of control voltage (see Figure 2), transition time (0.1-1ms), pointing repeatability and optic insertion loss. A test bed system demonstration, including a full laser communications link, is conducted. The capabilities of this new EO beam steerer provide an opportunity to dramatically improve space communications through increased utilization of laser technology on smaller platforms than were previously attainable.
Lexical Frequency Profiles and Zipf's Law
ERIC Educational Resources Information Center
Edwards, Roderick; Collins, Laura
2011-01-01
Laufer and Nation (1995) proposed that the Lexical Frequency Profile (LFP) can estimate the size of a second-language writer's productive vocabulary. Meara (2005) questioned the sensitivity and the reliability of LFPs for estimating vocabulary sizes, based on the results obtained from probabilistic simulations of LFPs. However, the underlying…
Women and Men Together in Recruit Training.
Orme, Geoffrey J; Kehoe, E James
2018-05-01
Although men and women recruits to the Australian Army have trained in mixed-gender platoons since 1995, restrictions on women joining the combat arms were only removed in 2016. As part of a longitudinal study starting with recruit training, this article examined recruit records collected before 2016 with the aims of delineating (1) the relative performance of women versus men in mixed-gender platoons and (2) the relative performance of men in mixed-gender platoons versus all-male platoons. De-identified instructor ratings for 630 females and 4,505 males who completed training between 2011 and 2015 were obtained. Recruits were distributed across 128 platoons (averaging 41.6 members, SD = 8.3) of which 75% contained females, in proportions from 5% to 45%. These analyses were conducted under defense ethics approval DPR-LREP 069-15. Factor analyses revealed that instructor ratings generally loaded onto a single factor, accounting 77.2% of the variance. Consequently, a composite recruit performance score (range 1-5) was computed for 16 of 19 competencies. Analyses of the scores revealed that the distributions of the scores for females and males overlapped considerably. Observed effects were negligible to small in size. The distributions were all centered between 3.0 and 3.5. In mixed-gender platoons, 51% of the females and 52% of the males fell in this band, and 44% of recruits in all-male platoons had scores in this band. The lower three bands (1.0-3.0) contained a slightly greater proportion of females (18%) than males in either mixed-gender platoons (12%) or all-male platoons (12%). Conversely, the upper three bands (3.5-5.0) contained a slightly smaller percentage of females (31%) than males in either mixed-gender platoons (36%) or all-male platoons (44%). Although scores for females were reliably lower than those of males in mixed-gender platoons, χ2 (4) = 16.01, p < 0.01, the effect size (V = 0.07) did not reach the criterion for even a small effect (0.10). For male recruits, those in mixed-gender platoons had scores that were reliably lower than in all-male platoons, χ2 (4) = 48.38, p < 0.001; its effect size (V = 0.11) just exceeded the criterion for a small effect (0.10). Further analyses revealed that male scores had a near-zero correlation (r = -0.033) with the proportion of females in platoons (0-45%). This large-scale secondary analysis of instructor ratings of female and male recruits provides a platform for monitoring the integration of women into the combat arms. The analyses revealed nearly complete overlap in the performance of female versus male recruits. The detected gender-related differences were negligible to small in size. These small differences must be viewed with considerable caution. They may be artifacts of rater bias or other uncontrolled features of the rating system, which was designed for reporting individual recruit performance rather than aggregate analyses. Even with these limitations, this baseline snapshot of recruit performance suggests that, at recruit training, women and men are already working well together, which bodes well for their subsequent integration into the combat arms.
USDA-ARS?s Scientific Manuscript database
Background: The reliability of estimating muscle fiber cross-sectional area (measure of muscle fiber size) and fiber number from only a subset of fibers in rat hindlimb muscle cross-sections has not been systematically evaluated. This study examined the variability in mean estimates of fiber cross-s...
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
75 FR 72664 - System Personnel Training Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
...Under section 215 of the Federal Power Act, the Commission approves two Personnel Performance, Training and Qualifications (PER) Reliability Standards, PER-004-2 (Reliability Coordination--Staffing) and PER-005-1 (System Personnel Training), submitted to the Commission for approval by the North American Electric Reliability Corporation, the Electric Reliability Organization certified by the Commission. The approved Reliability Standards require reliability coordinators, balancing authorities, and transmission operators to establish a training program for their system operators, verify each of their system operators' capability to perform tasks, and provide emergency operations training to every system operator. The Commission also approves NERC's proposal to retire two existing PER Reliability Standards that are replaced by the standards approved in this Final Rule.
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program on statistical fast fracture reliability and monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to insure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).